Learn Safe Data Visualization | Protecting Sensitive Data
Python Security Best Practices

Safe Data Visualization

When working with sensitive datasets, visualizations can inadvertently reveal confidential information. Risks include displaying personally identifiable information (PII), financial details, or other private data directly in charts or labels. Such exposures might occur through axis labels, legends, or even data points that can be traced back to individuals. To mitigate these risks, always assess what data is being visualized, avoid including direct identifiers, and consider whether aggregation or anonymization is needed before plotting.
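One way to make that assessment systematic is a quick check of column names before plotting. The sketch below is a hypothetical helper (not part of pandas or matplotlib) that flags columns whose names suggest direct identifiers or financial details; the hint list is an illustrative assumption and would need to match your own data.

```python
import pandas as pd

# Hypothetical helper: flag columns whose names suggest sensitive content.
# The hint set is illustrative only; extend it for your own datasets.
SENSITIVE_HINTS = {"name", "email", "ssn", "phone", "address", "salary"}

def flag_sensitive_columns(df):
    """Return the columns whose names match a known sensitive-data hint."""
    return [col for col in df.columns if col.lower() in SENSITIVE_HINTS]

df = pd.DataFrame({
    "Name": ["Alice Smith"],
    "Salary": [90000],
    "Dept": ["HR"]
})

print(flag_sensitive_columns(df))
```

A check like this only catches obvious column names; it is a first pass, not a substitute for reviewing what the chart itself reveals.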

```python
import pandas as pd
import matplotlib.pyplot as plt

# Example DataFrame with sensitive information
df = pd.DataFrame({
    "Name": ["Alice Smith", "Bob Jones", "Carol White"],
    "Salary": [90000, 120000, 110000]
})

# Plotting sensitive data directly
plt.bar(df["Name"], df["Salary"])
plt.title("Employee Salaries")
plt.xlabel("Employee Name")
plt.ylabel("Salary ($)")
plt.show()
```

The code above plots employee salaries using their full names on the x-axis. This approach exposes both the names and financial details of individuals, creating a significant privacy risk. If such a chart is shared internally or externally, anyone viewing it can see exactly how much each named employee earns. This can lead to confidentiality breaches, legal risks, and loss of trust.

```python
import pandas as pd
import matplotlib.pyplot as plt

# DataFrame with sensitive information
df = pd.DataFrame({
    "Department": ["Engineering", "Engineering", "HR"],
    "Salary": [90000, 120000, 110000]
})

# Aggregate salaries by department to anonymize individuals
salary_by_dept = df.groupby("Department")["Salary"].mean()

# Plot aggregated data
plt.bar(salary_by_dept.index, salary_by_dept.values)
plt.title("Average Salary by Department")
plt.xlabel("Department")
plt.ylabel("Average Salary ($)")
plt.show()
```

By aggregating salaries by department, the visualization removes direct identifiers and prevents viewers from linking salary information to specific individuals. This approach protects privacy while still providing useful insights. Aggregation, generalization, or replacing identifiers with pseudonyms are effective ways to reduce the risk of exposing confidential data in visualizations.
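Pseudonymization, mentioned above, can be sketched as follows: the real names are replaced with neutral labels before the chart is drawn. This is an illustrative example, and note its limit: if the mapping from pseudonym to person leaks, or if the group is small enough to guess, the salaries are re-identifiable, so pseudonymization is often combined with aggregation.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({
    "Name": ["Alice Smith", "Bob Jones", "Carol White"],
    "Salary": [90000, 120000, 110000]
})

# Replace each name with a neutral pseudonym before plotting,
# so full names never appear on the chart
df["Pseudonym"] = ["Employee " + str(i + 1) for i in range(len(df))]

plt.bar(df["Pseudonym"], df["Salary"])
plt.title("Salaries (Pseudonymized)")
plt.xlabel("Employee")
plt.ylabel("Salary ($)")
plt.show()
```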

Note
Study More

Data anonymization techniques include generalization, aggregation, pseudonymization, and masking. Explore resources on k-anonymity and differential privacy for advanced methods.
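As a small illustration of two of these techniques, the sketch below masks names down to initials and generalizes exact salaries into coarse bands using `pandas.cut`. The band boundaries are arbitrary choices for this example; real boundaries should be wide enough that each band covers several people.

```python
import pandas as pd

df = pd.DataFrame({
    "Name": ["Alice Smith", "Bob Jones", "Carol White"],
    "Salary": [90000, 120000, 110000]
})

# Masking: keep only initials, so full names never reach the output
df["Masked"] = df["Name"].apply(
    lambda n: ".".join(word[0] for word in n.split()) + "."
)

# Generalization: replace exact salaries with coarse bands
# (example boundaries only; choose bands that cover multiple people)
df["Salary band"] = pd.cut(
    df["Salary"],
    bins=[0, 100000, 150000],
    labels=["<100k", "100k-150k"]
)

print(df[["Masked", "Salary band"]])
```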


What is a risk of visualizing raw sensitive data?



Section 3. Chapter 4

