I. AI Technologies | 12:00pm – 12:02pm
- What is AI?
- How does it work?
II. The Intersection of AI, Privacy, and Security: Key Considerations | 12:02pm – 12:06pm
- Processing beyond what is humanly possible
- Re-identification of data
- Black-box security attacks
III. Privacy Challenges in AI: Data Collection, Usage, and Retention | 12:06pm – 12:10pm
- Employee use of AI tools
- Customer-facing AI
- DPIAs and PIAs (data protection and privacy impact assessments)
- Data collection and processing
- Who is the controller?
IV. Security Risks in AI Systems: Vulnerabilities and Threats | 12:10pm – 12:15pm
V. Privacy-Preserving AI Techniques: An Overview | 12:15pm – 12:30pm
- Differential Privacy: A widely adopted privacy-preservation technique that ensures the output of an AI algorithm does not reveal information specific to any individual in the dataset. By adding calibrated noise to query results, model outputs, or the data itself, differential privacy makes re-identification or inference of any one person's sensitive information provably difficult.
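The noise-addition idea can be sketched with the classic Laplace mechanism (a minimal illustration; the dataset, the counting query, and the epsilon value below are hypothetical):

```python
import random

def laplace_noise(scale: float) -> float:
    # A Laplace(0, scale) draw: the difference of two exponentials
    # is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so noise scale 1/epsilon suffices.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [34, 29, 41, 55, 62, 38, 47]            # hypothetical dataset
noisy = dp_count(ages, lambda a: a >= 40, epsilon=1.0)
# `noisy` is close to the true count (4) but never exact, so no single
# individual's presence can be confidently inferred from the answer.
```

Smaller epsilon means more noise and stronger privacy, at the cost of accuracy.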
- Federated Learning: Enables collaborative AI model training across distributed devices or data sources while keeping the data decentralized. Instead of transferring raw data to a central server, the model is trained locally on each device, and only model updates or aggregated gradients are shared. This minimizes the exposure of sensitive data while still allowing the model to learn from diverse sources.
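A minimal federated-averaging (FedAvg-style) sketch, assuming a toy one-parameter linear model and made-up client datasets:

```python
# Hypothetical sketch: each client takes one local gradient step on its own
# data for the model y = w * x, and the server averages only the parameters.

def local_update(weights, client_data, lr=0.01):
    # One local step on mean-squared error; raw data never leaves the client.
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in client_data) / len(client_data)
    return [w - lr * grad]

def federated_average(client_weights):
    # The server sees only parameter vectors, never the underlying records.
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(client_weights[0]))]

clients = [  # each client's private (x, y) pairs, all drawn from y = 2x
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(4.0, 8.0), (5.0, 10.0)],
]
global_w = [0.0]
for _ in range(50):
    global_w = federated_average([local_update(global_w, d) for d in clients])
# global_w[0] converges toward 2.0 without any client sharing its data
```

Production systems add weighting by dataset size, secure aggregation, and many local steps per round; the privacy point is simply that only parameters cross the network.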
- Secure Multiparty Computation (MPC): A cryptographic technique that enables multiple parties to jointly compute a function while keeping their inputs private. In the context of AI, MPC allows training or inference over secret-shared or encrypted inputs without revealing the underlying data to any participant, so privacy holds even when different entities collaborate on AI tasks.
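Additive secret sharing, the simplest MPC building block, can be illustrated with a joint sum (the field modulus and salary figures are assumptions for this toy):

```python
import random

PRIME = 2_147_483_647  # toy field modulus for the example

def share(secret: int, n_parties: int):
    # Split `secret` into n random shares that sum to it mod PRIME;
    # any subset of fewer than n shares reveals nothing about the secret.
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def secure_sum(all_shares):
    # Each party adds up the one share it received from every input,
    # then the partial sums are combined: only the total is revealed.
    partials = [sum(col) % PRIME for col in zip(*all_shares)]
    return sum(partials) % PRIME

salaries = [52_000, 61_000, 48_500]          # each party's private input
shared = [share(s, 3) for s in salaries]     # distribute shares to 3 parties
total = secure_sum(shared)                   # == 161_500, inputs stay hidden
```

Real MPC protocols extend this to multiplication and comparisons, which is what makes private model training and inference possible.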
- Homomorphic Encryption: A form of encryption that permits computation directly on ciphertexts, so AI models can operate on encrypted data without ever decrypting it. Both the inputs and the result of the computation remain encrypted until the data owner decrypts them, preserving confidentiality throughout the pipeline.
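The core property can be demonstrated with textbook RSA, which happens to be multiplicatively homomorphic (tiny toy parameters, no padding; illustrative only, never use this for real security):

```python
# Toy demo: Enc(m1) * Enc(m2) mod n decrypts to m1 * m2, i.e. the server
# multiplies ciphertexts without ever seeing the plaintexts.

p, q = 61, 53
n = p * q                # 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime with phi
d = pow(e, -1, phi)      # private exponent (Python 3.8+ modular inverse)

def encrypt(m: int) -> int:
    return pow(m, e, n)

def decrypt(c: int) -> int:
    return pow(c, d, n)

c1, c2 = encrypt(6), encrypt(7)
product_cipher = (c1 * c2) % n       # computed on ciphertexts only
decrypted = decrypt(product_cipher)  # == 42
```

Fully homomorphic schemes (e.g., BFV, CKKS) support both addition and multiplication on ciphertexts, which is what AI inference over encrypted data requires.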
- Privacy-Preserving Data Generation: Techniques that create synthetic or anonymized data closely resembling the original dataset without containing identifiable information. Methods such as generative adversarial networks (GANs) or differential-privacy-based mechanisms can generate realistic data while preserving privacy.
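A minimal differential-privacy-based sketch: publish a noisy histogram of the real data, then sample synthetic records from it (the ages, bins, and epsilon below are hypothetical):

```python
import random

def laplace(scale: float) -> float:
    # Laplace noise as the difference of two exponential draws.
    return random.expovariate(1 / scale) - random.expovariate(1 / scale)

def synthetic_from_histogram(values, bins, epsilon=1.0, n_out=100):
    # 1. Histogram the real data (each bin has sensitivity 1).
    counts = [sum(lo <= v < hi for v in values) for lo, hi in bins]
    # 2. Add Laplace noise and clip at zero so counts stay valid.
    noisy = [max(0.0, c + laplace(1 / epsilon)) for c in counts]
    if not any(noisy):
        noisy = [1.0] * len(bins)  # degenerate case: fall back to uniform
    # 3. Sample synthetic records from the noisy distribution only --
    #    the original values are never used after step 1.
    return [random.uniform(*random.choices(bins, weights=noisy)[0])
            for _ in range(n_out)]

ages = [23, 27, 31, 34, 38, 42, 45, 51, 58, 64]   # hypothetical real data
bins = [(20, 30), (30, 40), (40, 50), (50, 60), (60, 70)]
synthetic = synthetic_from_histogram(ages, bins)
```

GAN-based generators follow the same principle at scale: the released artifact (model or samples) is derived from the data through a privacy-limiting step rather than copied from it.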
- Privacy-Preserving Machine Learning Algorithms: Certain algorithms are designed with privacy constraints built into the learning process itself. For example, secure decision trees and secure logistic regression perform classification while keeping the sensitive training data confidential.
VI. Ethical Implications of AI: Privacy, Bias, and Discrimination | 12:30pm – 12:35pm
VII. Regulations and Legal Frameworks: Safeguarding Privacy in the AI Era | 12:35pm – 12:40pm
VIII. Building Trustworthy AI Systems: Privacy and Security by Design | 12:40pm – 12:45pm
IX. Privacy-Preserving Machine Learning: Techniques and Applications | 12:45pm – 12:50pm
X. AI, Privacy, and Healthcare: Balancing Data Insights and Patient Confidentiality | 12:50pm – 12:55pm
XI. The Future of AI | 12:55pm – 1:00pm