1
Podcast Title | Host 1 | Host 2 | Podcast URL | Episode Title | Episode Desc | Episode URL | Pub Date | Length | Guest 1 | Guest 2 | AI Notes
2
Decrypted UnscriptedDominique Shelton LeipzigDavid BidermanShowLinkFacial Recognition or Misidentification - Episode 24Massachusetts activist Kade Crockford of the ACLU described to David and Dominique how Kade worked with Boston Celtics players to urge social justice and anti-discrimination in that state’s facial recognition law.ListenJun 2, 202148 minKade CrockfordFacial Recognition Bias and Bans
3
AT -> GO AISundaraparipurnan NarayananShowLinkE1: Our Goal: Managing AI Risks for HumanityIn this episode, Ryan Carrier, founder of ForHumanity, talks about the organization's mission. Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. He started this non-profit in 2016 with a mission of managing AI risks for humanity. Ryan has conducted business in over 55 countries and is a frequent speaker at industry conferences around the world on the topic of audit of AI systems. The ForHumanity mission is one of the largest and most notable crowdsourced efforts on the audit of AI systems. Listen to the insightful podcast series. Visit us at https://forhumanity.center/ to know moreListenMay 25, 20215 minRyan CarrierManaging AI Risk
4
AT -> GO AISundaraparipurnan NarayananShowLinkE2: #EUAIRegs: AI Regulations can help companies to stay accountableAT > GO, a short form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation has proposed a penalty of 6% of global revenues or Euro 30 million for violations. Merve Hickok, a ForHumanity fellow, is the founder of Lighthouse Career Consulting, an advisory consultancy working in the space of AI ethics and Responsible AI. She is an independent consultant, lecturer and speaker on AI ethics and bias & its implications. Merve states that we need to pay attention to the whole supply chain for AI and really ask the ethical questions to think about bias mitigation. These regulations will be a tool to help companies stay accountable. Other countries will see the regulations as a sign that it’s okay: we are starting now, and we can feel more comfortable about actually regulating AI. Visit us at https://forhumanity.center/ to know moreListenMay 25, 202112 minMerve HickokAI & the supply chain, bias mitigation
5
AT -> GO AISundaraparipurnan NarayananShowLinkE3: #EUAIRegs: Starting gun fired. A kickoff for Artificial Intelligence standardsAT > GO, a short form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation has proposed a penalty of 6% of global revenues or Euro 30 million for violations. Adam Lyon Smith is a specialist in software quality, continuous integration, and AI. He has held senior technology roles at several multinationals, delivering large complex projects. He chairs a specialist group in software testing and was also the editor for standards covering bias in AI systems and quality models. He is a ForHumanity Fellow and a Board member of ForHumanity. In this episode Adam talks about his perspectives on EU AI regulations. Adam shares that "standards need to be applied" instead of just looking at regulation. He welcomes the regulation and believes that it will help in a robust and data-driven approach. Read the regulation...this is the start, and not the end. "The starting gun has been fired for the standards committee." Visit us at https://forhumanity.center/ to know moreListenMay 25, 202110 minAdam Lyon SmithSoftware standards for AI systems
6
Privacy AdvisorJedidiah BracyShowLinkExploring emotion-detection technology: A conversation with Ben BlandArtificial intelligence and machine learning technologies are rapidly developing across virtually all sectors of the global economy. One nascent field is empathic technology, which, for better or worse, includes emotion detection. It is estimated that the emotion detection industry could be worth $56 billion by 2024. However, judging a person's emotional state is subjective and raises a host of privacy, fairness, and ethical questions. Ben Bland has worked in the empathic technology space in recent years and now chairs the IEEE's P7014 Working Group to develop a global standard for the ethics of empathic technology. We recently caught up to discuss the pros and cons of the technology and his work with IEEE.ListenMay 14, 202142 minBen BlandEmpathic technology
7
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE27 - Dawn Kristy Cyber Dawn and Cyber SolutionsDebbie Reynolds, "The Data Diva,” talks to Dawn Kristy, CEO of The Cyber
Dawn and VP of Cyber Solutions at CyberArmada. We discuss her passion for
cybersecurity, challenges in AI systems, her work on educating people about
cybersecurity, the big targets and the most vulnerable in cyber attacks,
the need for privacy and cybersecurity with those outside of technology and
legal fields, the impact of the documentary “Coded Bias,” the lack of
diversity in AI and how it impacts results, AI mistakes that may lead to
irreversible consequences, male advocates for women in tech, the need for
more women and girls in STEM, thoughts on the proposed Mexican National
Biometric Registry, the trend toward justification of data collection in
laws and her idea for data privacy in the future.
ListenMay 11, 202141 minDawn KristyChallenges in AI systems
8
Cyberlaw PodcastStewart BakerShowLinkEpisode 361: Computers Will Soon Be Hacking Us. If They Aren’t Already.Bruce Schneier joins us to talk about AI hacking in all its forms. He's particularly interested in ways AI will hack humans, essentially preying on the rough rules of thumb programmed into our wetware – that big-eyed, big-headed little beings are cute and need to have their demands met or that intimate confidences should be reciprocated. AI may not even know what it's doing, since machines are famous for doing what works unless there's a rule against it. Bruce is particularly interested in law-hacking – finding and exploiting unintended consequences buried in the rules in the U.S. Code. If any part of that code will lend itself to AI hacking, Bruce thinks, it's the tax code (insert your favorite tax lawyer joke here). It's a bracing view of a possible near-term future.; In the news, Nick Weaver and I dig into the Colonial Pipeline ransomware attack and what it could mean for more aggressive cybersecurity action in Washington than the Biden administration was contemplating just last week as it was pulling together an executive order that focused heavily on regulating government contractors.; Nate Jones and Nick examine the stalking flap that is casting a cloud over Apple's introduction of AirTags.; Michael Weiner takes us through a quick tour of all the pending U.S. government antitrust lawsuits and investigations against Big Tech. What's striking to me is how much difference there is in the stakes (and perhaps the prospects for success) depending on the company in the dock. Facebook faces a serious challenge but has a lot of defenses. Amazon and Apple are being attacked on profitable but essentially peripheral business lines. And Google is staring at existential lawsuits aimed squarely at its core business.; Nate and I mull over the Russian proposal for a UN cybercrime proposal. The good news is that stopping progress in the UN is usually even easier than stopping legislation in Washington.; Nate and I also puzzle over ambiguous leaks about what DHS wants to do with private firms as it tries to monitor extremist chatter online. My guess: This is mostly about wanting the benefit of anonymity or a fake persona while monitoring public speech.; And then Michael takes us into the battle between Apple and Fortnite over access to the app store without paying the 30% cut demanded by Apple. Michael thinks we've mostly seen the equivalent of trash talk at the weigh-in so far, and the real fight will begin with the economists' testimony this week.  Nick indulges a little trash talk of his own about the claim that Apple’s app review process provides a serious benefit to users, citing among other things the litigation-driven disclosure that Apple never send emails to users of the 125 million buggered apps it found a few years back.; Nick and I try to make sense of stories that federal prosecutors in 2020 sought phone records for three Washington Post journalists as part of an investigation into the publication of classified information that occurred in 2017.; I try to offer something new about the Facebook Oversight Board's decision on the suspension of President Trump’s account. To my mind, a telling and discrediting portion of the opinion reveals that some of the board members thought that international human rights law required more limits on Trump's speech – and they chose to base that on the silly notion that calling the coronavirus a Chinese virus is racist. 
Anyone who has read Nicholas Wade's careful article knows that there's lots of evidence the virus leaked from the Wuhan virology lab. If any virus in the last hundred years deserves to be named for its point of origin, then this is it. Nick disagrees.; Nate previews an ambitious task force plan on tackling ransomware. We'll be having the authors on the podcast soon to dig deeper into its nearly 50 recommendations.; Signal is emerging as Corporate Troll of the Year, if not the decade. Nick explains how, fresh from trolling Cellebrite, Signal took on Facebook by creating a bevy of personalized Instagram ads that take personalization to the Next Level. Years after the fact, the New York Attorney General has caught up with the three firms that generated fake comments opposing the FCC's net neutrality rollback. They'll be paying fines. But I can't help wondering why anyone thinks it's useful to think about proposed rules by counting the number of postcards and emails that shout "yes" or "no" but offer no analysis.; And more!; As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!; The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenMay 9, 202174 minBruce SchneierAI hacking
9
PivotKara SwisherScott GallowayShowLinkFacebook Oversight Board and the Trump ban, trouble for Peloton and a listener question about AIKara and Scott talk about the Facebook Oversight Board's decision to uphold Facebook's ban of Donald Trump's Facebook account... for the next six months. Then they talk about Peloton recalling their treadmills and how the company handled the crisis. In listener mail, we get a question about how Google has dealt with ethics as it relates to artificial intelligence. And in predictions, we have more thoughts on Dogecoin.
Learn more about your ad choices. Visit podcastchoices.com/adchoices
ListenMay 7, 202155 minHow Google has dealt with ethics as it relates to AI
10
That Tech PodLaura MilsteinGabriela SchulteAudio, Video, and AI with Ian CampbellToday on That Tech Pod, Laura and Gabi talk to Ian Campbell. Ian Campbell is President & CEO of iCONECT, a global market leader in producing cutting-edge innovative eDiscovery Document Review software used by leading law firms in some of the world’s most complex legal cases. He is responsible for sales operations and business development, product lifecycle development and partner relations. A key priority is cross-marketing the iCONECT platform to the plaintiff, medical, government, corporate, and police services industries.

Follow That Tech Pod: Twitter-@thattechpod LinkedIn: LinkedIn.com/thattechpod website: thattechpod.com
ListenMay 4, 202130 minIan Campbell iCONECTAudio, Video, and AI
11
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE26 - Pedro Pavón FacebookDebbie Reynolds, "The Data Diva,” talks to Pedro Pavón, who works on
Privacy, Fairness, and Data Policy, at Facebook. We discuss being authentic
and experiences of being a person of color in the data privacy field, the
vital human elements of privacy as it is thought of differently all over
the world, the Netflix documentary Coded Bias featuring Dr. Joy Buolamwini,
the interplay of privacy and technology, bias in AI and technology, and how
this relates to data privacy, facial recognition, and surveillance, the
problem of using statistics to measure AI success and human harm, AI and
decisions about life and liberty, the impact of using facial recognition in
policing and education, the problem of applying technology made for one
purpose to a different purpose, the need to constantly monitor and correct
AI systems, the challenge of information silos within an organization with
data privacy, and his wish for data privacy in the future.
ListenMay 4, 202142 minPedro PavónBias in AI, AI and decisions about life and liberty
12
Cyberlaw PodcastStewart BakerShowLinkEpisode 360: The Robot Apocalypse and YouOur interview is with Kevin Roose, author of Futureproof: 9 Rules for Humans in the Age of Automation, which debunks most of the comforting stories we use to anaesthetize ourselves to the danger that artificial intelligence and digitization pose to our jobs. Luckily, he also offers some practical and very personal ideas for how to avoid being caught in the oncoming robot apocalypse.; In the news roundup, Dmitri Alperovitch and I take a few moments to honor Dan Kaminsky, an extraordinary internet security and even more extraordinarily decent man. He died too young, at 42, as Nicole Perlroth demonstrates in one of her career-best articles.; Maury Shenk and Mark MacCarthy lay out the EU's plan to charge Apple with anti-competitive behaviour in running its app store.; Under regulation-friendly EU competition law, unlike the more austere U.S. version, it sure looks as though Apple is going to have trouble escaping unscathed.; Mark and I duke it out over Gov. DeSantis's Florida bill on content moderation reform.; We agree that it will be challenged as a violation of the First Amendment and as preempted by federal section 230. Mark thinks it will fail that test. I don’t, especially if the challenge ends up in the Supreme Court, where Justice Thomas at least has already put out the "Welcome" mat.; Dmitri and I puzzle over the statement by top White House cyber official Anne Neuberger that the U.S. reprisals against Russia are so far not enough to deter further cyberattacks. We decide it's a "Kinsley gaffe" – where a top official inadvertently utters an inconvenient truth.; This Week in Information Operations: Maury explains that China may be hyping America’s racial tensions not as a tactic to divide us but simply because it’s an irresistible comeback to U.S. criticisms of Chinese treatment of ethnic minorities. And Dmitri explains why we shouldn’t be surprised at Russia's integrated use of hacking and propaganda. The real question is why the US has been so bad at the same work.; In shorter stories: Mark covers the slooow rollout of an EU law forcing one-hour takedowns of terrorist content; Dmitri tells us about the evolution of ransomware into full-service doxtortion, as sensitive files of the D.C. Police Department are leaked online; Dmitri also notes the inevitability of more mobile phone adtech tracking scandals, such as the compromise of US military operations; Maury and I discuss the extent to which China's internet giants find themselves competing, not for consumers, but for government favor, as China uses antitrust law to cement its control of the tech sector; Finally, Dmitri and I unpack the latest delay in DOD's effort to achieve cybersecurity maturity through regulatory-style compliance, an effort Dmitri believes is doomed; And more!; As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!; The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenMay 3, 202169 minKevin RooseRobot Apocalypse
13
BarCodeChris GlandenShowLinkThe Flying Fish Theory with Rohan LightA Flying Fish will leap out of the water and use its winglike pectoral fins to glide over the surface. Then, once below the surface, it is out of sight and flows amongst the others in different directions until it appears above water again.

Rohan Light is an expert on governance, strategy and risk capability throughout the data, evidence and decision management value chain. He is also well versed in Artificial Intelligence, trusted data use and platform governance. He, along with special co-host Mike Elkins, joins me to discuss his Flying Fish Theory of Change Model, AI Ethics, and more.

Tony the Bartender assigns BoozeBOT to serve up a "Mojito" that's off the hook.

Support the show (https://paypal.me/thebarcodepodcast)
ListenApr 30, 202147 minRohan LightFlying Fish Theory of Change Model, AI Ethics
14
CaveatDave BittnerBen YelinShowLinkCyber insurance: still a work in progress.Our guests this week are Paul Moura and David Navetta from Cooley, who share their thoughts on the importance of cyber insurance with Dave. Ben has the story of a lawsuit from some WeChat users, and Dave wonders if the FTC is cracking down on AI.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

California lawsuit against Chinese tech giant raises a thorny question: Can the plaintiffs remain anonymous?

Aiming for truth, fairness, and equity in your company’s use of AI

Twitter post about FTC blog post

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenApr 28, 202138 minPaul Moura, CooleyDavid Navetta, CooleyFTC crack down on AI?
15
Serious PrivacyPaul BreitbarthK RoyalShowLinkOh what a week in privacy with Paul and KIn this episode of Serious Privacy, Paul Breitbarth and K Royal tackle the slew of developments (or non-developments) in privacy around the world. What a week in privacy! We had the proposal for AI Regulation published in the EU, the UK adequacy opinion, and of course, several privacy bills in states around the US, and the United States Supreme Court decision in AMG Capital Management, LLC et al. v. Federal Trade Commission, decided the morning of the episode recording.

The AI proposal has garnered much conversation, such as in this article by Politico and the summary by Dr. Gabriela Zanfir-Fortuna of the Future of Privacy Forum. Paul and K discuss various aspects of the proposal including a few unexpected recommendations, or lack thereof. However, the UK adequacy opinion was not as surprising, but quite interesting.

Once we turned to the US and state privacy bills, the end was near for several key states, and by the time this episode is live, we know that the Washington bill is dead once again. However, there remains hope for a couple of others given when their sessions end, such as Florida - which we should know about in a few days, as its bill is scheduled for its third reading at this time. About 15 states still had bills at the time (see the TrustArc webinar update on state privacy bills), and of course, the next legislative season may see more change.

The FTC decision by the USSC was top of mind given its impact on FTC authority, which also led to discussions of the federal privacy bill by Rep. DelBene which proposes quite an expansion of FTC authority. Please see this statement released by the FTC on the matter. This case was reminiscent of a prior case with LabMD (yes, different enforcement actions, but still speaking to FTC authority).

Join us as we discuss these developments and more in this episode of Serious Privacy. As always, if you have comments or feedback, please contact us at seriousprivacy@trustarc.com.
ListenApr 28, 202138 minEU proposal for AI Regulation published
16
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE25 - Carissa Véliz Associate Professor at University of Oxford AuthorDebbie Reynolds "The Data Diva" talks to Carissa Véliz, Associate
Professor, Faculty of Philosophy, Institute for Ethics in AI and Tutorial
Fellow, Hertford College, University of Oxford, and author of "Privacy Is
Power.” We discuss her concept of data havens, the success of her book
“Privacy Is Power,” how Brexit affected her views on privacy, data access
creating new caste systems, individual steps to protect one’s privacy, the
need for agency over one’s data, the danger of inferences in data use,
Cambridge Analytica’s impact on privacy discussions, the effects of Covid
on data privacy, the differences between the US and the EU, ethics and AI, bias in
AI, facial recognition and biometrics and her idea for data privacy in the
future.
ListenApr 27, 202138 minCarissa VélizEthics and AI, bias
17
Law BytesMichael GeistEpisode 85: Céline Castets-Renard on Europe's Plan to Regulate Artificial IntelligenceLast week, the European Commission launched what promises to be a global, multi-year debate on the regulation of artificial intelligence. Several years in development, the proposed rules would ban some uses of AI, regulate others, and establish significant penalties for those that fail to abide by the rules. European leaders believe the initiative will place them at the forefront of AI, borrowing from the data protection framework of seeking to export EU solutions to the rest of the world. Céline Castets-Renard is a colleague at the University of Ottawa, where she holds the University Research Chair on Accountable Artificial Intelligence in a Global World. She joins the Law Bytes podcast to discuss the EU plans, their implications for Canadian AI policy, and the road ahead for the regulation of artificial intelligence.ListenApr 26, 202132 minCéline Castets-Renard, University of OttawaRegulation of artificial intelligence
18
Masters of PrivacySergio MaldonadoShowLinkNewsroom S16: Artificial Intelligence takes center stage and Facebook storms into podcastsThe official draft of the EU Regulation on the use of Artificial Intelligence has been published, alternatives to third-party cookies keep emerging (while fraud in programmatic advertising continues unabated), and Apple gains an advantage from its defense of privacy.

Also: Facebook launches a battery of podcasting solutions, “Social Audio,” and a wall of recorded messages. The “all against all” battle for the future of media begins.

With Cris Moro and Sergio Maldonado
ListenApr 23, 202118 minOfficial draft of EU Reg for the use of AI published
19
GDPR Weekly ShowKeith BuddenShowLinkGDPR Weekly Show Episode 140 :- EU-UK Data Adequacy, Covid-19, DPC Facebook, Estate Agents, Uber, Android, iOS, Legal Obligation, Artificial Intelligence, Manhunt, SpotlessComing up in this week's episode: EU-UK Data Adequacy decision moves one step closer,
GDPR hindering sharing of Covid-19 research data,
Irish DPC issues statement re Facebook,
Estate Agents' virtual property viewings - a data breach waiting to happen?
Uber dismissal algorithm challenged in court in Amsterdam and London,
Android and iOS issue guidance on player data collection,
Does Legal Obligation trump user rights under GDPR?
Artificial Intelligence policies developing in line with GDPR,
Manhunt data breach,
Spotless data breach in New Zealand
ListenApr 17, 202130 minAI policies developing in line with GDPR
20
FIT4PRIVACYPunit BhatiaShowLink029 AI and Privacy: How To Find Balance - A Conversation between Eline Chivot and Punit Bhatia (Full Episode)In this episode, Punit is joined by Eline Chivot and they have a conversation about AI & Privacy. Specifically, they both discuss the book "AI & Privacy: How to find balance" that they have co-authored.

_About Eline Chivot_

French by origin, and a political scientist by training, Eline Chivot is passionate about policy and politics. She has expertise in the technology sector and digital policy. She covers issues related to the data economy which are as diverse as content liability, moderation, disinformation, data protection and privacy, emerging technologies like AI, competition policy, and antitrust. She consolidates and fosters knowledge, analyses, and insights for, with, and within organizations regarding the public policies that impact these topics; she advises them on the perspectives that co-exist in the digital economic sphere, and on how policies and the legal power interact with economic and industrial realities. Her experience also includes strategic consulting and leading research projects in international relations, defense, security, and economic policy.

Eline enjoys public speaking and writing. She is the author of hundreds of publications ranging from reports, filings, op-eds, and other short articles. She was published in the Financial Times, quoted in publications such as the New York Times, the Washington Post, and Politico, gave interviews for podcasts and magazines, and appeared in televised events such as the international TV channel France24.

Throughout her years of experience especially in Brussels, Eline has gained a strong knowledge of EU institutions and its actors and developed various relations within and beyond the so-called "Brussels bubble," with policy officials, lawmakers, industry representatives, NGOs, and others. She finds drive, motivation, support, and inspiration from this community every day.

_About Punit Bhatia_

Punit Bhatia is a leading privacy expert who has worked with professionals in over 30 countries. Punit guides business and privacy leaders on GDPR-based privacy compliance through online and in-person training and consulting.

Punit is the author of many books including "Be Ready for GDPR" which is rated as one of the best GDPR Books of All Time by Book Authority. Punit is also the founder and host of the FIT4PRIVACY Podcast which includes conversations with industry influencers and shares opinions on privacy matters. --- Send in a voice message: https://anchor.fm/fit4privacy/message
ListenApr 14, 202145 minEline ChivotAI and Privacy: How To Find Balance
21
Decrypted UnscriptedDominique Shelton LeipzigDavid BidermanShowLinkOnline Privacy and Eliminating Child Sexual Abuse Materials From the Internet - Episode 18Join us for a gripping conversation with Julie Cordua, chief executive officer of Thorn, an anti-human trafficking organization founded by Ashton Kutcher and Demi Moore. Thorn's mission is to develop technology to defend children from abuse online and to remove all child sexual abuse material from the internet. Thorn's technology has helped identify more than 14,000 child victims of abuse and has reduced investigative time by more than 65%.

Julie will discuss how Thorn's technology has clashed with European privacy officials, who have objected to the use of facial recognition and other technology utilized by Thorn to identify and rescue victims of child sexual abuse and pornography. Also, see Julie's TED Talk, "How we can eliminate child sexual abuse material from the internet":
https://www.ted.com/speakers/julie_cordua
You can find more about Thorn at https://www.thorn.org/
ListenApr 13, 202146 minJulie Cordua, ThornFacial recog and anti-human trafficking
22
The Data DropShowLinkData Drop News for Thursday, April 8, 2021Go to our episode post to subscribe to our podcast and get all the story links: https://www.datacollaboration.org/post/the-data-drop-news-for-thursday-april-8-2021

In this episode:

- EU vaccine passport data framework

- Calling for ban on facial recognition

- "Cybervetting" privacy risks

- Watching employees WFH?

- Plus this week's drop shots

The Data Drop News is a production of the Data Collaboration Alliance, a nonprofit working to advance data ownership through pilot projects in sustainability, healthcare, education, and social inclusion, as well as free training in the data collaboration methodology. Visit datacollaboration.org.
ListenApr 8, 20213 minCalling for ban on facial recog
23
FIT4PRIVACYPunit BhatiaShowLink028 The FIT4PRIVACY Podcast with Eline Chivot and Punit Bhatia (Full Episode) - AI & PrivacyIn this episode, Punit is joined by Eline Chivot and they have a conversation about AI & Privacy. Specifically, they touch upon the overarching challenges when it comes to AI & privacy, the challenges that business leaders and companies have to deal with, why readers will enjoy their book AI & Privacy and some of the points to pay attention to when working.

_About Eline Chivot_

French by origin, and a political scientist by training, Eline Chivot is passionate about policy and politics. She has expertise in the technology sector and digital policy. She covers issues related to the data economy which are as diverse as content liability, moderation, disinformation, data protection and privacy, emerging technologies like AI, competition policy, and antitrust. She consolidates and fosters knowledge, analyses, and insights for, with, and within organizations regarding the public policies that impact these topics; she advises them on the perspectives that co-exist in the digital economic sphere, and on how policies and the legal power interact with economic and industrial realities. Her experience also includes strategic consulting and leading research projects in international relations, defense, security, and economic policy.

Eline enjoys public speaking and writing. She is the author of hundreds of publications ranging from reports, filings, op-eds, and other short articles. She was published in the Financial Times, quoted in publications such as the New York Times, the Washington Post, and Politico, gave interviews for podcasts and magazines, and appeared in televised events such as the international TV channel France24.

Throughout her years of experience especially in Brussels, Eline has gained a strong knowledge of EU institutions and its actors and developed various relations within and beyond the so-called "Brussels bubble," with policy officials, lawmakers, industry representatives, NGOs, and others. She finds drive, motivation, support, and inspiration from this community every day.

_About Punit Bhatia_

Punit Bhatia is a leading privacy expert who has worked with professionals in over 30 countries. Punit guides business and privacy leaders on GDPR-based privacy compliance through online and in-person training and consulting.

Punit is the author of many books including "Be Ready for GDPR" which is rated as one of the best GDPR Books of All Time by Book Authority. Punit is also the founder and host of the FIT4PRIVACY Podcast which includes conversations with industry influencers and shares opinions on privacy matters. --- Send in a voice message: https://anchor.fm/fit4privacy/message
ListenApr 7, 202127 minEline ChivotAI & Privacy
24
Cyberlaw PodcastStewart BakerShowLinkEpisode 356: Who Minds the GapOur interview is with Kim Zetter, author of the best analysis to date of the weird messaging from NSA and Cyber Command about the domestic "blind spot" or "gap" in their cybersecurity surveillance. I ask Kim whether this is a prelude to new NSA domestic surveillance authorities (definitely not, at least under this administration), why the gap can't be filled with the broad emergency authorities for FISA and criminal intercepts (they don't fit, quite), and how the gap is being exploited by Russian (and soon other) cyberattackers. My most creative contribution: maybe AWS, where most of the domestic machines are being spun up, would trade faster cooperation in targeting such machines for a break on the know-your-customer rules they may otherwise have to comply with. And if you haven't subscribed to Kim's (still free for now) substack newsletter, you're missing out.; In the news roundup, we give a lick and a promise to today's Supreme Court decision in the fight between Oracle and Google over API copyrights, but Mark MacCarthy takes us deep on the Supreme Court's decision cutting the heart out of most class actions for robocalling. Echoing Congressional Dems, Mark thinks the Court's decision is too narrow. I think it's exactly right. We both expect Congress to revisit the law soon.; Nick Weaver and I explore the fuss over vaccination passports and how Silicon Valley can help. Considering what a debacle the Google and Apple effort on tracing turned into, with a lot of help from privacy zealots, I'm pleased that Nick and I agree that this is a tempest in a teapot. Paper vax records are likely to be just fine most of the time. That won't prevent privacy advocates from trying to set unrealistic and unnecessary standards for any electronic vax records system, more or less guaranteeing that it will fall of its own weight. Speaking of unrealistic privacy advocates, Charles-Albert Helleputte explains why the much-touted GDPR privacy regime is grinding to a near halt as it moves from theory to practice. Needless to say, I am not surprised.; Mark and I scratch the surface of Facebook's Fairness Flow for policing AI bias. Like anything Facebook does, it's attracted heavy criticism from the left, but Mark thinks it's a useful, if limited, tool for spotting bias in machine learning algorithms. I'm half inclined to agree, but I am deeply suspicious of the confession in one "model card" that the designers of an algorithm for identifying toxic speech seem to have juiced their real-life data with what they call "synthetic data" because "real data often has disproportionate amounts of toxicity directed at specific groups." That sure sounds as though the algorithm relying on real data wasn't politically correct, so the researchers just made up data that fit their ideology and pretended it was real – an appalling step for scientists to take with little notice. I welcome informed contradiction.; Nick explains why there's no serious privacy problem with the IRS subpoena to Circle, asking for the names of everyone who has more than $20 thousand in cryptocurrency transactions. Short answer: everybody who doesn't deal in cryptocurrency already has their transactions reported to the IRS without a subpoena.; Charles-Albert and I note that the EU is on the verge of finding that South Korea's data protection standards are "adequate" by EU standards. 
The lesson for the US and China is simple: The Europeans aren't looking for compliance; they're looking for assurances of compliance. As Fleetwood Mac once sang, "Tell me lies, tell me sweet little lies."; Mark and I note the extreme enthusiasm with which the FBI used every high-tech tool to identify even people who simply trespassed in the Capitol on January 6. The tech is impressive, but we suspect a backlash is coming. Nick weighs in to tell me I'm wrong when I argue that we didn't see these tools used this way against ANTIFA's 2020 rioters.; Nick thinks we haven't paid enough attention to the Accellion breach, and I argue that companies are getting a little too comfortable with aggressive lawyering of their public messages after a breach. One result is likely to be a new executive order about breach notification (and other cybersecurity obligations) for government contractors, I predict.; And Charles and I talk about the UK's plan to take another bite out of end-to-end encryption services, essentially requiring them to show they can still protect kids from sexual exploitation without actually reading the texts and pictures they receive.; Good luck with that!; And more.; The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenApr 5, 202169 minKim ZetterFacebook's Fairness Flow for policing AI bias
25
Privacy News OnlinePrivacy News Online: A Surveillance World: monitored by smartphone apps, AI webcams & advertisingPrivacy news from Private Internet Access VPN: https://privateinternetaccess.com/PNO36 For A Limited Time Only: Sign up now and 2 months FREE!

To read more about the privacy news stories in this episode, please go to: https://www.privateinternetaccess.com/blog/privacy-news-online-weekly-review-april-1-2021/

Watch this episode on YouTube: https://www.youtube.com/watch?v=KkAvDmL8McA

Cybersecurity News with Josh Long is sponsored by Intego. Save on Intego's world-class protection software for Mac/Windows: https://www.intego.com/lp/route-privacynews/?channel=privacynews

Privacy news stories in this episode:

* Are smartphone apps that constantly monitor a person's movements and actions the future of parole – and parenting?
* Everything you wanted to know about "surveillance advertising" – and how to avoid it
* Amazon sees record government demand for user data in 2020
* 'Missing from desk': AI webcam raises remote surveillance concerns
* "System Update" app for Android is Malware in Disguise
* "Fleeceware" on the rise in Google Play and iOS App Stores
* How to manage iTunes Store and App Store subscriptions
ListenApr 5, 20214 minAI webcam raises remote surveillance concerns
26
EDRM Global Podcast Network
Mary MackKaylee WalstadShowLinkIllumination Zone: Mary Mack & Kaylee Walstad Sit Down with Relativity's Olivia Mulvey, Trish Gleason and Dr. Rebecca BurweiMary & Kaylee talk with Relativity's Olivia Mulvey, Trish Gleason and Dr. Rebecca Burwei about Relativity's use of AI to inform COVID-19 researchers and the creative use of tools currently and soon to be available in the Relativity platform, including addressing bias in the data set through near-deduplication, abstracts and more.ListenApr 1, 202119 minOlivia Mulvey, RelativityTrish Gleason, RelativityUse of AI to inform COVID-19 researchers
27
Machine EthicsBen ByfordShowLink54. The business of AI ethics with Josie YoungThis episode we're chatting with the amazing Josie Young on making businesses more efficient, how the AI ethics landscape changed over the last 5 years, ethics roles and collaborations, feminist AI and chatbots, responsible AI at Microsoft, ethics push back from teams and selling in AI ethics, disinformation's risk to democracy and more...ListenMar 27, 202151 minJosie YoungAI ethics landscape, feminist AI, responsible AI
28
Privacy News OnlinePrivacy News Online: Algorithms vs Privacy, Clearview AI lawsuit, Google tracking lawsuit & morePrivacy news from Private Internet Access VPN: https://privateinternetaccess.com/PNO34
For A Limited Time Only: Sign up now and 2 months FREE!

To read more about the privacy news stories in the video, please go to: https://www.privateinternetaccess.com/blog/privacy-news-online-weekly-review-march-19-2021/

Cybersecurity News with Josh Long is sponsored by Intego. Save on Intego's world-class protection software for Mac/Windows:
https://www.intego.com/lp/route-privacynews/?channel=privacynews

Privacy news stories in this episode:

* Algorithmic bias: how automated decision making has become an assault on privacy
* A federal judge ruled that Google must face $5B lawsuit over tracking people
* 52% of apps share your data – here are the worst offenders
* Lawsuit Challenges Clearview's Use of Scraped Social Media Images for Facial Recognition
* ... usage data to advertisers unless you opt out
ListenMar 26, 20215 minClearview AI lawsuit
29
Data Privacy PodcastThomas McNamara
ShowLinkLeaders In Privacy Tech
To continue The Data Privacy Podcast's Leaders in Privacy Tech series, Tom is joined by Jeremy McGee, the VP of Solutions Architecture at LeapYear Technologies. LeapYear is the world's first platform for differentially private reporting, analytics, and machine learning, embedding mathematically proven privacy into every statistic, computation, and model.
ListenMar 26, 202141 min
Jeremy McGee, LeapYear Technologies
LeapYear Technologies
30
Law and CandorBill MarianoRob HellewellShowLinkAI and Analytics for Corporations: Common Use CasesLaw & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss the growing use of emotion recognition tech in China and how this could lead to some challenges in the legal space down the road.

In this episode, Bill and Rob are joined by Moira Errick of Bausch Health. The three of them discuss common AI and analytics use cases for corporations via the following questions:

* What types of AI and analytics tools are you using and for what use cases?
* What is ICR and how have you been leveraging it internally?
* What additional use cases are you hoping to use AI and analytics for in the future?
* What are some best practices to keep in mind when leveraging AI and analytics tools?
* What recommendations do you have for those trying to get their team on board?
* What advice would you give to other women in the ediscovery industry looking to move their careers forward?

In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.

Related Links

* Blog Post: AI and Analytics: New Ways to Guard Personal Information
* Blog Post:Building Your Case for Cutting-Edge AI and Analytics in Five Easy Steps
* Blog Post: Advanced Analytics – The Key to Mitigating Big Data Risks
* Podcast Episode:The Future is Now – AI and Analytics are Here to Stay

About Law & Candor

Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, visit the podcast homepage.
ListenMar 24, 202125 minMoira Errick, Bausch HealthCommon AI use cases for corporations
31
Law and CandorBill MarianoRob HellewellShowLinkKeeping Up with M365 Software UpdatesIn the fourth episode of the seventh season, co-hosts Bill Mariano and Rob Hellewell discuss why diversity in AI is important and how this could impact legal outcomes and decisions.

Next, they introduce their guest speaker, Jamie Brown of Lighthouse, who uncovers key strategies to keep up with the constant flow of Microsoft 365 software updates. Jamie answers the following questions (and more) in this episode:

* What are some of the common challenges associated with M365's rapid software updates?
* How do these constant updates lead to compliance risks?
* What are some best practices for overcoming these challenges?
* What recommendations would you pass along to those who are experiencing these challenges?
* What advice would you give to other women in the ediscovery industry looking to move their careers forward?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, learn more about our speakers and subscribe on the podcast homepage, rate us on Apple and Stitcher, and join in the conversation on Twitter.

Related Links

* Blog Post: Key Compliance & Information Governance Considerations As You Adopt Microsoft Teams
* Blog Post: Achieving eDiscovery Compliance Amidst the Ever-Evolving Cloud Landscape
* Press Release: Lighthouse Launches CloudCompass for Microsoft 365 at Legaltech New York

About Law & Candor

Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, visit the podcast homepage.
ListenMar 24, 202119 minJamie Brown, LighthouseWhy diversity in AI is important, impact on legal outcomes
32
Machine EthicsBen ByfordShowLink53. Comedy and AI with Anthony JeannotA laid back episode of the podcast where Anthony and I chat about Netflix and recommender systems, finding comedy in AI, AI written movies and theatre, human content moderation, bringing an AI Ben back from the dead, constructing jokes recursively and much more...ListenMar 22, 202145 minAnthony JeannotComedy and AI
33
Untangling the WebNoshir ContractorShowLinkWeb Science Challenges in India with Ravindran BalaramanRavindran (Ravi) Balaraman is our guest for this episode (23 min. long). He is the Mindtree faculty fellow and a professor in the Department of Computer Science and Engineering at the Indian Institute of Technology Madras. And he also heads the Robert Bosch Centre for Data Science and Artificial Intelligence at IIT Madras, which is the leading interdisciplinary AI research center in India and India's first lab to join the Web Science Trust Network of laboratories from around the world. His research is pushing the boundaries of reinforcement learning, social network analysis, and data text mining.

In this episode, Ravi explains the unique challenges that India faces in web science, including how displaced migrants feel alienated from the Web. He also explains some solutions, like increasing access to devices with local languages programmed in. And Ravi talks about the importance of AI and how it can help or hurt in the pursuit of different social goods, as well as just how explainable AI could get. To hear about all this and more, listen to the full episode!

Click here for this episode's transcript, and click here for this episode's show notes.
ListenMar 19, 202123 minRavindran Balaraman, MindtreeAI can help/hurt in pursuit of social goods, explainable AI
34
EDRM Global Podcast Network
Dave CohenShowLinkIllumination Zone: Fireside Chat, Dave Cohen with Kelly AthertonDave Cohen, Chair of EDRM's Project Trustees and partner at Reed Smith, talks with Kelly Atherton, Consultant at Driven, about the work of the EDRM AI Project team that she stewarded.ListenMar 18, 202124 minKelly Atherton, DrivenEDRM AI Project team
35
Serious PrivacyPaul BreitbarthK RoyalShowLinkFDIC's Chief Innovation Officer: Paper Clips and PbD (Sultan Meghji)In this episode of Serious Privacy, K Royal and Paul Breitbarth host the new and first Chief Innovation Officer of the Federal Deposit Insurance Corporation (FDIC) in the US, Sultan Meghji. Sultan has a rich history as co-founder of Neocova, which specializes in AI software for financial institutions, an adjunct professor at Washington University's Olin Business School, a scholar of the Carnegie Endowment for International Peace, and an alum of the FBI Phoenix Citizens Academy - where he met K over a decade ago. But as the first Chief Innovation Officer, his initial focus is on a basic question - what is his job description?

It is clear that Sultan's expertise flows across a broad span of what Serious Privacy's listeners are interested in, such as security and privacy by design, technological innovation in financial services, and how the US fits into the global market. Given that Sultan is new to the role, he does not yet have any major policy initiatives to announce, but he did provide a teaser on some tech innovation which we should see come out in the near future and which fulfills the FDIC's desire to advance financial technology at a rapid pace of adoption.

Join us as we discuss how the financial market has changed in the past few decades with artificial intelligence, cyberevents, and the ripples of the interconnectedness of the market and technology. We also peek into what the next few decades may look like, though given the new normal that we are in, it is difficult to predict any certain future. We also discuss ransomware as a service, engineering resilience, and the advantages of liberal democracies. Sultan did emphasize that he wants to hear from the public on ideas for or problems with financial services and technology, and he can be reached at innovation@fdic.gov.

As always, if you have comments or feedback, please contact us at seriousprivacy@trustarc.com.
ListenMar 16, 202136 minSultan Meghji, FDICFinancial markets and AI
36
EDRM Global Podcast Network
Mary MackKaylee WalstadShowLinkIllumination Zone: Mary Mack & Kaylee Walstad Sit Down with Wendell Jisa and George SochaMary and Kaylee talk with Wendell Jisa, CEO, and George Socha, Sr. VP of Brand Awareness, of EDRM partner Reveal, on the occasion of their merger with leading AI company Brainspace.ListenMar 15, 202117 minWendell Jisa, RevealGeorge Socha, RevealAI Brainspace
37
Cyberwire DailyShowLinkKeeping data confidential with fully homomorphic encryption. [Research Saturday]Guest Dr. Rosario Cammarota from Intel Labs joins us to discuss confidential computing. Confidential computing provides a secure platform for multiple parties to combine, analyze and learn from sensitive data without exposing their data or machine learning algorithms to the other party. This technique goes by several names — multiparty computing, federated learning and privacy-preserving analytics, among them. Confidential computing can enable this type of collaboration while preserving privacy and regulatory compliance.
The research and supporting documents can be found here:

Intel Labs Day 2020: Confidential Computing

Confidential Computing Presentation Slides

Demo video
ListenMar 13, 202125 minRosario Cammarota, Intel LabsML & homomorphic encryption
38
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE18 - Kenya Dixon - Information Governance U.S. Government and Private SectorDebbie Reynolds, "The Data Diva," talks to Kenya Dixon, General Counsel and
Chief Operating Officer at Empire Technologies Risk Management Group, who
has also served as the Director of Information Governance for the White
House and Executive Office of the President. We discuss government
regulations related to managing data of U.S. citizens, the impact of facial
recognition use by the government on individuals, safeguards for using
facial recognition as evidence, the human side of cooperation from
individuals within organizations to be successful in information
governance, how data privacy professionals can be more successful in
obtaining information from individuals within organizations, insights on
FOIA (Freedom of Information Act) data formulas and data privacy
challenges, government vs. consumer data collection and data dossiers,
perspectives about the U.S. government's Privacy Act of 1974, human vs.
consumer-based laws, the notion of privacy vs. security, the perspective of
privacy in the U.S., data collection preservation and The U.S. Presidential
Records Act of 1978, and her wish for data privacy in the future.
ListenMar 8, 202140 minKenya DixonImpact: Gov use on public, Safeguards: facial recog evidence
39
Cyberlaw PodcastStewart BakerShowLinkEpisode 352: A Lot of Cybersecurity Measures that Don't Work, and a Few that MightWe're mostly back to our cybersecurity roots in this episode, for good reasons and bad. The worst of the bad reasons is a new set of zero-day vulnerabilities in Microsoft’s Exchange servers. They've been patched, Bruce Schneier tells us, but that seems to have inspired the Chinese government hackers to switch their campaign from Stealth to Promiscuous Mode. Anyone who hasn't already installed the Microsoft patch is at risk of being compromised today for exploitation tomorrow.; Nick Weaver and Dmitri Alperovitch weigh in on the scope of the disaster and later contribute to our discussion of what to do about our ongoing cyberinsecurity. We're long on things that don't work. Bruce has pointed out that the market for software products, unfortunately, makes it entirely rational for industry to skimp on security while milking a product's waning sales. Voluntary information sharing has failed, Dmitri notes. In fact, as OODA Loop reported in a devastating chart, information sharing is one of half a dozen standard recommendations made in the last dozen commission reports on cybersecurity. They either haven't been implemented or they don't work.; Dmitri is hardly an armchair quarterback on cybersecurity policy. He's putting his money where his mouth is, in the form of the Silverado Policy Accelerator, which we discuss during the interview segment of the episode. Silverado is focused on moving the cybersecurity policy debate forward in tangible, sometimes incremental, ways. It will be seeking new policy ideas in cybersecurity, trade and the environment, and industrial policy. (The unifying theme is the challenge to the US posed by the rise of China and the inadequacy of our past response to that challenge.) But ideas are easy; implementation is hard. Dmitri expects Silverado to focus its time and resources both on identifying novel policy ideas and on ensuring those ideas are transformed into concrete outcomes.; Whether artificial intelligence would benefit from some strategic decoupling sparks a debate between me, Nick, Jane Bambauer, and Bruce, inspired by the final AI commission report. We shift from that to China's version of industrial policy, which seems to reflect Chinese politics in its enthusiasm not just for AI and chips but also for keeping old leaders alive longer.; Jane and I check in on the debate over social media speech suppression, including the latest developments in the Facebook Oversight Board and the unusual bedfellows that the issue has inspired. I mock Google for YouTube's noblesse oblige promise that it will stop suppressing President Trump's speech when it no longer sees a threat of violence on the Right. And then I mock it again for its silly refusal to return search results for "BlueAnon"—the Right's label for the Left's wackier conspiracy theories.; In quick hits, Bruce and Dmitri explore a recent Atlantic Council report on hacked access as a service and what to do about it. Bruce thinks the problem (usually associated with NSO) is real and the report's recommendations plausible. Dmitri points out that trying to stamp out a trade in zero days is looking at the wrong part of the problem, since reverse engineering patches is the source of most successful attacks, not zero days. 
Speaking of NSO, Nick reminds us of the rumors that they have been under criminal investigation and that the investigation has been revived recently.; Jane notes that Virginia has become the second state with a consumer data protection law, and one that resembles California's CCPA.; Jane also notes the Israeli Supreme Court decision ending (sort of) Shin Bet's use of cellphone data for coronavirus contact tracing. Ironically, it turns out to have been more effective than most implementations of the Gapple privacy-crippled app.; Bruce and Dmitri celebrate the hacking of three Russian cybercrime forums for the rich array of identity clues the doxxing is likely to make available to researchers like Bellingcat (whose founder will be our interview guest on Episode 353 of the Cyberlaw Podcast). And more! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenMar 8, 202172 minWould AI benefit from some strategic decoupling
40
Decrypted UnscriptedDominique Shelton LeipzigDavid BidermanShowLinkAI and Bias—Methods of Math Destruction, Social Dilemma, and Beyond - Episode 13Hear The Social Dilemma star and former corporate data scientist, Cathy O'Neil, speak with Dominique and David about why society needs to look behind the numbers with AI! Check out Cathy's book, Weapons of Math Destruction, in the show notes!ListenMar 2, 202126 minCathy O'NeilAI and Bias - Cathy O'Neil
41
Machine EthicsBen ByfordShowLink52. Algorithmic discrimination with Damien WilliamsThis episode we chat with Damien Williams about types of human and algorithmic discrimination, human-technology expectations and norms, algorithms and benefit services, the contextual nature of sample data, is face recognition even a good idea? Should we be scared that GPT-3 will take our jobs and the cultural value of jobs, encoding values into autonomous beings, culture and mothering AI, AI and dogma, and more...
ListenMar 1, 202157 minDamien WilliamsAlgo discrim, culture & mothering AI, AI & dogma
42
Cyberwire DailyShowLinkAarti Borkar: Make your own choices. [Product] [Career Notes]Head of Product for IBM Security Aarti Borkar shares her journey, which included going after her lifelong love of math rather than following in her parents' footsteps in the medical field. In following her passions, Aarti found herself studying computer engineering and computer science, and upon taking a pause from her studies, she found a niche working at IBM in a mix of databases and networking. In her current position, Aarti says her favorite discussion topics often revolve around the use of AI for converting security into predictive domains. Aarti reminds us that you should pause and see if you are on the right path. Staying on a path just because you started there can be a bad idea. And, we thank Aarti for sharing her story.ListenFeb 27, 20217 minAarti BorkarUse of AI for converting security into predictive domains
43
Privacy News OnlinePrivacy News Online News: border phone searches, Spotify audio spying patent, privacy bills and morePrivacy news from Private Internet Access VPN: https://privateinternetaccess.com/PNO30
For A Limited Time Only: Sign up now and 2 months FREE!

To read more about the privacy news stories in the video, please go to:
https://www.privateinternetaccess.com/blog/privacy-news-online-weekly-review-february-19-2021/

Cybersecurity News with Josh Long is sponsored by Intego. Save on Intego’s world-class protection software for Mac/Windows:
https://www.intego.com/lp/route-privacynews/?channel=privacynews

Privacy news stories in this episode:
• The government can search your phone at the border without a warrant
• New Spotify patent would use mic to infer emotional state, age, gender, and accent
• Code is law: why software openness and algorithmic transparency are vital for privacy
• State Privacy Bills Reemerge as Momentum Grows Nationwide
• Singapore passes law to force all students to install spyware
• Two Android spyware families have been discovered with ties to the Confucius ABT group
ListenFeb 23, 20214 minWhy software openness, algo transparency vital for privacy
44
Privacy News OnlinePrivacy News Online: Google identifies protesters, Clearview is illegal in CA & Plex amplifies DDOSPrivacy news from Private Internet Access VPN: https://privateinternetaccess.com/PNO29
For A Limited Time Only: Sign up now and 2 months FREE!

To read more about the privacy news stories in the video, please go to:
https://www.privateinternetaccess.com/blog/privacy-news-online-weekly-review-february-12-2021/

Cybersecurity News with Josh Long is sponsored by Intego. Save on Intego’s world-class protection software for Mac/Windows:
https://www.intego.com/lp/route-privacynews/?channel=privacynews

Privacy news stories in this episode:
• Police served warrant on Google to identify George Floyd protesters
• Users have privacy concerns about Microsoft’s inclusion in Raspberry Pi OS
• Time to get rid of pervasive online ad tracking once and for all: the alternative is simple, effective, and fully respects privacy
• South Africa’s highest court bans bulk internet surveillance
• Clearview AI ruled ‘illegal’ by Canadian privacy authorities
• Plex media server, a popular app for streaming video libraries, is being abused to amplify distributed denial of service attacks
• A fake version of WhatsApp for iOS has been observed in the wild
ListenFeb 23, 20214 minClearview AI: ‘illegal’ says Canadian privacy auths
45
Privacy News OnlinePrivacy News Online: Police use car data, UK stores use facial recognition, Zyxel backdoored & morePrivacy news from Private Internet Access VPN

To read more about the privacy news stories in the video, please go to:
https://www.privateinternetaccess.com/blog/privacy-news-online-weekly-review-january-8-2021/

Cybersecurity News with Josh Long is sponsored by Intego. Save on Intego’s world-class protection software for Mac/Windows:
https://www.intego.com/lp/route-privacynews/?channel=privacynews

Learn more about Micah Lee:
https://theintercept.com/staff/micah-lee/

Privacy news stories in this episode:
• Police are increasingly using digital vehicle forensics to solve cases
• Man sues police after incorrect facial recognition match leads to wrongful arrest
• Bill and Melinda Gates Foundation backed project suffers data breach, 930,000 children affected
• Neopets Is Still A Thing And It's Exposing Sensitive Data
• Some UK Stores Are Using Facial Recognition to Track Shoppers
• A backdoor account has been discovered in more than 100,000 Zyxel firewalls, VPN gateways, and wireless access point controllers
ListenFeb 23, 20215 minUK stores use facial recog
46
PrivacyCastAkarsh SinghShowLinkDriverless Cars & Data Privacy with Saiman Shetty, Technical Program Manager, Nuro💡 Self-driving cars, robotics, artificial intelligence - Nuro is utilizing the technology to transform the future.

👉 But how are the companies operating these driverless cars processing user data? Do we need to worry about the personal data collected and utilized by these #selfdriving cars? Who owns the data being generated by the car? Go listen and get your answers.

Let's unlock these mysteries with our guest #PrivacyWarrior ☀️ Saiman.
ListenFeb 14, 202133 minSaiman Shetty, NuroSelf-driving cars, robotics, artificial intelligence
47
CaveatDave BittnerBen YelinShowLinkCovid's effects on medical privacy.We have guest Jenna Waters from True Digital Security looking back at the last year of Covid and how that’s affected privacy, particularly in the medical field, Ben looks at a tool that can help determine if your image is part of a facial recognition library, and Dave has the story of law enforcement dodging public records rules through the use of encrypted messaging apps.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

Here’s a Way to Learn if Facial Recognition Systems Used Your Photos

Michigan State Police Officials Are Dodging Public Records Obligations By Using Encrypted Messaging Apps

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenFeb 10, 202138 minJenna Waters, True DigitalIs your image part of a facial recog library
48
Data Protection GumboDemetrius MalbroughShowLink080: Season 4 - Episode 6: Caitlin Halferty, Director, AI Accelerator and Client Success - 2021: The Year Digital Transformation Goes Viral - DP GumboCaitlin Halferty, Director, AI Accelerator and Client Success at IBM Global Chief Data Office discusses the differences between DataOps and DevOps, details on how artificial intelligence is a key component to gathering analytics, and her view of the Data Protection industry from the lens of Digital Transformation.ListenFeb 9, 202133 minCaitlin Halferty, IBM Caitlin Halferty, AI Accelerator at IBM Global
49
Cyberlaw PodcastStewart BakerDavid KrisShowLinkEpisode 348: Well, Have You Ever Seen Dr. Octopus and Sen. Klobuchar Together?This episode features a deep dive into the National Security Agency's self-regulatory approach to overseas signals intelligence, or SIGINT. Frequent contributor David Kris takes us into the details of the SIGINT Annex that governs NSA's collections outside the US. It turns out to be a surprising amount of fun as we stop to examine the SIGINT turf wars of the 40s, the intelligence scandals of the 70s, and how they shaped NSA's corporate culture.; In the news roundup, Bruce Schneier and I review the Privacy Commissioner's determination that Clearview AI violated Canadian privacy law by scraping Canadians' photos from social media.; Bruce thinks Clearview had it coming; I'm skeptical, since it appears that pretty much everyone has been scraping public face data for their machine learning collections for years.; David Kris explains why a sleepy investment review committee with practically no staff is now being compared to a SWAT team. The short answer is "CFIUS."; More and more, Gus Hurwitz and I note, Big Tech CEOs are being treated in Washington like comic book supervillains. But have they met their match? Sen. Amy Klobuchar is clearly campaigning to be, if not Attorney General, then their nemesis. Like Doc Ock, she's throwing punch after punch at Big Tech, not just in antitrust legislation but Section 230 reform as well.; We're not done with SolarWinds yet, and Bruce Schneier thinks that’s fair. He critiques the company for milking profits from its software niche without reinvesting in security.; Gus revives the theme of Big Tech at bay, noting that Australia may start charging Google when it links to Australian news stories and that the new administration seems quite willing to join the rest of the world in imposing more taxes on tech profits.; David covers the flap between India and Twitter, which is refusing to follow an Indian order to suppress several Twitter accounts. That's probably, I suggest, because there is insufficient proof that the accounts in question belong to Republicans.; IBM seems to be bailing on blockchain, and Bruce thinks it's about time. In some ways, IBM is the most interesting of tech companies, since it has less of a moat around its business than most and must live by its wits, which are formidable. Bruce offers quantum computing as an example of IBM doing the right things well.; Bruce and Gus help me with a preview of an upcoming interview of Nicole Perlroth as we cover an op-ed pulled from her new book. Bruce also offers a quick assessment of the draft report of the National Security Commission on Artificial Intelligence. The short version: there isn’t enough there there.; Finally, Gus reminds us that a prophet who predicts the attention economy but then refuses to play by its rules is almost guaranteed to end up as an attention Cassandra, as Michael Goldhaber has.; And more. The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenFeb 8, 202178 minBruce SchneierGus HurwitzPrivacy Commiss says Clearview AI violated law
50
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE13 - Gail Gottehrer Law Firm Emerging TechnologiesDebbie Reynolds "The Data Diva," talks to Gail Gottehrer of the Law Firm Gail Gottehrer LLC, who counsels clients on Emerging Technologies like autonomous vehicles, robots, IoT, etc. We discuss her career transition to emerging technologies, diversity and data privacy, the New York State Bar Association's technology committee work, an explanation of Quantum Computing, Post-Quantum Cryptography, the need for understanding Cybersecurity, data minimization to reduce risk, AI and Data Privacy, privacy and wearable technology, understanding consent with technology use, video interviews and AI, autonomous vehicles, and her wish for data privacy in the future.
ListenFeb 1, 202143 minGail GottehrerAI and Data Privacy, video interviews and AI
51
The ID Talk PodcastShowLinkYear in Review: Jumio's Dean Nicolls on Fighting Fraud and Racial Bias with BiometricsOn this special Year in Review episode of ID Talk, FindBiometrics and Mobile ID World founder Peter O’Neill speaks with Dean Nicolls, Vice President of Global Marketing for Jumio.

The conversation begins on the theme of user experience and the rise of fraud, with Nicolls sharing some fascinating findings from Jumio’s Holiday Fraud Report. The conversation goes on to delve into topics around demographic bias in biometrics, current issues concerning privacy, and the importance of liveness detection, before taking a look ahead to what’s next in biometrics and digital onboarding.

Learn more about the topics in this episode by visiting http://jumio.com
ListenJan 29, 202130 minDean Nicolls, JumioRacial Bias with Biometrics
52
CaveatDave BittnerBen YelinShowLinkThe intersection of law, technology and risk.On this week’s show Dave speaks with guest Andrew Burt from the Yale Information Society Project (ISP) on the Digital Future Whitepaper Series' first whitepaper, "Nowhere to Hide: Data, Cyberspace, and the Dangers of the Digital World," Ben looks at the shift to more secure messaging apps in the fallout of Parler going offline, and Dave has the story of the FTC cracking down on misuse of facial recognition software. 
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

Far-right groups move online conversations from social media to chat apps — and out of view of law enforcement

FTC Requires App Developer to Obtain Users’ Express Consent for Use of Facial Recognition

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenJan 27, 202143 minAndrew Burt, Yale Information Society ProjectFTC cracking down on misuse of facial recog software
53
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE12 - Zaheer Allam, Ph.D. Smart Cities, Sustainable Futures Urban Strategist, AuthorDebbie Reynolds "The Data Diva," talks to Zaheer Allam, Ph.D., an expert on Smart Cities. He is also an Urban Strategist and an advocate for Sustainable Futures. We discuss the definition of smart cities, the thread of technology through smart cities, citizens' needs in different communities, safe cities' use of surveillance, sharing data for research and development, use of synthetic data, ethics of data collection, smart cities and AI bias, Western Culture and technology development, scale and its impact on smart cities, interoperability of data exchange and innovation, proximity-based data solutions, data rebates and other ways to share data appropriately, and his wishes for data privacy in the future.
ListenJan 25, 202139 minZaheer AllamAI bias
54
Voices of the Data EconomyNiMA AsghariDiksha DuttaShowLinkSafiya Noble: How Search Engines use our Data against usIn the tenth episode of Voices of the Data Economy, we had a conversation with Dr. Safiya Umoja Noble, Author of Algorithms of Oppression and Associate Professor at the University of California, Los Angeles (UCLA) in the Department of Information Studies. During this discussion, she spoke about how search engines like Google reinforce discrimination, the role of government regulations in protecting data, and why big corporates are now talking about data protection rights. 

Voices of the Data Economy is supported by the Ocean Protocol Foundation. Ocean is kickstarting a Data Economy by breaking down data silos and equalizing access to data for all. This episode was hosted by Diksha Dutta, audio engineering by Aneesh Arora. --- Send in a voice message: https://anchor.fm/dataeconomy/message
ListenJan 22, 20211 hr 6 minSafiya Noble, UCLAAlgorithms of Oppression
55
Machine EthicsBen ByfordShowLink51. AGI Safety and Alignment with Robert MilesThis episode we're chatting with Robert Miles about why we even want artificial general intelligence, general AI as narrow AI where its input is the world, when predictions of AI sound like science fiction, covering terms like: AI safety, the control problem, AI alignment, the specification problem; the lack of people working in AI alignment, AGI doesn’t need to be conscious, and moreListenJan 13, 202156 minRobert MilesGeneral AI, AI safety
56
CaveatDave BittnerBen YelinShowLinkDiversity has to be part of the mission in cybersecurity.On this week’s show, we've got Dave's conversation with David Forscey of the Aspen Institute on their new Cybersecurity Collaborative Network, Ben shares why we were late getting started recording Caveat for 2021, Ben's story covers the ongoing issues with facial recognition software, and Dave has the story of California upholding restrictions on Stingrays.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

Unregulated facial recognition must stop before more Black men are wrongfully arrested

Study: AVs May Not Detect Darker-Skinned Pedestrians As Often As Lighter Ones (related story)

Court Upholds Legal Challenge Under California Statewide Stingray Law

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenJan 13, 202144 minDavid Forscey, Aspen InstituteOngoing issues with facial recog software
57
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE10 - David Goodis Privacy Commissioner CanadaDebbie Reynolds "The Data Diva," talks to David Goodis, the Privacy Commissioner of Ontario, Canada. We discuss his career transition into Data Privacy as a regulator, Data Privacy in Canada at present, a background of Canadian Data Privacy regulations over the last 20 years, current proposals for changes to Canadian Data Privacy regulations (PIPEDA) and frameworks, the use of AI and automated decision-making, socially acceptable beneficial purposes of data use, the need for transparency, the trust problem, onward data transfer, differences between Canada, the U.S., and the EU in privacy legislation, commerce and the role of the FTC and future US data privacy laws, the adequacy question of the EU and Canada, and his wish for privacy enforcement in the future.
ListenJan 11, 202139 minDavid GoodisUse of AI and automated decision-making
58
The New OilShowLinkJanuary 3, 2021Happy New Year! We made it! This week: the return of data breaches, censorship, and facial recognition gone wrong. Full show notes: https://thenewoil.xyz/20210103.htmlListenJan 3, 202130 minFacial recog gone wrong
59
FIT4PRIVACYPunit BhatiaShowLink021 FIT4PRIVACY Podcast 🎙️ with Raghavan Chellappan (Full Episode) - Privacy mattersIn this episode of the FIT4PRIVACY podcast, Punit Bhatia has a conversation with Raghavan Chellappan to discuss some of the ramifications and the need for systemic change to better manage privacy, compliance and security in the digital age.

The conversation highlights include:

💡 the risks for companies in relation to privacy and security compliance
💡 constraints faced by businesses while embracing emerging technologies (AI/ML/IoT)
💡 recommendations for small companies that cannot afford full-time privacy personnel.

Raghavan Chellappan is a Management Consultant and Co-founder & CTO of data privacy and security services tech startup “BYTESAFE” – which provides next-gen privacy and compliance management (PCM) solutions powered by transformational AI/ML technologies. Raghav has over 20 years of experience and has played key roles delivering digital transformation solutions implementing emerging technology (AI/ML/IoT) across industries as an employee/consultant (including for Big 4 firms, small businesses, and startups) helping Fortune 500 companies, government agencies, life sciences, pharmaceuticals, healthcare, telecom, finance, retail, social media and nonprofit organizations. He is passionate about all things privacy and an avid believer in the “servant leader” philosophy. He works with businesses and executive teams across industries to help them with delivery-focused outcomes.

Listen to this conversation and share your comments on what you think. You can subscribe to FIT4PRIVACY podcast so that you are notified about new episodes. --- Send in a voice message: https://anchor.fm/fit4privacy/message
ListenDec 30, 202044 minRaghavan ChellappanConstraints faced by businesses embracing AI/ML
60
Machine EthicsBen ByfordShowLink49. 2020 rambling chat with Ben Gilburt and Ben ByfordThis episode Ben and Ben are chatting about 2020 - Timnit Gebru leaving Google, the promise of AI and COVID-19, Kaggle's COVID competition, GPT-3, test and trace apps and privacy, AI Ethics bookclub, AI ethics courses, when transparency is good or bad, AlphaFold, and more...ListenDec 30, 202067 minBen GilburtPromise of AI and COVID-19, GPT-3, AI ethics courses
61
Cyberwire DailyShowLinkBear tracks all over the US Government’s networks. Pandas and Kittens and Bears, oh my... Emotet’s back. Spyware litigation. A few predictions.The US continues to count the cost of the SVR’s successful cyberespionage campaign. Attribution, and why it’s the TTPs and not the org chart that matters. Emotet makes an unhappy holiday return. It seems unlikely that NSA and US Cyber Command will be separated in the immediate future. Big Tech objects, in court, to NSO Group and its Pegasus spyware (or lawful intercept product, depending on whether you’re in the plaintiff’s or the respondent’s corner). Ben Yelin looks at hyper realistic masks designed to thwart facial recognition software. Our guest Neal Dennis from Cyware wonders if there really isn't a cybersecurity skills gap. And a quick look at some more predictions.
For links to all of today's stories check out our CyberWire daily news brief:
https://www.thecyberwire.com/newsletters/daily-briefing/9/245
ListenDec 22, 202027 minNeal Dennis, CywareHyper realistic masks designed to thwart facial recog
62
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE7 - Rohan Light of DecisivDebbie Reynolds, "The Data Diva," talks to Rohan Light, CEO of Decisive, AI auditor, Data Ethicist, Humane Data Frameworks creator. We discuss data privacy professionals asking the right questions of businesses, the need for better metaphors and storytelling to explain complex data topics, data commoditization and the human elements of data ownership, the synchronicity of ethics and laws related to data privacy, frameworks related to the management of data privacy, data as a measure of human activity, bias in AI, algorithms, data gathering and analysis, the three thresholds needed to analyze data change, quantum computing and its impact on encryption, data professionals as mediators of different views of the world, learning through good data science, the dynamics of biometrics stewardship, a change in FIP (Fair Information Practices) for data privacy purposes, and blockchain application in trust-based governance.
ListenDec 22, 202037 minRohan LightRohan Light
63
CaveatDave BittnerBen YelinShowLinkWe don't know what we need, until we do with facial recognition.Ben looks at potential antitrust suits against Facebook, Dave looks at anti-hate speech laws in Germany that are serving as models for authoritarians around the world, our international LOTL asks about what the privacy of instant messaging platforms looks like in the US, and later in the show, Dave's conversation with Jennifer Strong, host of the new MIT Technology Review podcast, "In Machines We Trust."
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

State, federal authorities expected to file antitrust lawsuits against Facebook on Wednesday

Germany’s Online Crackdowns Inspire the World’s Dictators

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenDec 16, 202050 minJennifer StrongIn Machines We Trust
64
Cyberlaw PodcastStewart BakerShowLinkEpisode 342: Could European Privacy Law Protect American Child Molesters?Our interview is with Alex Stamos, who lays out a complex debate over child sexual abuse that’s now roiling Brussels. The application of European privacy standards and AI hostility to internet communications providers has called into question the one tool that has reduced online child sex predation. Scanning for sex abuse images works well, and even scanning for signs of “grooming” is surprisingly effective. But they depend on automated monitoring of communications content, something that has come as a surprise to European lawmakers hoping to impose more regulation on American tech platforms. Left unchanged, the new European rules could make it easier to abuse American kids. Alex explains the rushed effort to head off that disaster – and tells us what Ashton Kutcher has to do with it (a lot, it turns out).; Meanwhile, in the news roundup, Michael Weiner breaks down the FTC's (and the states’) long-awaited antitrust lawsuit against Facebook. Maybe the government will come up with something as the case moves forward, but its monopolization claims don’t strike me as overwhelming. And, Mark MacCarthy points out, the likelihood that the lawsuit will do something good on the privacy front is vanishingly small.; Russia’s SVR, heir of the KGB, is making headlines with a remarkably sophisticated and well-hidden cyberespionage attack on a lot of institutions that we hoped were better at defense than they turned out to be. Nick Weaver lays out the depressing story, and Alex offers a former CISO’s perspective, arguing for a federal breach notification law that goes well beyond personal data and includes disciplined after-action reports that aren’t locked up in post-litigation gag orders. Jamil Jaffer tells us that won’t happen in Congress any time soon.; Jamil also comments on the prospects for the National Defense Authorization Act, chock full of cyber provisions and struggling forward under a veto threat. If you’re not watching the European Parliament tie itself in knots trying to avoid helping child predators, tune in to watch American legislators tie themselves into knots trying to pass an important defense bill without drawing the ire of the President.; The FCC, in an Ajit Pai farewell, has been hammering Chinese telecoms companies. In one week, Jamil reports, the FCC launched proceedings to kick China Telecom out of the US infrastructure, reaffirmed its exclusion of Huawei from the same infrastructure, and adopted a “rip and replace” mandate for US providers who still have Chinese gear in their networks.; Nick and I clash over the latest move by Apple and Google to show their contempt for US counterterrorism efforts – the banning of a location data company whose real crime was selling the data to (gasp!) the Pentagon.; Mark explains the proposals for elaborate new regulation of digital intermediaries now working their way through – where else? – Brussels. 
I offer some cautious interest in regulation of “gatekeeper” platforms, if only to prevent Brussels and the gatekeepers from combining to slam the Overton window on conservatives’ fingers.; Mark also reports on the Trump administration’s principles for US government use of artificial intelligence, squelching as premature my celebration at the absence of “fairness” and “bias” cant.; Those who listen to the roundup for the porn news won’t be disappointed, as Mark and I dig into the details of Pornhub’s brush with cancellation at the hands of Visa and Mastercard – and how the site might overcome the attack.; In short hits, Nick and I disagree about Timnit Gebru, the “ethicist” who was let go at Google after threatening to quit and who now is crying racism. I report on the enactment of a modest but useful IoT Cybersecurity law and on the doxxing of the Chinese Communist Party membership rolls as well as the adoption of the most law-enforcement-hostile technology yet to come out of Big Tech – Amazon’s Sidewalk.; And More! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.ListenDec 15, 202072 minAI and online child sex predation
65
CaveatDave BittnerBen YelinShowLinkA pandemic is a perfect environment for disinformation to thrive.Ben looks at the US Supreme Court’s potential interpretations of the Computer Fraud and Abuse Act, Dave's got a report out of the UK on AI algorithm oversight, and later in the show, Dave's conversation with Nina Jankowicz, Disinformation Fellow from the Wilson Center, on her book: How to Lose the Information War: Russia, Fake News, and the Future of Conflict.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

Argument analysis: Justices seem wary of breadth of federal computer fraud statute

The algorithms are watching us, but who is watching the algorithms?

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you.
ListenDec 9, 202045 minNina Jankowicz, Wilson CenterUK on AI algorithm oversight
66
Data protection and ICTShowLinkBig Data analytics and AI in personalised insurance, a data protection perspective [with Gert Meyers]To what extent do insurance companies rely on Big Data analytics and AI for the pricing and costing of their policies? What implications does this have from a data protection point of view? A chat with Gert Meyers, researcher at Tilburg University and KU Leuven.ListenDec 9, 202031 minGert Meyers, Tilburg UniversityInsurance, Big Data analytics, and AI for pricing
67
Data Diva Talks PrivacyDebbie ReynoldsShowLinkE5 - Dawid Jacobs of DAL-Global IncDebbie Reynolds "The Data Diva," talks to Dawid Jacobs, CEO of Diverse Authentication Library DAL-Global Inc, a global identity protection and authentication company. We discuss evidence-based identity authentication, the differences between identity management and access management, what digital twins and self-sovereign identity are, the problem of financial fraud with fake identities, identity concerns with voting, identity theft, the bias in facial recognition biometrics identity systems, deep fakes, privacy requirements in the EU (GDPR) and South Africa (POPI) related to authentication of individuals, credit bureaus' data collection and data sale, the need to have individuals own their identities, and his wish for data privacy regulation globally.
ListenDec 8, 202037 minDawid Jacobsbias in facial recog biometrics identity systems
68
Cyberlaw PodcastStewart BakerShowLinkEpisode 341: It's Time to Pay Attention When Attention Stops PayingDid you ever wonder where all that tech money came from all of a sudden? Turns out, a lot of it comes from online programmatic ads, an industry that gets little attention even from the companies, such as Google, that it made wealthy. That lack of attention is pretty ironic, because lack of attention is what's going to kill the industry, according to Tim Hwang, former Google policy maven and current research fellow at the Center for Security and Emerging Technology (CSET).; In our interview, Tim Hwang explains the remarkably complex industry and the dynamics that are gradually leaching the value out of its value proposition. Tim thinks we're in an attention bubble, and the popping will be messy. I'm persuaded the bubble is here, but not that its end will be disastrous outside of Silicon Valley.; Sultan Meghji and I celebrate what seems like excellent news about a practical AI achievement in predicting protein folding. It's a big deal, and an ideal problem for AI, with one exception. The parts of the problem that AI hasn't solved would be a lot easier for humans to work on if AI could tell us how it solved the parts it did figure out. Explainability, it turns out, is the key to collaborative AI-human work.; We welcome first-time participant and long-time listener Jordan Schneider to the panel. Jordan is the host of the unmissable ChinaTalk podcast. Given his expertise, we naturally ask him about … Australia. Actually, it's a natural, because Australia is now the testing ground for many of China's efforts to exercise power over independent countries using cyber power along with trade. Among the highlights: Chinese tweets highlighting a report about Australian war crimes followed by ham-handed tweet-boosting bot campaigns. And in a move that ought to be featured in future justifications of the Trump administration's ban on WeChat, the platform refused to carry the Australian prime minister's criticism of the war-crimes tweet. Ted Cruz, call your office!; And this will have to be Sen. Cruz's fight, because it looks more and more as though the Trump administration has thrown in the towel. Its claim to be negotiating a TikTok sale after ordering divestment is getting thinner; now the divestment deadline has completely disappeared, as the government simply says that negotiations continue. Nick Weaver is on track to win his bet with me that CFIUS won't make good on its order before the mess is shoveled onto Joe Biden's plate.; Whoever was in charge of beating up WeChat and TikTok may have left government early, but the team that's sticking pins in other Chinese companies is still hard at work. Jordan and Brian talk about the addition of SMIC to the amorphous Defense blacklist. And Congress has passed a law (awaiting Presidential signature) that will make life hard for Chinese firms listed on US exchanges.; China, meanwhile, isn't taking this lying down, Jordan reports. It is mirror-imaging all the Western laws that it sees as targeting China, including bans on exports of Chinese products and technology. It is racing (on what Jordan thinks is a twenty-year pace) to create its own chip design capabilities. And with some success. Sultan, newly dubbed the podcast's DeHyper, takes some of the hype out of China's claims to quantum supremacy. 
Though even dehyped, China's achievement should be making those who rely on RSA-style crypto just a bit nervous (that's all of us, by the way).; Michael Weiner previews the still veiled state antitrust lawsuit against Facebook and promises to come back with details as soon as it's filed.; In quick hits, I explain why we haven't covered the Iranian claim that their scientist was rubbed out by an Israeli killer robot machine gun: I don't actually believe them. Brian explains that another law aimed at China and its use of Xinjiang forced labor is attracting lobbyists but likely to pass. Apple, Nike, and Coca-Cola have all taken hits for lobbying on the bill; none of them say they oppose the bill, but it turns out there's a reason for that. Lobbyists have largely picked the bones clean.; President Trump is leaving office in typical fashion – gesturing in the right direction but uninterested in actually getting there. In a "Too Much Too Late" negotiating move, the President has threatened to veto the defense authorization act if it doesn't include a repeal of section 230 of the Communications Decency Act. If he's yearning to wield the veto, Dems and GOP alike seem willing to give him the chance. They may even override, or wait until January 20 to pass it again.; Finally, I commend to interested listeners the oral argument in the Supreme Court's Van Buren case, about the Computer Fraud and Abuse Act. The Solicitor General's footwork in making up quasi-textual limitations on the more sweeping readings of the Act is admirable, and it may well be enough to keep Van Buren in jail, where he probably belongs for some crime, if not this one.; And more.ListenDec 7, 20201 hr 6 minAI Explainability problems
69
Cyberwire DailyShowLink
Ron Brash: Problem fixer in critical infrastructure. [OT] [Career Notes]
Director of Cyber Security Insights at Verve Industrial, aka self-proclaimed industrial cybersecurity geek, Ron Brash shares his journey through the industrial cybersecurity space. From taking his parents' 286s and 386s to task to working for the "OG of industrial cybersecurity," Ron has pushed limits. Starting off in technical testing, racing through university at 2x speed, and taking a detour through neuroscience with machine learning, Ron decided to return to critical infrastructure, working with devices that keep the lights on and the water flowing. Ron hopes his work makes an impact and his life is memorable for those he cares about. We thank Ron for sharing his story with us.
ListenDec 6, 20208 minNeuroscience with machine learning
70
Privacy AdvisorJedidiah BracyShowLinkThe Privacy Advisor Podcast: Carissa Véliz on privacy, AI ethics and democracyArtificial intelligence, big data and personalization are driving a new era of products and services, but this paradigm shift brings with it a slate of thorny privacy and data protection issues. Ubiquitous data collection, social networks, personalized ads and biometric systems engender massive societal effects that alter individual self-determination, fracture shared reality and even sway democratic elections. As an associate professor at the University of Oxford's Faculty of Philosophy and the Institute for Ethics in AI, Carissa Véliz has immersed herself in these issues and recently wrote a book, "Privacy Is Power: Why and How You Should Take Back Control of Your Data." In this latest Privacy Advisor Podcast, host Jedidiah Bracy, CIPP, caught up with Véliz to discuss her book and the importance privacy plays in society.ListenDec 4, 202057 minCarissa VélizAI ethics
71
Law and CandorBill MarianoRob HellewellShowLinkAI, Analytics, and the Benefits of TransparencyIn the final episode of season six, co-hosts Bill Mariano and Rob Hellewell review an article covering key privacy and security features in iOS 14 and highlight the top features to be aware of.

The co-hosts then bring on Forbes Senior Contributor, David Teich, to discuss AI, analytics, and the benefits of transparency via the following questions:

* Why is it important to be transparent in the legal realm?
* How does this come into play with bias?
* What about AI and jury selection?
* How do analytics come into play as a result of providing transparency?

The season ends with key takeaways from the guest speaker section. Subscribe to the show here, rate us on Apple and Stitcher, connect with us on Twitter, and discover more about our speakers and the show here.

Related Links

* Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data
* Blog Post: The Sinister Six…Challenges of Working with Large Data Sets
* Blog Post: Advanced Analytics – The Key to Mitigating Big Data Risks
* Podcast Episode: Tackling Big Data Challenges
* Podcast Episode: The Future is Now – AI and Analytics are Here to Stay

About Law & Candor

Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
ListenDec 3, 202024 minDavid Teich, Forbes Senior ContributorAI and jury selection
72
Serious PrivacyPaul BreitbarthK RoyalShowLinkRegTech: Using the Power of Technology for Good (with Shub Nandi)Technology brings new demands for compliance, especially given the amount of personal data collected through various means and how it is both used and combined. However, technology can also be used to assist compliance professionals by providing the necessary information quickly. In most of the Serious Privacy episodes, co-hosts Paul Breitbarth and K Royal have discussed one or more specific data protection and privacy-related topics. With guest Shub Nandi, the CEO and Founder of PiChain, a company based in Bangalore, India, that primarily focuses on the financial sector, they look at the broader scope of RegTech (regulatory technology) and how it helps to support compliance.

PiChain tries to simplify all kinds of business processes, from customer and business onboarding and Know-Your-Customer to e-Contracts. These are all topics that privacy professionals would address, but typically would not have the information available to assess risks accurately or in a timely manner. With AI, one can leverage the power of technology to simplify the process, reserving valuable time for decision-making and remediation plans rather than collecting the information.

Join us as we discuss the financial market, regulatory challenges, and key technology solutions, such as blockchain. The conversation expands to include the European Union’s General Data Protection Regulation, India’s challenges including its privacy laws, and ethics in data science. 

Social Media: @Podcastprivacy, @heartofprivacy, @euroPaulB, @trustArc
ListenDec 3, 202038 minShub Nandi, PiChainUsing AI for Compliance
73
Law and CandorBill MarianoRob HellewellShowLinkTackling Modern Attachment and Link Challenges in G-Suite, Slack, and TeamsIn the fourth episode of the sixth season, co-hosts Bill Mariano and Rob Hellewell discuss how GDPR and AI can ensure data protection during their sightings segment.

Next, they introduce their guest speaker, Nick Schreiner of Lighthouse, who uncovers key ways to tackle modern attachment and link challenges in G-Suite, Teams, and Slack. Nick answers the following questions (and more) in this episode:

* What are the common challenges around modern attachments and links in ediscovery?
* How do attachments in these tools differ from traditional email?
* What strategies can folks put in place to manage these challenges?

Our co-hosts wrap up the episode with a few key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links

* Podcast Episode: Emerging Data Sources – Get a Handle on eDiscovery for Collaboration Tools
* Blog Post: Three Key Tips to Keep in Mind When Leveraging Corporate G Suite for eDiscovery

About Law & Candor

Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
ListenDec 3, 202022 minNick Schreiner, LighthouseHow GDPR and AI can ensure data protection
74
Law and CandorBill MarianoRob HellewellShowLinkThe Convergence of AI and Data Privacy in eDiscovery: Using AI and Analytics to Identify Personal InformationLaw & Candor co-hosts Bill Mariano and Rob Hellewell kick things off with Sightings of Radical Brilliance, in which they discuss the challenges and implications of misinformation around voting in the U.S.

In this episode, Bill and Rob are joined by John Del Piero of Lighthouse. The three of them discuss how PII and PHI can be identified more efficiently by leveraging tools like AI and analytics via the following questions:

* Why is it important to identify PII and PHI within larger volumes of data quickly?
* How can AI and analytics help to identify PII and PHI more efficiently?
* What are the key benefits of using these tools?
* Are there any best practices to put in place for those looking to weave AI and analytics into their workflow?

In conclusion, our co-hosts end the episode with key takeaways. If you enjoyed the show, subscribe here, rate us on Apple and Stitcher, join in the conversation on Twitter, and discover more about our speakers and the show here.

Related Links

* Blog Post: Big Data and Analytics in eDiscovery: Unlock the Value of Your Data
* Blog Post: The Sinister Six…Challenges of Working with Large Data Sets
* Blog Post: Advanced Analytics – The Key to Mitigating Big Data Risks
* Podcast Episode: Tackling Big Data Challenges
* Podcast Episode: The Future is Now – AI and Analytics are Here to Stay

About Law & Candor

Law & Candor is a podcast wholly devoted to pursuing the legal technology revolution. Co-hosts Bill Mariano and Rob Hellewell explore the impacts and possibilities that new technology is creating by streamlining workflows for ediscovery, compliance, and information governance. To learn more about the show and our speakers, click here.
ListenDec 3, 202015 minJohn Del Piero, LighthouseHow PII/PHI can be identified more efficiently with AI
75
EFF's How to Fix the InternetCindy CohnDanny O’BrienShowLinkFrom Your Face to Their Database | 005Abi Hassen joins EFF hosts Cindy Cohn and Danny O’Brien as they discuss the rise of facial recognition technology, how this increasingly powerful identification tool is ending up in the hands of law enforcement, and what that means for the future of public protest and the right to assemble and associate in public places.

In this episode you’ll learn about:

* The Black Movement Law Project, which Abi co-founded, and how it has evolved over time to meet the needs of protesters;
* Why the presumption that people don’t have any right to privacy in public spaces is challenged by increasingly powerful identification technologies;
* Why we may need to think big when it comes to updating the U.S. law to protect privacy;
* How face recognition technology can have a chilling effect on public participation, even when the technology isn’t accurate;
* How face recognition technology is already leading to the wrongful arrest of innocent people, as seen in a recent case of a man in Detroit;
* How gang laws and anti-terrorism laws have been the foundation of legal tools that can now be deployed against political activists;
* Understanding face recognition technology within the context of a range of powerful surveillance tools in the hands of law enforcement;
* How we can start to fix the problems caused by facial recognition through increased transparency, community control, and hard limits on law enforcement use of face recognition technology;
* How Abi sees the further goal as moving beyond restricting or regulating specific technologies to a world where public protests are not so necessary, as part of reimagining the role of law enforcement.

Abi is a political philosophy student, attorney, technologist, co-founder of the Black Movement Law Project, a legal support rapid response group that grew out of the uprisings in Ferguson, Baltimore, and elsewhere. He is also a partner (currently on leave) at O’Neill and Hassen LLP, a law practice focused on indigent criminal defense. Prior to his current positions, he was the Mass Defense Coordinator at the National Lawyers Guild. Abi has also worked as a political campaign manager and strategist, union organizer, and community organizer. He conducts trainings, speaks, and writes on topics of race, technology, (in)justice, and the law. Abi is particularly interested in exploring the dynamic nature of institutions, political movements, and their interactions from the perspective of complex systems theory. You can find Abi on Twitter at @AbiHassen, and his website is https://AbiHassen.com

Please subscribe to How to Fix the Internet via RSS, Stitcher, TuneIn, Apple Podcasts, Google Podcasts, Spotify or your podcast player of choice. You can also find the Mp3 of this episode on the Internet Archive. If you have any feedback on this episode, please email podcast@eff.org.

You’ll find legal resources – including links to important cases, books, and briefs discussed in the podcast – as well a full transcript of the audio at https://www.eff.org/deeplinks/2020/11/podcast-episode-your-face-their-database.

Audio editing for this episode by Stuga Studios: https://www.stugastudios.com.

Music by Nat Keefe: https://natkeefe.com/

This work is licensed under a Creative Commons Attribution 4.0 International License.
ListenDec 1, 20201 hr 3 minAbi HassenRise of facial recog tech
76
Machine EthicsBen ByfordShowLink48. Jessie Smith co-designing AIThis episode we're chatting with Jess Smith about the Radical AI podcast and defining the word radical, what is AI - non-living ability to learn… maybe, AI consciousness, the responsibility of technologists, robot rights, what makes us human, creativity and more...ListenNov 25, 202050 minJessie SmithRadical AI podcast
77
Serious PrivacyPaul BreitbarthK RoyalShowLinkLive with Solutions that are Problems: IEEE ISTAS-PIT 2020Live! (okay, recorded with a live studio audience) from our offices at a fabulous virtual conference - the 2020 IEEE International Symposium on Technology and Society, discussing Public Interest Technologies in a four-day-long global online conference. This is a first for Serious Privacy, recording an episode live as part of the conference presentation. In normal times, this conference would have taken place face-to-face, allowing lots of participants to debate their papers with their peers in person. The Public Interest Technology University Network, comprising 36 institutes of higher education, is working to address these very issues by “building the nascent field of public interest technology and growing a new generation of civic-minded technologists.” 

Paul Breitbarth and K Royal host this live session and focus on the papers that are being presented and discussed on a wide range of topics. Arizona State University is one of the founding members of PIT-UN and is K’s alma mater and where she teaches privacy law. Quite a few of these issues addressed in the conference have also been addressed earlier in this first season of the Serious Privacy podcast: from Ethical AI concerns to COVID apps, and from Surveillance Societies to Mentoring the Next Generation of Technology Innovators.  

Join us as we discuss data monopolies, start-up tech companies, cultural norms, and more. Their host for the session, Salah Hamdoun, also joined the conversation. Salah is a PhD student in Arizona but is from the Netherlands - which makes for a great discussion on the differences between the EU and the US approaches to privacy. It was the first time the Fourth Industrial Revolution had arisen in the episodes, but an old favorite in government surveillance did make an appearance.

Resources:
Katina Michael (co-chair of the conference) and Jamie Winterton, speaker, contributor, and author of Creating the Public Interest Technologies of the Future - Learning to Love the “Wicked Problem”
Salah Hamdoun's paper: Technology and the Formalization of the Informal Economy  

Social Media: @Podcastprivacy, @heartofprivacy, @euroPaulB, @trustArc, @ASU, @katinamichael, @IEEESSIT, @j_winterton
ListenNov 25, 202049 minSalah HamdounEthical AI
78
BarCodeChris GlandenShowLinkContentyze with Przemek ChojeckiI have the privilege of speaking with an AI trailblazer and a member of Forbes 30 Under 30, Przemek Chojecki. We discuss "Contentyze", a platform he created that aims to fix the inefficiencies in journalism with automated content generation. We also talk Machine Learning, Deepfake Technology, and also where the intersection of AI and Cybersecurity meet. The virtual bartender makes an epoch drink, "The Anomaly".ListenNov 23, 202046 minPrzemek ChojeckiAI trailblazer Przemek Chojecki
79
Cyberwire DailyShowLink
Ups and downs in the cyber underworld. Enduring effects of COVID-19 in cyberspace. Safer online shopping. “Take me home, United Road, to the place I belong, to Old Trafford, to see United…”
Qbot is dropping Egregor ransomware, and RagnarLocker continues its recent rampage. Cryptocurrency platforms troubled by social engineering at a third party. TrickBot reaches version 100. Stuffed credentials exposed in the cloud. COVID-19 practices may endure beyond the pandemic. Advice for safer online shopping over the course of the week. Malek Ben Salem from Accenture Labs has methods for preserving privacy when using machine learning. Rick Howard digs deeper into SOAR. And someone’s hacking a Premier League side.
For links to all of today's stories check out our CyberWire daily news brief:
https://www.thecyberwire.com/newsletters/daily-briefing/9/226
ListenNov 23, 202025 minPreserving privacy when using machine learning
80
Internet of Things PodcastStacey HigginbothamShowLink
Episode 295: Project CHIP goes commercial and the Eero Pro review
This week’s podcast kicks off with the news that Project Connected Home over IP (CHIP) will also have a commercial element focused on offices, apartments, and public buildings. Then we focus on edge computing with a new way to bring machine learning to the edge and Arm expanding its free IP license program to some …

ListenNov 19, 202058 minNew way to bring machine learning to the edge
81
CaveatDave BittnerBen YelinShowLinkPlaying into the hands of our adversaries.Ben looks at facial recognition technology being used on protesters, Dave's got the story of ICE and DHS buying up moment-by-moment mobile device location data, and later in the show Dave's conversation with Jamil Jaffer from IronNet Security.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

Facial recognition used to identify Lafayette Square protester accused of assault

DHS Authorities Are Buying Moment-By-Moment Geolocation Cellphone Data To Track People

Got a question you'd like us to answer on our show? You can send your audio file to caveat@thecyberwire.com or simply leave us a message at (410) 618-3720. Hope to hear from you. 
Thanks to our sponsor, KnowBe4.
ListenNov 18, 202038 minJamil Jaffer, IronNetFacial recog tech being used on protesters
82
Talk Data To Me!Daragh O BrienJoshuaShowLinkTalk Data To Me - Episode 11: There might be some profanityDaragh and Joshua from the Castlebridge team are joined this month by Alexander Hanff from our friends in Think Privacy in Sweden.

The topic of the podcast was the challenges in the second wave of tech solutionism for pandemic response. The topics discussed included the use of AI tools for social distancing enforcement, the importance of DPIAs that ACTUALLY follow through on things, and the issues of data quality and data governance that can arise with relatively new technologies with high error rates.

A key question raised is this: if a mix of technologies with varying error rates are combined and sold as a solution for public liability risks in a public health crisis and they fail... who is responsible and liable? And is the social and societal impact worth it if we are just implementing public health theatre when the investment and resources might better be spent on Public Health Doctors and Public Health Informatics?

Note: Alexander gets a bit passionate so there might be some profanity...
ListenNov 18, 202049 minAlexander HanffUse of AI tools for social distancing enforcement
83
EDRM Global Podcast Network
John TredennickTom GricksShowLinkTAR Talk: Hosts John Tredennick, Tom Gricks and Dr. Jeremy Pickens welcome Herb Roitblat to the porch.John, Tom and Dr. J discuss AI with Herb Roitblat and Herb's book on AI: Algorithms Are Not Enough: Creating General Artificial Intelligence, published by MIT Press. Common sense, employment and the law, deep learning and more.ListenNov 18, 202029 minHerb RoitblatCreating General Artificial Intelligence
84
The New OilShowLinkNovember 15, 2020This week is a short episode. This week I discuss several data breaches, making informed decisions, and the future of surveillance with connected cars and facial recognition. Full show notes: https://thenewoil.xyz/20201115.htmlListenNov 15, 202019 minFuture of surveillance with facial recog
85
Cyberwire DailyShowLinkMalek Ben Salem: Taking those challenges. [R&D] [Career Notes]Americas Security R&D Lead for Accenture Malek Ben Salem shares how she pivoted from her love of math and background in electrical engineering to a career in cybersecurity R&D. Malek talks about her interest in astrophysics as a young girl, and how her affinity for math and taking on challenges led her to a degree in electrical engineering. She grew her career using math for data mining and forecasting, eventually pursuing a master's and PhD in computer science, where she shifted her focus to cybersecurity. Malek now develops and applies new AI techniques to solve security problems at Accenture. We thank Malek for sharing her story with us.
86
Voices of Data ProtectionShowLinkBalancing data protection and productivity in the digital eraIn this episode we discuss how the pandemic and remote work have accelerated the need for compliance and how organizations are navigating this new landscape. The explosion of data combined with remote work brings to the forefront the need for automated solutions that leverage the best of machine learning and artificial intelligence to protect data and stay compliant while enabling productivity.

Click here for a transcript of this episode.

Bhavanesh Rengarajan | LinkedIn [host]

Rudra Mitra | LinkedIn [guest]

Microsoft Security | YouTube | Twitter | LinkedIn

Microsoft Information Protection and Governance

Microsoft Tech Community Security and Compliance Blog

Subscribe to Voices of Data Protection at aka.ms/VoicesofDataProtection

Listen and subscribe to other Microsoft podcasts at aka.ms/microsoft/podcasts
ListenNov 12, 202014 minRudra MitraAI: protect data, stay compliant, enable productivity
87
Serious PrivacyPaul BreitbarthK RoyalShowLink
Data Science and Privacy - sugarcoated or straight up? It Depends (with Katharine Jarmul of Cape Privacy)
Privacy and data protection are not just a job for lawyers or professionals who specialize in privacy - not anymore. Technology plays an important role in ensuring personal data can remain private. Ensuring that personal data is secure but useful requires a level of skill found in data scientists.

In this episode of Serious Privacy, Paul Breitbarth and K Royal searched for just such a skilled individual, Katharine Jarmul, Head of Product at Cape Privacy and a data scientist. Cape Privacy is a New York-based company assisting others with machine learning, data security and adding value to data. Katharine explains what data science actually is, how to keep data private, useful and valuable at the same time, and how to create synthetic data appropriately. A big question when it comes to powerful technology is ethics, and how invested individual technologists are in the ethics of privacy.

Join us as we discuss these topics and more, such as GPT-3, “this person…
ListenNov 10, 202046 min
Katharine Jarmul, Cape Privacy
Cape Privacy
88
Cyberwire DailyShowLink
Supply chain security. New cyberespionage from OceanLotus. Data breaches expose customer information. And GCHQ has had quite enough of this vaccine nonsense, thank you very much.
Alerts and guidelines on securing the software supply chain (and the hardware supply chain, too). OceanLotus is back with its watering holes. Two significant breaches are disclosed. Malek Ben Salem from Accenture Labs explains privacy attacks on machine learning. Rick Howard brings the Hash Table in on containers. And, hey, we hear there’s weird stuff out there about vaccines, but GCHQ is on the case.
For links to all of today's stories check out our CyberWire daily news brief:
https://www.thecyberwire.com/newsletters/daily-briefing/9/217
ListenNov 9, 202025 minPrivacy attacks on machine learning
89
Smashing SecurityGraham CluleyCarole TheriaultShowLink203: Testing times, naming names, and the bald truth about AIStudents are being spied on as they do online exams, how did a televised football match reveal the truth about artificial intelligence, and what on earth is the Canny Lumpsucker vulnerability?
All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by computer security veterans Graham Cluley and Carole Theriault, joined this week by Thom Langford from The Host Unknown podcast.
Plus don't miss the second part of our featured interview with LastPass's Dalia Hamzeh.
Visit https://www.smashingsecurity.com/203 to check out this episode’s show notes and episode links.
Follow the show on Twitter at @SmashinSecurity, or on the Smashing Security subreddit, or visit our website for more episodes.
Remember: Subscribe on Apple Podcasts, or your favourite podcast app, to catch all of the episodes as they go live. Thanks for listening!
Warning: This podcast may contain nuts, adult themes, and rude language.
Theme tune: "Vinyl Memories" by Mikael Manvelyan.…
ListenNov 4, 202071 minThom Langford, The Host UnknownTesting times, naming names, and the bald truth about AI
90
CaveatDave BittnerBen YelinShowLinkIoT device risk: can legislation keep up?Dave wonders if Facebook’s Mark Zuckerberg isn’t asking Congress to throw him in the briar patch. Ben looks at the massive amount of data politicians gather on voters, our Listener on the Line came about when Dave was chatting with his pal Jason DeFillippo on Grumpy Old Geeks about facial recognition software, and later in the show, Dave has a conversation with Curtis Simpson, CISO at Armis, who shares his thoughts on some recent IoT legislative updates.
While this show covers legal topics, and Ben is a lawyer, the views expressed do not constitute legal advice. For official legal advice on any of the topics we cover, please contact your attorney. 
Links to stories:

How politicians target you: 3,000 data points on every voter, including your phone number

Facebook's Mark Zuckerberg calls for Section 230 reform, 'Congress should update the law'

Activists Turn Facial Recognition Tools Against the Police

Got a question you'd like us to answer on our show? You can send your audio f…
ListenNov 4, 202044 minGrumpy Old Geeks on facial recog
91
Cyber Task ForcePaul C DwyerShowLinkA Conversation with Mark Little - KinZenMark Little is an entrepreneur and journalist. He spent 20 years in broadcast news, as a reporter and presenter for RTE. He was the station’s first Washington Correspondent. In 2001, he won the Irish TV Journalist of the Year award for his reporting from Afghanistan. He was also anchor of the current affairs programme Prime Time. In 2010, he founded the world’s first social news agency Storyful, which was eventually sold to News Corp. He worked for Twitter, as Vice President for Media in Europe and Managing Director of its International Headquarters. In 2017, he co-founded Kinzen, which combines editorial skills and artificial intelligence to protect and promote quality information.ListenNov 3, 202063 minMark LittleKinzen: editorial skills, AI to protect/promote quality info
92
Privacy PathsHelena WoottonStewart DresnerA privacy and tech innovation win-win in the ICO’s regulatory sandboxInnovation and privacy are often regarded as incompatible. They were brought together for mutual advantage in the United Kingdom ICO’s regulatory sandbox. Onfido, which provides proof of identity using facial recognition, has now emerged from the sandbox as part of its first cohort. We discuss with Onfido's Director of Privacy and the ICO’s Head of Assurance how they assessed the risks and took the plunge. Find out how both sides have benefited and learned from their experience of this one-year programme. 

Participants: 

* Chris Taylor, Head of Assurance (Supervision), Information Commissioner’s Office 
* Neal Cohen, Director of Privacy, Onfido 
* Helena Wootton, Correspondent and Data Lawyer, Privacy Laws & Business 
* Stewart Dresner, Chief Executive, Privacy Laws & Business 

The article on Onfido’s rationale for entering the sandbox, published in the September 2019 edition of Privacy Laws & Business United Kingdom Report is available free of charge by e-mailing info@privacylaws.com.

If you’re interested in applying for the ICO’s Regulatory Sandbox, you can find more information on their website.
ListenOct 27, 202027 minChris Taylor, ICONeal Cohen, Onfido Proof of identity using facial recog
93
EDRM Global Podcast Network
Mary MackKaylee WalstadShowLinkIllumination Zone: Mary Mack & Kaylee Walstad Sit Down with Richard TromansRichard Tromans, founder, Artificial Lawyer and Legal Economist at Tromans Consulting 10/16/2020.ListenOct 27, 202025 minRichard TromansArtificial Lawyer
94
Law Bytes
Michael GeistShowLinkEpisode 67: Tamir Israel on Facial Recognition Technologies at the BorderFacial recognition technologies seem likely to become an increasingly common part of travel with scans for boarding passes, security clearance, customs review, and baggage pickup just some of the spots where your face could become the source of screening. Tamir Israel, staff lawyer at CIPPIC, the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic at the University of Ottawa, recently completed a major study on the use of facial recognition technologies at the border. He joins the podcast to discuss the current use of the technologies, how they are likely to become even more ubiquitous in the future, and the state of Canadian law to ensure appropriate safeguards and privacy protections.ListenOct 26, 202031 minTamir IsraelFacial Recognition Technologies at the Border
95
Data Privacy DetectiveJoe DehnerShowLinkEpisode 55 - Differential Privacy and Academic ResearchScience and knowledge advance through information gathered, organized, and analyzed. It is only through databases about people that social scientists, public health experts and academics can study matters important to us all. As never before, vast pools of personal data exist in data lakes controlled by Facebook, Google, Amazon, Acxiom, and other companies. Our personal data becomes information held by others. To what extent can we trust those who hold our personal information not to misuse it or share it in a way that we don’t want it shared? And what will lead us to trust our information to be shared for database purposes that could improve the lives of this and future generations, and not for undesirable and harmful purposes?

Dr. Cody Buntain, Assistant Professor at the New Jersey Institute of Technology’s College of Computing and an affiliate of New York University’s Center for Social Media and Politics discusses in this podcast how privacy and academic research intersect.

Facebook, Google, and other holders of vast stores of personal information face daunting privacy challenges. They must guard against unintended consequences of sharing data. They generally will not share or sell database access to academic researchers. However, they will consider and approve collaborative agreements that give academics access to information for study purposes. Such access can be designed to limit the ability to identify individuals, through techniques including encryption, anonymization, pseudonymization, and “noise” (efforts to block users from being able to identify individuals who contributed to a database).

“Differential privacy” is an approach to the issues of assuring privacy protection and database access for legitimate purposes. It is described by Wikipedia as “a system for publicly sharing information about a dataset by describing the patterns of groups within the dataset while withholding information about individuals in the dataset.” The concept is based on the point that it is the group’s information that is being measured and analyzed, and any one individual’s particular circumstances are irrelevant to the study. By eliminating the need for access to each individual’s identity, the provider of data through differential privacy seeks to assure data contributors that their privacy is respected, while providing to the researcher a statistically valid sample of a population. Differentially private databases and algorithms are designed to resist attacks aimed at tracing data back to individuals. While not foolproof, these efforts aim to reassure those who contribute their personal information to such sources that their private information will only be used for legitimate study purposes and not to identify them personally and thus risk exposure of information the individuals prefer to keep private.
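
As a minimal, purely illustrative sketch of the idea (not any tool discussed in the episode), a differentially private count adds noise calibrated to how much one person can change the answer. The dataset, epsilon value, and function names below are hypothetical.

import random

def laplace_noise(scale: float) -> float:
    # The difference of two independent Exponential(1/scale) draws
    # follows a Laplace(0, scale) distribution.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float = 0.5) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes the true count by at most 1, so the noise scale is 1/epsilon.
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical example: roughly how many people in the dataset are over 40?
people = [{"age": 34}, {"age": 52}, {"age": 47}, {"age": 29}]
print(private_count(people, lambda p: p["age"] > 40, epsilon=0.5))

Smaller epsilon means more noise and stronger privacy; the group-level pattern remains usable while any single contributor's presence is obscured.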

“Data donation” is an alternative: individuals provide their own data to researchers for analysis. Some success has been achieved by paying people for their data, or by having a research entity collect data under an agreement with a group of participants. Both approaches offer limited protection, and each can introduce selection bias. Someone active in an illicit or unsavory activity will be reluctant to share information with any third party.

We leave “data traces” through our daily activity and use of digital technology. Information about us becomes 0’s and 1’s that are beyond erasure. There can be false positives and negatives. Algorithms can create mismatches, for example, a mistaken report from Twitter and Reddit identifying someone as a Russian disinformation agent.

If you have ideas for more interviews or stories, please email info@thedataprivacydetective.com.
ListenOct 25, 202024 minDr. Cody Buntain, New Jersey Institute of TechnologyDifferential Privacy & Bias
96
Cyberwire DailyShowLinkJust saying there are attacks is not enough. [Research Saturday]Ben-Gurion University researchers have developed a new artificial intelligence technique that will protect medical devices from malicious operating instructions in a cyberattack as well as other human and system errors. Complex medical devices such as CT (computed tomography), MRI (magnetic resonance imaging) and ultrasound machines are controlled by instructions sent from a host PC. Abnormal or anomalous instructions introduce many potentially harmful threats to patients, such as radiation overexposure, manipulation of device components or functional manipulation of medical images. Threats can occur due to cyberattacks, human errors such as a technician's configuration mistake or host PC software bugs.
As part of his Ph.D. research, Tom Mahler has developed a technique using artificial intelligence that analyzes the instructions sent from the PC to the physical components using a new architecture for the detection of anomalous instructions.
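
As a generic illustration only (not the architecture developed in this research), an anomaly detector for device instructions could learn a normal envelope for each command parameter from past safe instructions and flag values that fall outside it. The parameter names and numbers below are hypothetical.

def learn_ranges(history):
    # Learn a simple min/max envelope per parameter from past (safe) instructions.
    ranges = {}
    for instr in history:
        for param, value in instr.items():
            lo, hi = ranges.get(param, (value, value))
            ranges[param] = (min(lo, value), max(hi, value))
    return ranges

def is_anomalous(instruction, ranges, margin=0.1):
    # Flag any parameter outside the learned envelope (plus a small margin),
    # or any parameter the detector has never seen before.
    for param, value in instruction.items():
        if param not in ranges:
            return True
        lo, hi = ranges[param]
        slack = margin * (hi - lo)
        if value < lo - slack or value > hi + slack:
            return True
    return False

# Hypothetical example: past CT instructions vs. a suspiciously high dose.
history = [{"dose_mgy": 10, "slices": 64}, {"dose_mgy": 12, "slices": 128}]
ranges = learn_ranges(history)
print(is_anomalous({"dose_mgy": 400, "slices": 64}, ranges))  # True: dose far above learned range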
Joining us in this week's Research Saturday…
ListenOct 24, 202028 minAI technique to prot med devices fr malicious hacks
97
Machine EthicsBen ByfordShowLink47. Robot Rights with David GunkelThis episode we're chatting with David Gunkel on AI ideologies, why he wrote the Robot Rights book, what rights and categories of rights are, computer ethics and hitchBOT, anthropomorphising as a human feature, supporting environmental rights through this endeavour of robot rights, relational ethics, and acknowledging the Western ethical viewpoint.ListenOct 20, 202055 minDavid GunkelAI ideologies
98
Tripwire Cybersecurity PodcastTim ErlinFace off: Debating Facial Recognition with Thom Langford & Paul EdonRecovering CISO and Director of (TL)2 Security Thom Langford joins the show to debate Tripwire’s Paul Edon on facial recognition vs. security.

Oct 9, 2020Thom LangfordFacial recog vs. Security
99
Privacy AdvisorAngelique CarsonShowLinkThe Privacy Advisor Podcast: How to know who's tracking your dataAs a consumer, it can be really difficult to figure out who's tracking your data online. Many companies hide behind algorithms claiming they're the "secret sauce" to their business model, which sometimes frustrates regulators and laymen alike. That's why award-winning journalist Julia Angwin and investigative journalist Surya Mattu, both of the non-profit news organization The Markup, recently developed and released Blacklight, a website that allows users to scan any site for potential privacy violations, including what's being tracked and who's sharing your personal data. In this episode of The Privacy Advisor Podcast, Angwin and Mattu talk about the tool and why the team is passionate about user empowerment.ListenOct 9, 202038 minJulia Angwin, The MarkupSurya Mattu, The MarkupThe Markup & Blacklight
100
Machine EthicsBen ByfordShowLink46. Belief Systems and AI with Dylan Doyle-BurkeThis month we're chatting with Dylan Doyle-Burke of the Radical AI podcast about starting the podcast, new religions and how systems of belief relate to AI, faith and digital participation, digital death and memorial, what does it mean to be human, and much more...ListenOct 4, 202054 minDylan Doyle-BurkeHow systems of belief relate to AI