By Tobias Straube

Explainability of AI output: Revisiting our understanding of morality

Updated: Oct 8, 2018

DARPA researchers want to know how and why machine learning algorithms get it wrong.

Certainly, there are some very detrimental consequences of inexplicability. What is the morality of transparency, and of opacity? In which domains do we need public or multi-stakeholder transparency to mitigate errors, biases, abuse, or manipulation? Conversely, are there areas where it is justifiable that explanatory functions remain in the hands of only one party? Consider public health decisions where mass panic is a risk, individual or group identity issues where psychological tensions could arise, competitive decisions in corporate strategy, national security or intelligence analysis, crime investigations, international trade negotiations, dating, trading, or shopping agents, personal bouncer bots, and so on. Are the rules for the AI age different from those of the pre-AI era, because AI penetrates our lives more deeply and pervasively, and with more second- and third-order unintended consequences?

Check the full article here.


