One machine learning application with ethical implications is self-driving cars. Autonomous vehicles use machine learning algorithms to analyze sensor data, such as camera feeds, in order to make complex navigation decisions without human intervention. This application carries practical risks, however. Errors in these algorithms can cause accidents if the algorithms are not properly tested before being released into the real world; responsibility for such accidents could fall on the humans who created the algorithms. These accidents can harm passengers and other road users and damage external property. There are also ethical conflicts between safety and efficiency; for example, an automation algorithm could prioritize saving time over safety, leading to unnecessarily aggressive driving. These issues can be mitigated; one approach is to establish concrete ethical guidelines for the creation and release of autonomous vehicles. The algorithms should prioritize safety over efficiency and should be transparent. The guidelines should be developed by multiple stakeholders, including regulators, manufacturers, and the public; incorporating the interests of multiple parties would ensure the solution reflects societal concerns. Implementing this solution would require technical innovation, which is integral to developing safe and effective algorithms, as well as the political will to establish the aforementioned ethical guidelines and multi-party cooperation to ensure all involved groups work toward responsible deployment of these vehicles and the development of reliable, well-tested algorithms. -- One real-world problem that could be solved by implementing a linear perceptron algorithm is spam email classification. 
The goal would be to predict whether an email is spam. The perceptron would predict a class label, either "spam" or "not spam." Features used to train the perceptron could include the presence of certain keywords, the identity of the sender, the repetition of certain words, and the contents of the email footer. The weight vector w would initially be filled with small random values. The dimension of the decision boundary depends on the number of features: with two features it is a line in 2D space, with three features a plane in 3D space, and in general a hyperplane of one dimension less than the feature space. A linear perceptron would make it possible to classify an email as spam or not based on these features and characteristics.
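The classifier described above can be sketched in a few lines of Python. This is a minimal illustration, not a production filter: the feature values and training examples are invented toy data (keyword count, a suspicious-sender flag, and a word-repetition count), labels are encoded as +1 for spam and -1 for not spam, and the weights start as small random values as described.

```python
import random

def dot(w, x):
    """Inner product of a weight vector and a feature vector."""
    return sum(wi * xi for wi, xi in zip(w, x))

def train_perceptron(X, y, epochs=20, lr=1.0):
    """Train a linear perceptron; labels in y are +1 (spam) or -1 (not spam)."""
    random.seed(0)
    w = [random.uniform(-0.01, 0.01) for _ in X[0]]  # small random initial weights
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            # A point is misclassified when the label disagrees with the score's sign
            if yi * (dot(w, xi) + b) <= 0:
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]  # perceptron update
                b += lr * yi
    return w, b

def predict(w, b, x):
    """Return +1 ("spam") if the point falls on the positive side of the hyperplane."""
    return 1 if dot(w, x) + b > 0 else -1

# Hypothetical feature vectors: [keyword count, suspicious-sender flag, repetition count]
X = [[3, 1, 4], [0, 0, 1], [5, 1, 6], [1, 0, 0]]
y = [1, -1, 1, -1]  # +1 = spam, -1 = not spam

w, b = train_perceptron(X, y)
```

Because this toy data is linearly separable, the perceptron converges after a few epochs; with three features, the learned boundary `dot(w, x) + b = 0` is a plane in 3D space, matching the description above.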