Machine Learning: Automated Improvement of Algorithms for Accurate Decisions - Essay Sample


Introduction

Machine learning is the study of computer algorithms that improve automatically with experience, and it is commonly categorised as a subset of artificial intelligence. A machine learning system builds a mathematical model from sample data, referred to as "training data", so that it can make accurate decisions without being explicitly programmed to do so (Bischl et al., 2016). It therefore uses algorithms to make informed decisions based on what it has already learned from the sample data: computers discover how to perform a particular task without task-specific programming. Machine learning is becoming one of the most significant prediction and decision-making tools across fields of practice ranging from medicine and law to public policy. For instance, political strategists use machine learning to plan their next moves, and machine learning systems enable police departments to predict cases of human trafficking and identify likely hotspots. This essay identifies what learned models predict and the instances in which we can, or cannot, trust the outcomes of their predictions.


Prediction in Learned Models

In machine learning, prediction refers to the output an algorithm produces when, after training on a historical dataset, it is applied to a new piece of data to forecast the likelihood of a specific outcome. For instance, a learned model can predict whether or not customers in a particular business will churn after a month (Christensen & Lyons, 2017). To do this, the algorithm estimates probable values of the unknown variable for every record in the new data, which, as Gupta, Aggarwal & Bareja (2018) explain, allows the model-builder to identify what the value will most likely be. More generally, learned models predict the likely answers to different questions based on historical data in a business or other field, including the likelihood of events such as fraudulent activity, which enables individuals to take the necessary actions based on the outcome.
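As a rough illustration of this churn scenario, the sketch below trains a classifier on a small table of historical customer records and then estimates churn probabilities for new, unlabelled records. The feature names, the toy values and the use of scikit-learn are assumptions made for the example; they do not come from the cited studies.

```python
# A minimal sketch of the churn-prediction example described above.
# All column names and values are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Historical ("training") data: customer attributes plus the known outcome.
history = pd.DataFrame({
    "monthly_spend":   [20, 95, 40, 10, 80, 15, 60, 25],
    "support_tickets": [0, 4, 1, 5, 3, 6, 0, 2],
    "months_active":   [24, 3, 12, 2, 6, 1, 30, 18],
    "churned":         [0, 1, 0, 1, 1, 1, 0, 0],
})

model = RandomForestClassifier(random_state=0)
model.fit(history.drop(columns=["churned"]), history["churned"])  # learn from history

# New records whose outcome is unknown; the model supplies probable values.
new_customers = pd.DataFrame({
    "monthly_spend":   [12, 70],
    "support_tickets": [5, 0],
    "months_active":   [2, 28],
})
print(model.predict_proba(new_customers)[:, 1])  # estimated probability of churn
```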

Instances in Which We Can Trust Machine Learning Models

Trust in and transparency of machine learning models are essential to ensure effective operations. To interact with machine learning models and trust their predicted outcomes, humans should consider various aspects that determine the effectiveness of these models. For instance, Azevedo, Raizer & Souza (2017) suggest that individuals need to understand how the system senses its environment, how it makes decisions and acts, and, more importantly, how it teams with humans. On the same note, humans should determine how the teaming strategies with the learned models change over time, based on different factors or objectives. Moreover, Christensen & Lyons (2017) explain that understanding how a machine learning model interacts with its environment enables people to know how it perceives data, the kinds of sensors it has and, more importantly, how it uses those sensors in predicting. It is expected that a model which relates well to its environment will also be able to communicate well with humans, and in this case individuals can trust the models along with the predictions they make.

Furthermore, humans should understand how different machine learning models make decisions. Getting a clear picture of how the models make decisions and how those decisions translate into actions will enable individuals to trust the models when making further decisions. According to research by Christensen & Lyons (2017), transparency methods in the form of decision rationale can increase trust in recommender models across various industries. There is also evidence that rationale can increase user trust in, and reliance on, a decision aid while reducing the need to verify the automation's recommendations (Azevedo, Raizer & Souza, 2017). When humans can understand the logic behind the recommendations of machine learning models, they can trust them along with the results of the predictions they make. Humans therefore need to understand how the decision logic of the different learned models changes and, more importantly, why it changes. Trust builds further if they can determine the particular conditions that drive a strategy change, the specific thresholds for those changes and the underlying assumptions of the specific models.
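One concrete, if simplified, way to expose such decision logic is to train an interpretable model and print its learned rules. The sketch below does this with a decision tree on a standard benchmark dataset; the dataset and depth limit are illustrative choices, not methods described by the cited authors.

```python
# A minimal sketch of a "decision rationale" in the spirit of the
# transparency discussion above: the model's logic is exported as
# readable if-then rules with explicit conditions and thresholds.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the learned decision logic so a human can inspect the
# conditions and thresholds that drive each recommendation.
print(export_text(tree, feature_names=list(data.feature_names)))
```

Restricting the tree's depth keeps the rule set short enough for a human to read, at the cost of some predictive accuracy.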

Additionally, as noted earlier, perhaps the most important condition for trusting machine learning is how specific models interact with humans and, more importantly, how the teaming strategy changes over time based on the state of the humans as well as situational constraints. The teaming strategy may, for instance, entail the division of labour between the machine and humans; it may also entail the intent that the system has towards humans and meaningful exposure between the two so that they can relate and work harmoniously. Christensen & Lyons (2017) point out that humans can trust the models once they realise that there is a meaningful division of labour between themselves and the machine learning models in accomplishing a specific task. More trust is built by determining both real-time and projected future teaming behaviour, to establish whether present behaviours can change based on various factors (Hainmueller & Mummolo, 2019). The model should therefore be able to visually represent the division of labour for the various tasks. This will enable both humans and the machine learning models to develop a shared awareness of the current as well as the future teamwork context.

It is also evident that advances in physiological assessment and intelligent algorithms will enhance trust between humans and machine learning models, as required by different situational demands. For example, as Christensen & Lyons (2017) point out, the U.S. Air Force has deployed an automated system based on learned models, the Automatic Ground Collision Avoidance System (AGCAS), on its F-16s. The system has proven able to take control from a pilot when it predicts an imminent crash with the ground, thereby avoiding the crash or at least reducing its impact. Humans can trust this machine learning system because it activates only at the last possible moment, which avoids nuisance activations as well as unnecessary interference with the pilot. According to Hainmueller & Mummolo (2019), the innovation of accounting for the pilot's perceived nuisance threshold drove much of the system's success, and it is this particular understanding that has since strengthened trust between the pilot and the model.

Moreover, there should also be an adequate understanding of the intent and consistency of the model towards humans. People need to comprehend the goals of the machine learning model and, more importantly, how the system prioritises various goals under a variety of constraints (Shokri, 2019). When humans understand goal prioritisation and how priorities fluctuate across different situations, they can develop trust in the models and therefore work with them efficiently.

Instances in Which Caution is Required with Machine Learning Models

With appropriate training, machine learning models can yield systems that are robust and able to meet and even exceed human abilities. However, as Shokri (2019) points out, the training process can at times have unpredictable results, producing inexplicable behaviour often described as a black box. Humans therefore need to be cautious when dealing with these models. Specifically, there are three main instances in which machine learning models can create challenges for human beings and in which caution is needed (Hainmueller & Mummolo, 2019). First, a machine learning model can adopt behaviours that are extremely difficult to comprehend and to condense into basic if-then statements; lacking a shared semantic space, the developed models may have difficulty communicating with humans. Indeed, what humans see as an error may be entirely logical to the machine, which calls for extensive caution.
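To illustrate how hard it can be to condense a model's behaviour into if-then statements, the sketch below fits a small "surrogate" decision tree to a neural network's predictions and measures how much of the network's behaviour the simple rules reproduce. The dataset, model sizes and the surrogate-tree technique itself are assumptions chosen for illustration; they are not taken from the essay's sources.

```python
# A minimal sketch of the "black box" point above: approximate a neural
# network with a shallow, human-readable tree and check the fidelity.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
black_box.fit(X, y)

# Train a small, readable tree to imitate the black box's outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Share of black-box decisions the simple rules reproduce: {fidelity:.2%}")
```

A low fidelity score would indicate that no short set of if-then rules captures what the network is actually doing, which is the kind of opacity the paragraph above warns about.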

Second, a real error in the model may be difficult for humans to identify, especially if they do not understand the basis on which the model makes credible decisions (Shokri, 2019). Third, a machine learning model may demonstrate dynamic behaviour that challenges the notion of predictability; if this is not recognised, humans may misunderstand the model and fail to relate to it. Humans should therefore understand that machine learning may never be absolute and entirely knowable. Yet these models are trusted to provide users with predictable results that are meant to reduce uncertainty, increase the level of rationale and share lessons learnt through informal and informational networks (Bischl et al., 2016).

Conclusion

Machine learning has become one of the most widely applied approaches across fields in the modern world, including medicine, business and data science, among others. Machine learning systems enable the prediction of different situations in these areas and thereby help anticipate what the future holds. As identified in the discussion, trust depends on aspects such as intent and consistency: there is a need to fully understand the logic of the machine learning model, how it makes decisions, its understanding of the environment, its relationship with human beings and its underlying assumptions. In these instances, human beings can relate to and trust machine learning models effectively. However, as identified in the paper, despite their practical use and interaction with human beings, caution is needed for three main reasons: errors in the models can be difficult to identify; the models can be challenging to understand, which leads to difficulty in communication between humans and the systems; and their dynamic behaviour can test the notion of predictability and distort the relationship. Humans should therefore understand that machine learning models can be beneficial but at times pose challenges, and they need to be cautious.

References

Azevedo, C. R., Raizer, K., & Souza, R. (2017, March). A vision for human-machine mutual understanding, trust establishment, and collaboration. In 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA) (pp. 1-3). IEEE.

Bischl, B., Lang, M., Kotthoff, L., Schiffner, J., Richter, J., Studerus, E., ... & Jones, Z. M. (2016). mlr: Machine Learning in R. The Journal of Machine Learning Research, 17(1), 5938-5942.

Christensen, J. C., & Lyons, J. B. (2017). Trust between humans and learning machines: developing the gray box. Mechanical Engineering, 139(06), S9-S13.

Gupta, A., Aggarwal, A., & Bareja, M. (2018). A Review on Classification Using Machine Learning. International Journal of Information Systems & Management Scien...
