Table of Contents
Proposal for Deployment
Ethical, Professional, Governance and Regulatory Considerations of Deployment
Deployment and Success Criteria
The deployment of AI has had a transformative impact across many sectors of society. Industries and organizations use AI both to serve their business needs and to innovate their day-to-day processes. AI and ML techniques are widely used in an emerging phenomenon known as Business Model Innovation (BMI), which seeks to implement new techniques and methodologies that advance the business model in line with growing standards, raising productivity through a stronger integration of technology and business (Katsamakas and Pavlov 2019). Among the many firms investing in AI, one sector that has benefited from the technology is the actuarial sector. Actuarial consultants serve society by advising on insurance, investments and other financial decisions through various measurement processes. The sector relies on a range of statistical tools, large volumes of data and contingency plans to formulate the plans best suited to its clients. Actuarial firms perform data analytics on different data sets built around hypothetical scenarios for the trends or changes that can be expected in the future. AI and ML are likewise grounded in the processing of data sets, and both technologies have advanced rapidly toward human-like approaches to a variety of tasks. Implementing AI and ML in the actuarial sector can therefore support efficient and effective decision-making over far larger data sets than a human can process. What actuaries, AI and ML have in common is the extensive use of data sets to produce meaningful results.
Actuaries draw on statistics, finance and business models to help generate policies for organizations and to estimate the chances of exposure to unwanted events such as disability or loss of property. Actuarial science is expected to be revolutionised by the implementation of AI and ML. Other sectors that already rely on AI and ML in their business models likewise depend on the quality of the inputs fed to these technologies, and a credible actuarial process depends on it no less. With AI and ML, actuarial science can save the time human actuaries spend processing data sets and crunching numbers. Moreover, AI and ML can assist actuaries through enhanced analytical accuracy, effective decision-making, wide and in-depth risk assessment, the ability to resolve complex situations, a sounder understanding of business contexts, shorter response times to change, increased automation and reduced cost (Riley 2020). The nature of risk-based capital and other reporting requirements has left actuaries little option but to invest in technology. Over the years, automation in actuarial science has remained underexplored because of the risk attached to its outputs and the impact of ineffective decisions on clients. Most actuarial tasks have therefore remained manual: although automation has gained pace across other sectors, actuarial judgement is still applied at every step of the process, including data manipulation, the setting of assumptions and the selection of methodologies. The implementation of AI has been altering this landscape, helping the actuarial profession become a structured, consistent and unbiased way of performing tasks, with a substantial decrease in human intervention.
With the implementation of AI, professional actuaries would no longer need to untangle complex relationships between data sets manually or extract useful knowledge by processing data and crunching numbers. Instead, human actuaries can concentrate on analysis and on providing recommendations in their areas of expertise, such as marketing strategies for financial products and enterprise risk management (Yeo 2017).
Considering the advantages AI and data science have created for businesses, AI has had a transformational impact on institutions and on the general-purpose technologies they use. Rapid advances in AI and data science have raised expectations, at times unrealistic ones, even as machine learning, neural networks and other leading technologies help businesses reach new heights with their real capabilities (Brynjolfsson and McAfee 2017). AI can be embedded in many organizational processes, and various studies have examined its implementation in Business Model Innovation (BMI). According to Reim, Åström and Eriksson (2020), the implementation of AI for business model innovation can be divided into four steps. The first is understanding AI and its potential, alongside the organizational capabilities that need to be transformed through AI. The second is understanding the current business model, the potential for business model innovation and the role of the business ecosystem. The third revolves around developing and refining the capabilities needed for AI implementation, and the fourth covers the steps taken to reach organisational acceptance and to develop internal competencies. One of the most important tasks in actuarial science is the forecasting of interest rates, with applications including asset-liability management, the valuation of life and pension liabilities and capital modelling. As the example presented by Panlilio et al. (2018) shows, some financial institutions use machine learning algorithms to predict the sentiment of central financial committees and the decisions these bodies will take.
For example, the Bank of England takes its decisions on interest rates and quantitative easing through committee meetings. These meetings are documented in speeches, inflation reports and press releases that are open to the public. The documents are text based, unstructured and vary in complex ways, so machine learning techniques are well suited to performing sentiment analysis on them and predicting future movements in the base rate and quantitative easing. Machine learning models can be trained by annotating very long text documents, a task that would take humans a long time. According to Arras et al. (2017), one way to analyse text documents is through convolutional neural networks (CNNs). CNNs are widely used in natural language processing and have also been applied to images and other forms of input. CNN-based NLP rests on forming data matrices in which each row represents a word or a sentence. To analyse text documents, a tool such as word2vec can be used to generate vectors for the words fed into the system; the tool was created by Google and builds features from the input without human intervention. Once a pre-trained embedding is available, a CNN can be trained using convolutional layers, pooling layers, dropout layers and parametric rectified linear units (Ouyang, Zhou, Li and Liu 2015). Another application of data science in actuarial work is the use of machine learning for experience analysis through customer segmentation.
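The CNN-over-word-vectors idea can be sketched in a few lines. The fragment below is a minimal illustration, not a production model: the document matrix stands in for word2vec output, the random filters stand in for learned weights, and only the convolution, ReLU and max-over-time pooling steps of a CNN text classifier are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for word2vec output: each word of a document is
# mapped to a dense vector, giving a (num_words x embed_dim) matrix.
num_words, embed_dim = 10, 8
doc_matrix = rng.normal(size=(num_words, embed_dim))

def conv1d_over_words(doc, filters):
    """Slide each filter (window x embed_dim) over the word rows and
    apply a ReLU, as in a CNN text classifier."""
    n_filters, window, _ = filters.shape
    out = np.empty((n_filters, doc.shape[0] - window + 1))
    for f in range(n_filters):
        for i in range(out.shape[1]):
            out[f, i] = np.sum(doc[i:i + window] * filters[f])
    return np.maximum(out, 0.0)  # ReLU

def max_pool(feature_maps):
    """Max-over-time pooling: keep the strongest response per filter."""
    return feature_maps.max(axis=1)

# Four illustrative filters, each spanning a 3-word window.
filters = rng.normal(size=(4, 3, embed_dim))
features = max_pool(conv1d_over_words(doc_matrix, filters))
print(features.shape)  # one pooled feature per filter -> (4,)
```

In a trained network these pooled features would feed a dense layer that outputs the sentiment score; here they simply demonstrate how a document of variable length is reduced to a fixed-size feature vector.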
Insurance firms generate personalised, user-friendly insurance plans based on customer parameters such as age, location and financial sophistication. Data segments can be analysed to perform customer segmentation using K-means clustering on top of an RFM model. An RFM model is a marketing analysis tool that organizations use to identify loyal customers based on recency, frequency and monetary value: recency describes how recently a customer used a service, frequency describes how often the customer requests services, and monetary value describes how much the customer spends on them. The RFM model helps firms understand and predict the likelihood of repeat purchases by loyal customers and suggests ways to convert occasional customers into frequent ones. Pairing the model with K-means clustering yields quantitative insights. K-means clustering operates on raw data in which the rows of a matrix represent objects and the columns represent the quantitative characteristics of those objects, also known as clustering variables; clusters are then formed around different characteristics. Applied to actuarial science, each cluster can be based on the type of insurance and client details such as monetary value and insurance amount (Vohra et al. 2020).
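The RFM-plus-K-means pipeline described above can be sketched as follows. The client table is synthetic and the column meanings are assumptions for illustration; the clustering itself is a plain NumPy implementation of Lloyd's algorithm rather than a library call.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical RFM table: one row per client, columns = recency (days since
# last policy purchase), frequency (policies bought), monetary (premium paid).
rfm = np.vstack([
    rng.normal([300, 1, 200], [30, 0.5, 50], size=(20, 3)),   # lapsed, low-value
    rng.normal([20, 12, 5000], [5, 2, 500], size=(20, 3)),    # loyal, high-value
])

# Standardise each clustering variable so no single column dominates distance.
z = (rfm - rfm.mean(axis=0)) / rfm.std(axis=0)

def kmeans(x, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign each row to its nearest centroid,
    then recompute centroids as cluster means."""
    r = np.random.default_rng(seed)
    centroids = x[r.choice(len(x), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centroids) ** 2).sum(-1), axis=1)
        # Keep the old centroid if a cluster happens to empty out.
        centroids = np.array([x[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

labels, centroids = kmeans(z, k=2)
```

With two well-separated client groups, the two clusters recovered here correspond to the "lapsed, low-value" and "loyal, high-value" segments, which is exactly the kind of quantitative insight the RFM model is meant to surface.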
Ethics can be described as the collection of values and principles that address questions of right and wrong on a given issue: it lays down the reasons to act, or to refrain from acting, in a given situation. Ethical considerations can likewise be described as the process of addressing ethics at the individual and societal levels. Ethics in data science arises against several different backgrounds, and each step of the data-analysis process carries its own set of ethical concerns. Data-related challenges are among the leading ethical issues that arise during the collection and processing of input data; ethical considerations are required to provide data privacy, anonymity, accuracy and validity during the deployment of data science and AI, with data misuse one of the leading risks. Ethical considerations should also shape the setting up and running of analytical tools. They are needed to define steps and actions that reduce the chances of personal and group harm, and of model misuse and misinterpretation. One approach is to set up a suitably scoped model design. Data science is a wide field, and proper ethical considerations help ensure that the output of an analysis is sound by guiding the choice of algorithms and data sets (Saltz and Dewar 2019). Data governance has played a key role as the use of AI and data science, and the size of the data sets fed to their algorithms, have grown. One goal of incorporating data governance into data science projects is to provide data integrity: the accuracy and consistency of data sets over their entire lifecycle.
Data integrity describes the steps taken to ensure that the data sets used as inputs to AI and data science algorithms are accurate and cannot be tampered with. It aims to secure the data sets from the design phase onwards through a collection of rules, processes and standards. The types of integrity that can be established during the implementation of AI and data science include physical, logical, entity, referential and domain integrity, of which the most emphasised are physical and logical data integrity. Physical data integrity refers to the security of data during storage and retrieval: natural factors and human error can disrupt data sets and compromise their nature and usefulness. Logical data integrity refers to protecting data from corruption through human error: the creation or deletion of values can reduce the usefulness of a data set, so proper standards for its usage should be produced and maintained. AI and data science are deployed across many parts of society, including governmental projects. AI-based systems develop and upgrade constantly, which makes rules and regulations necessary both to secure the data sets used throughout the process and to monitor the use of AI and data science so that these technologies do not harm people or society. There is ongoing debate over the fact that AI and data science can lead to problematic conclusions if not handled according to industry standards. To avert such mishaps, many governments and organizations have introduced regulatory measures that set out the rules and standards to be followed throughout the deployment of AI and data science.
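The logical-integrity categories mentioned above translate naturally into simple validation checks. The sketch below is illustrative only: the table layouts and field names are assumptions, and a real data-governance process would run such checks within a proper data-quality framework.

```python
# Hypothetical client and policy tables for an insurance data set.
clients = [
    {"client_id": 1, "age": 42},
    {"client_id": 2, "age": 35},
]
policies = [
    {"policy_id": "P-100", "client_id": 1, "premium": 1200.0},
    {"policy_id": "P-101", "client_id": 2, "premium": 800.0},
]

def check_entity_integrity(rows, key):
    """Entity integrity: the primary key must be present and unique."""
    keys = [r[key] for r in rows]
    return all(k is not None for k in keys) and len(keys) == len(set(keys))

def check_referential_integrity(child, parent, fk, pk):
    """Referential integrity: every foreign key must match a parent key."""
    parent_keys = {r[pk] for r in parent}
    return all(r[fk] in parent_keys for r in child)

def check_domain_integrity(rows, field, lo, hi):
    """Domain integrity: values must fall inside the permitted range."""
    return all(lo <= r[field] <= hi for r in rows)

ok_entity = check_entity_integrity(clients, "client_id")
ok_ref = check_referential_integrity(policies, clients, "client_id", "client_id")
ok_domain = check_domain_integrity(clients, "age", 0, 120)
print(ok_entity, ok_ref, ok_domain)  # True True True
```

Running such checks before data ever reaches a model is one concrete way the "rules, processes and standards" of data governance guard against the accidental creation or deletion of values described above.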
Examples of such regulations include the GDPR and the CCPA. The GDPR, the General Data Protection Regulation set up by the European Union, lays down the laws for data protection and privacy in the European Union and the European Economic Area. Its primary aims are to give people control over the use of their personal data and to ease the regulatory environment for global businesses by unifying regulation within the union. The CCPA, the California Consumer Privacy Act of 2018, similarly defines the rules that give consumers control over the personal information that businesses may collect about them for analysis.
AI and data science are among the most rapidly advancing technologies and a popular choice both as a field of study and as a profession. They have helped shift modern computing from manual approaches to scientific and logical ones, through systems that can be developed with the capability to think and learn. They also require a set of skills to develop and manage effectively. These skills can be classified using ESCO, the European Skills, Competences and Occupations framework developed by the European Commission. Given the relevance of digital skills in the current ICT sector, the skills required to set up an AI and data science system can be grouped using ESCO into categories including information brokerage skills, basic ICT skills, ICT technical skills, thinking skills, social interaction, application of knowledge, and attitudes and values. Information brokerage skills cover the ability to use the tools the ICT sector provides to develop data science and AI models and to exchange data. Basic ICT skills cover the use of the sector's standard tools and systems, while ICT technical skills cover platforms and programming languages, in this case languages such as Python and environments such as RStudio. Thinking skills support the mental processing of complex problems, since data science and AI rest on core mathematical principles. Social interaction and the application of knowledge help transfer information between organizations and teams so that a range of approaches can be considered during the design phase (Colombo, Mercorio and Mezzanzanica 2019).
Similarly, the progress and risks of AI and data science deployments can be managed in various ways. Organizations and individuals aiming to avoid and mitigate the unintended consequences of AI and data science need to develop pattern-recognition systems for AI-related risks and should consider engaging the entire organization so that it is prepared for the power and the responsibilities that come with AI. The risks associated with AI deployment include risks relating to data, technology, lack of security, problems within models and interaction issues. The framework required to monitor safety and risk in AI and data science can be divided into two parts: a design-time framework and a run-time framework. The design-time framework can build on common hazard-analysis techniques such as fault tree analysis and failure mode and effects analysis to identify the risks of a project and form mitigation plans for them. The run-time framework covers the procedures required to ensure that the AI and data science model behaves as intended; it can be built around a safety backend that manages the safety profile, analyses the data logs and alerts the administration whenever the output deviates from what is desired (Osman, Kugele and Shafaei 2019). Metrics of success can then be used to study how effective the AI and data science model is and how close it comes to the desired result. The model's success can be analysed against three key metrics: performance against the primary goals that were set, performance in comparison to the user, and performance against the work system. Every AI and data science model is designed around primary goals, such as detecting certain features in an image or predicting a particular value.
In this case, the primary goal can be the accuracy of the predictions of economic parameters such as the interest rate. The user-centred performance metric measures how well the user can anticipate the desired output of the AI model, gauged by the speed and correctness of the model's predictions. Lastly, performance can be analysed with regard to the work system; this analysis may consider the degree of controllability with which the user can produce desired outputs from given inputs (Hoffman, Mueller, Klein and Litman 2018).
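A run-time safety backend of the kind described above can be sketched very simply: log each prediction, compare it against an expected band, and raise an alert on deviation. The class below is a hypothetical illustration; the thresholds, the rolling-log size and the alert mechanism are all assumptions, not part of the cited framework.

```python
from collections import deque

class RuntimeMonitor:
    """Tracks model outputs and flags deviations from an expected band,
    a minimal sketch of a run-time safety backend."""

    def __init__(self, expected_low, expected_high, window=5):
        self.low, self.high = expected_low, expected_high
        self.log = deque(maxlen=window)   # rolling data log for later analysis
        self.alerts = []                  # out-of-band predictions to report

    def observe(self, predicted_rate):
        """Record one prediction; return True while everything is in band."""
        self.log.append(predicted_rate)
        if not (self.low <= predicted_rate <= self.high):
            self.alerts.append(predicted_rate)  # alert the administration
        return len(self.alerts) == 0

# Illustrative use: interest-rate predictions expected between 0% and 10%.
monitor = RuntimeMonitor(expected_low=0.0, expected_high=0.10)
for rate in [0.01, 0.015, 0.02, 0.18]:   # the last prediction deviates
    ok = monitor.observe(rate)
print(ok, monitor.alerts)  # False [0.18]
```

The same rolling log also supports the success metrics discussed above: comparing logged predictions against realised values measures performance on the primary goal, while the alert history indicates how controllably the model behaves within the work system.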
References
Katsamakas, E. and Pavlov, O.V. 2019. AI and Business Model Innovation: Leveraging the AI Feedback Loop. SSRN. DOI: http://dx.doi.org/10.2139/ssrn.3554286
Riley, J. 2020. AI and Machine Learning Usage in Actuarial Science. B.S. Thesis. Buchtel College of Arts and Sciences (BCAS). Available at: https://ideaexchange.uakron.edu/honors_research_projects/1081/
Yeo, N. 2017. Actuarial profession in the age of artificial intelligence and process automation. [Online]. Available at: https://www.soa.org/news-and-publications/newsletters/innovators-and-entrepreneurs/2017/november/ei-2017-iss-61/actuarial-profession-in-the-age-of-artificial-intelligence-and-process-automation/. [Accessed on 24 Oct 2020].
Brynjolfsson, E. and McAfee, A. 2017. The business of artificial intelligence. Harvard Business Review. pp.1-20.
Reim, W., Åström, J. and Eriksson, O. 2020. Implementation of Artificial Intelligence (AI): A Roadmap for Business Model Innovation. AI. 1(2). pp.180-191.
Panlilio, A., Canagaretna, B., Perkins, S., du Preez, V. and Lim, Z. 2018. Practical application of machine learning within actuarial work by modelling, analytics and insights in data working party. [Online]. Available at: https://www.actuaries.org.uk/system/files/field/document/Practical%20Application%20of%20Machine%20Learning%20within%20Actuarial%20Work%20Final%20%282%29_feb_2018.pdf
Arras, L., Horn, F., Montavon, G., Müller, K.R. and Samek, W. 2017. "What is relevant in a text document?": An interpretable machine learning approach. PLoS ONE. 12(8). e0181142.
Ouyang, X., Zhou, P., Li, C. H. and Liu, L. 2015. Sentiment Analysis Using Convolutional Neural Network. 2015 IEEE International Conference on Computer and Information Technology. DOI: 10.1109/CIT/IUCC/DASC/PICOM.2015.349
Vohra, R., Pahareeya, J., Hussain, A., Ghali, F. and Lui, A. 2020. Using Self Organizing Maps and K Means Clustering Based on RFM Model for Customer Segmentation in the Online Retail Business. International Conference on Intelligent Computing. pp. 484-497.
Saltz, J.S. and Dewar, N. 2019. Data science ethical considerations: a systematic literature review and proposed project framework. Ethics and Information Technology. 21(3). pp.197-208.
Colombo, E., Mercorio, F. and Mezzanzanica, M. 2019. AI meets labor market: Exploring the link between automation and skills. Information Economics and Policy. 47. pp.27-37.
Osman, M.H., Kugele, S. and Shafaei, S. 2019. Run-Time Safety Monitoring Framework for AI-Based Systems: Automated Driving Cases. 2019 26th Asia-Pacific Software Engineering Conference (APSEC). pp. 442-449.
Hoffman, R.R., Mueller, S.T., Klein, G. and Litman, J. 2018. Metrics for explainable AI: Challenges and prospects. arXiv:1812.04608v2.