Published: 2024-06-29


Journal of Data Science and Information Technology

ISSN 2998-3592

Cloud-Driven Big Data Adaptation: Leveraging Distributed Systems

Authors

  • Prabhakar Vagvala, Alcon, Director and Head of Global ERP Operations, Dallas-Fort Worth Metroplex, USA

Keywords

Big Data Adaptation, DEMATEL method

Abstract

Big Data adaptation, the use of vast and diverse datasets to gain insights, make informed decisions, and foster innovation, has become a paramount focus across sectors. The exponential growth of data from sources such as social media, sensors, IoT devices, and online platforms has prompted organizations to recognize the value of harnessing this data for meaningful purposes. By leveraging advanced technologies such as data analytics, machine learning, and artificial intelligence (AI), businesses can extract valuable insights and patterns from the extensive pool of available data, enabling them to enhance operational efficiency, elevate customer experiences, and discover new avenues for growth and development. Effective Big Data adaptation, however, also raises challenges of data quality, privacy, scalability, and ethics that demand strategic approaches and solutions. A study of Big Data adoption therefore offers critical insight into how organizations can analyse, understand, and use massive amounts of data to generate new ideas, improve decision-making, and gain competitive advantage, and it prepares the way for developments in industries such as healthcare, banking, and advertising, creating an information-driven future with considerable economic and societal potential. The DEMATEL method is a valuable instrument in this setting because it offers a structured way to understand and evaluate complex linkages and dependencies. It provides an organised framework for examining the cause-and-effect relationships among the elements involved in the adaptation process, where the sheer quantity and variety of data is enormous. By employing the DEMATEL process, researchers can pinpoint the critical factors, such as data quality, infrastructure requirements, organisational readiness, and skill sets, that affect the successful application of Big Data strategies. The technique enables a thorough analysis of the interdependencies among these factors, assisting in the prioritisation and treatment of the pressing problems that can affect the adoption of Big Data initiatives, and it yields well-informed plans for using Big Data to its full potential across domains and sectors.
The factors evaluated are Compatibility, Perceived benefits, Technology resources, Security and privacy, and Trialability. Using the DEMATEL method, Compatibility obtained the first rank and Trialability the last.

Introduction

Successful training of machine learning models in the context of Big Data adaptation requires effective data representation. Machine learning algorithms perform better when given well-designed data representations, enabling them to identify patterns in large datasets; to maximize the effectiveness of artificial intelligence in Big Data settings, a robust representation of the data is therefore vital [1]. Deep adaptive computational modeling has shown remarkable potential for extracting significant characteristics from industrial information in an Internet of Things (IoT) ecosystem. However, this approach struggles to cope with the growth of industrial big data because of its largely static nature, and in commercial contexts the ability to efficiently learn features from expanding and changing datasets is essential [2]. Big data and machine learning are two closely related technologies whose expansion reinforces each other, and their integration appears unavoidable as businesses pursue intelligent automation of their decision-making processes. The fusion of big data, analytics, and machine learning illustrates the natural progression of these disciplines; by leveraging this connection, businesses can gain useful insights, make informed decisions, and maximize the value of their big data assets [3]. The saying that information is wealth holds true today, as data and information play significant driving roles: networking sites rely heavily on user data for a variety of purposes, and navigating this data-driven environment highlights the significance of big data consumption and the need for efficient management, ethical conduct, and responsible data usage [4]. Healthcare, legal analytics, and smart environments are just a few areas where big data technologies such as Hadoop and Spark are becoming increasingly popular; these technologies enable businesses to manage enormous datasets and analyze them for insight and better decision-making [5]. The amount, speed, and diversity of security event data are increasing, posing challenges for traditional security measures such as intrusion prevention systems (IPS) and malware detection systems (MDS); big data approaches must be employed to analyze and identify complex cyber threats in real time [6]. The exponential growth of data output from sources including smart cities, the Internet of Things (IoT), academic modeling, and big data models has propelled Big Data to the forefront of research. The volume and complexity of the gathered data present both benefits and challenges for businesses, and adapting to Big Data requires innovative techniques and technological advances to efficiently collect, process, analyze, and extract valuable insights from this vast and diverse information landscape [7]. The adoption of big data also contributes to cost-effectiveness and sustainability by optimizing energy consumption, satisfying service level agreements (SLAs) with customers while reducing operating expenses [8]. In recent years, big data has significantly transformed business operations and decision-making processes; diverse datasets provide valuable information, such as trends in application development and connections among systems, with which businesses can innovate and make informed decisions in the age of Big Data [9].
Big data adaptation advances astronaut well-being, facilitates scientific advances, and enhances medical treatment in spaceflight [10]. The adoption of big data has revolutionized clinical research and information technology, enabling collaboration between the biological sciences and medical organizations [11]. The adaptation of big data is crucial for the development of automated production and Industry 4.0, as it allows Industrial Internet of Things (IIoT) tools and infrastructure to be integrated into manufacturing facilities [12]. Power dynamics have shifted in the modern era, moving away from traditional variables such as money, wealth, and territory towards information and expertise; in light of this development, the analysis and application of big data have become increasingly important, yet organizations often struggle to fully exploit the possibilities of big data, which limits their advantages [13]. Big Data has become a significant and highly relevant issue in the software sector. Prominent companies such as Google, Facebook, and Amazon have demonstrated its immense potential for reshaping contemporary business paradigms, and beyond the software sector several other industries recognize the significance of their data and how it has transformed their operations; the financial sector, in particular, has long relied on Big Data for insightful analysis and agile decision-making [14]. Big Data and technology are constantly evolving, and supply chain management faces challenges in demand management and manufacturing; while computer science and engineering make major contributions, further study is needed in production and operations management [15].

Materials and Methods

Compatibility: In the context of big data adaptation, compatibility refers to the capacity of various infrastructures, technologies, or databases to operate efficiently and effectively together. To facilitate seamless data exchange, processing, and examination, big data analytics requires that all of the infrastructure, tools, and methodologies employed are compatible with one another. Because it enables effective data exchange, interoperability, and collaboration across many platforms and sources, compatibility plays a crucial role in boosting the significance of, and the insights gained from, big data.

Perceived benefits: In the overall scheme of big data adaptation, perceived benefits are the advantages that people or organisations believe they can obtain from employing big data. Big data analytics is credited with several advantages, such as the capacity to make better decisions, run processes more efficiently, generate more revenue, obtain richer consumer insights, and spot patterns and trends that were previously hard to detect. Big data adaptation has the capacity to unearth valuable insights and stimulate innovation across a wide range of fields, resulting in improved outcomes and a competitive edge for organisations and individuals alike.

Technology resources: Technical resources are necessary for big data adaptation to succeed. These consist of the facilities, equipment, and skills required to manage massive volumes of data, including processing architectures, storage options, analytics systems, and data management methods. With the right technical resources in place, companies can use big data to collect insightful information and make intelligent choices.

Security and privacy: Adoption of big data requires a commitment to security and privacy. Sensitive information is protected by strict privacy laws and security procedures, and the security and integrity of information are ensured through encryption, access limitations, and secure data storage. Adherence to privacy laws protects private data, and keeping privacy and security first builds trust and reduces risk.

Trialability: Trialability is the ability to test and assess different approaches and technological advances. It includes the freedom to experiment with and evaluate different data analysis methods, tools, and platforms to see how well they perform in producing the desired results. Trialability lets companies experiment with and refine their big data plans, making alterations and improvements based on actual testing. By embracing experimentation, businesses can use the potential of big data to develop, upgrade operations, and make wise decisions that lead to advantageous results.

DEMATEL Method: When applied in the context of adopting Big Data, the DEMATEL approach proves to be a powerful tool for structural modeling and acquiring collective knowledge. It enables a deeper understanding of the cause-and-effect relationships within a large data environment by visually displaying causal linkages between subsystems through a causal diagram [16]. The approach has proven effective in examining the interactions among the components of a complex system, providing a comprehensive understanding of the key driving variables and their impact on the system using charts and influence-relation diagrams; visual representation of the relationships, along with numerical representation of impact and direction, improves the evaluation of each component's influence and direction in a big data environment [17]. To incorporate the weight of evidence in Big Data adaptation, a novel technique based on this approach has been proposed: it establishes a total-relation matrix, calculates prominence and importance, and employs Dempster's rule to obtain a weighted-average composite result, handling conflicting evidence while reducing computational complexity [18]. In supply chain management, the fuzzy DEMATEL method is used to assess supplier performance and improve decision-making; professionals in the electronics industry are surveyed by questionnaire, and the process is enhanced with the aid of big data [19]. The approach can also map complex networks and create conceptual representations, facilitating the examination of interrelationships; categorizing components into cause and effect groups simplifies the analysis, particularly in the context of Big Data, where it can be adapted to analyze complex, data-rich systems and provide valuable insights into how various factors interact [20]. The DEMATEL method, developed at the Battelle Memorial Institute, is an algorithmic approach for analyzing and solving complex problems, and in the context of Big Data it offers valuable insights for informed decision-making [21]. Incorporating fuzzy set theory into the approach, which is well known for studying causal links between complex elements, further supports decision-making in fuzzy contexts; combined with Big Data, this adaptation provides a potent and comprehensive method for modeling and analyzing complex structures [22]. Big Data adaptation, together with the DEMATEL technique, also plays a significant role in addressing environmental concerns and promoting efficient supply chain management [23]. The approach focuses on analyzing the logical interactions and direct-effect relationships between specific criteria using mathematical tools; by harnessing the power of big data, it helps capture experts' perspectives on interconnected components and criteria, leading to viable solutions and well-informed decisions through the visualization of structured models [24]. The method has its roots in the Science and Human Affairs Program of the Battelle Memorial Institute and serves as a valuable tool for multi-criteria decision making [25].
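To make the procedure concrete, the following is a minimal sketch of the DEMATEL computation in Python, applied to the direct-relation matrix reported in Table 1 below. The normalization convention (dividing by the largest row sum) and the closed-form total relation matrix are assumptions consistent with standard DEMATEL practice, so individual figures may differ slightly from the reported tables depending on rounding and the exact variant used.

    import numpy as np

    # Direct-relation matrix from Table 1 (order: Compatibility, Perceived benefits,
    # Technology resources, Security and privacy, Trialability).
    A = np.array([
        [0, 3, 4, 5, 3],
        [3, 0, 4, 2, 3],
        [5, 3, 0, 3, 2],
        [4, 3, 2, 0, 2],
        [2, 1, 3, 4, 0],
    ], dtype=float)

    # Step 1: normalize by the largest row sum (15 here) to obtain X (Table 2).
    X = A / A.sum(axis=1).max()

    # Step 2: total relation matrix T = X (I - X)^-1; the matrix inverse is the
    # MINVERSE step reported in Table 6.
    I = np.eye(len(A))
    T = X @ np.linalg.inv(I - X)

    # Step 3: Ri (row sums, influence given) and Ci (column sums, influence received)
    # feed the prominence (Ri + Ci) and relation (Ri - Ci) measures used for ranking.
    Ri = T.sum(axis=1)
    Ci = T.sum(axis=0)
    print(np.round(Ri, 4), np.round(Ci, 4))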

    Results and Discussion

    Table 1. Big Data Adaption
    Compatibility Perceived benefits Technology resources Security and privacy Trialability Sum
    Compatibility 0 3 4 5 3 15
    Perceived benefits 3 0 4 2 3 12
    Technology resources 5 3 0 3 2 13
    Security and privacy 4 3 2 0 2 11
    Trialability 2 1 3 4 0 10

    Table 1 contains the dataset of Big Data Adaptation for the five factors: Compatibility, Perceived benefits, Technology resources, Security and privacy, and Trialability.

    Figure 1. Big Data Adaptation

    Figure 1 shows the values of the Big Data Adaptation dataset.

    Table 2. Normalisation of direct relation matrix
    Normalisation of direct relation matrix
      Compatibility Perceived benefits Technology resources Security and privacy Trialability
    Compatibility 0.0000 0.2000 0.2667 0.3333 0.2000
    Perceived benefits 0.2000 0.0000 0.2667 0.1333 0.2000
    Technology resources 0.3333 0.2000 0.0000 0.2000 0.1333
    Security and privacy 0.2667 0.2000 0.1333 0.0000 0.1333
    Trialability 0.1333 0.0667 0.2000 0.2667 0.0000

    Table 2 shows the normalisation of the direct relation matrix of Big Data Adaptation.
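    As a quick check, each entry of Table 2 is the corresponding Table 1 entry divided by 15, the largest row sum in Table 1 (the Compatibility row); for example, for the first row:

        row = [0, 3, 4, 5, 3]                   # Compatibility row of Table 1
        print([round(v / 15, 4) for v in row])  # [0.0, 0.2, 0.2667, 0.3333, 0.2]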

    Figure 2. Normalisation of the direct relation matrix of Big Data Adaptation

    Table 3. Identity Matrix
    I (Identity Matrix)
    1 0 0 0 0
    0 1 0 0 0
    0 0 1 0 0
    0 0 0 1 0
    0 0 0 0 1

    Table 3 gives the Identity Matrix

    Table 4. Values of the Direct Relation Matrix
    Y
    0 0.2 0.266667 0.333333 0.2
    0.2 0 0.266667 0.133333 0.2
    0.333333 0.2 0 0.2 0.133333
    0.266667 0.2 0.133333 0 0.133333
    0.133333 0.066667 0.2 0.266667 0
    Table 5. The values of (I - Y)
    I-Y
    1 -0.2 -0.26667 -0.33333 -0.2
    -0.2 1 -0.26667 -0.13333 -0.2
    -0.33333 -0.2 1 -0.2 -0.13333
    -0.26667 -0.2 -0.13333 1 -0.13333
    -0.13333 -0.06667 -0.2 -0.26667 1
    Table 6. Values of (I - Y)^-1
    (I - Y)^-1
    2.0463 0.9609 1.1595 1.2896 0.9280
    1.0413 1.6623 1.0140 0.9869 0.8075
    1.2109 0.8950 1.8744 1.1165 0.8200
    1.0299 0.7919 0.8726 1.8162 0.7229
    0.8591 0.6291 0.8298 0.9454 1.5343

    Table 6 shows (I - Y)^-1. Here we used the MINVERSE function to find these values.
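    The MINVERSE step can also be reproduced outside a spreadsheet; a minimal sketch with NumPy, using the Y matrix of Table 4 (the rounded output should be close to Table 6):

        import numpy as np

        # Normalised direct relation matrix Y (Table 4).
        Y = np.array([
            [0.000000, 0.200000, 0.266667, 0.333333, 0.200000],
            [0.200000, 0.000000, 0.266667, 0.133333, 0.200000],
            [0.333333, 0.200000, 0.000000, 0.200000, 0.133333],
            [0.266667, 0.200000, 0.133333, 0.000000, 0.133333],
            [0.133333, 0.066667, 0.200000, 0.266667, 0.000000],
        ])

        # Spreadsheet equivalent of MINVERSE(I - Y).
        inv_IY = np.linalg.inv(np.eye(5) - Y)
        print(np.round(inv_IY, 4))  # first row approximately 2.0463, 0.9609, 1.1595, 1.2896, 0.9280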

    Table 7. Total Relation Matrix (T)
    Total Relation Matrix
      Compatibility Perceived benefits Technology resources Security and privacy Trialability
    Compatibility 0 0.2 0.266667 0.333333 0.2
    Perceived benefits 0.2 0 0.266667 0.133333 0.2
    Technology resources 0.333333 0.2 0 0.2 0.133333
    Security and privacy 0.266667 0.2 0.133333 0 0.133333
    Trialability 0.133333 0.066667 0.2 0.266667 0

    Table 7 denotes the Total Relation Matrix of Big Data Adaptation for Compatibility, Perceived benefits, Technology resources, Security and privacy, and Trialability.
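    In the standard DEMATEL formulation, the total relation matrix is computed from the normalised matrix X of Table 2 as T = X(I - X)^-1, which equals (I - X)^-1 (Table 6) minus the identity matrix; a small helper illustrating that definition (the function name is illustrative):

        import numpy as np

        def total_relation(X):
            """Standard DEMATEL total relation matrix: T = X (I - X)^-1 = (I - X)^-1 - I."""
            I = np.eye(len(X))
            return X @ np.linalg.inv(I - X)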

    Figure 3. Total Relation Matrix

    Figure 3 shows the values of the Total Relation Matrix of Big Data Adaptation.

    Table 8. Total Relation Matrix T Ri, Ci
      Ri Ci
    Compatibility 7.6018 5.1875
    Perceived benefits 6.3064 3.9393
    Technology resources 6.8534 4.7503
    Security and privacy 5.7725 5.1545
    Trialability 5.2774 3.8127

    Table 8 shows the “Total Relation Matrix T Ri, Ci”. These values are the “Sum of Rows (Ri) and Sum of columns (Ci).”

    Figure 8. Total Relation Matrix T, Ri, Ci.

    Figure 8 shows that Compatibility is higher than the other factors in both Ri and Ci.

    Table 9. Calculation of (Ri+Ci) and (Ri-Ci) to get the cause and effect
      Ri + Ci   Ri - Ci   RANK   Identity
    Compatibility 12.7894 2.4143 1 Cause
    Perceived benefits 10.2457 2.3671 4 Cause
    Technology resources 11.6037 2.1031 2 Cause
    Security and privacy 10.9271 0.6180 3 Cause
    Trialability 9.0900 1.4647 5 Cause
    Table 10. Rank of the Parameters of Big Data Adaption
    Rank
    Compatibility 1
    Perceived benefits 4
    Technology resources 2
    Security and privacy 3
    Trialability 5

    Table 10 shows the ranks of the parameters of Big Data Adaptation. Here, Compatibility is placed first.
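    The entries in Tables 9 and 10 follow directly from the Ri and Ci values reported in Table 8: prominence is Ri + Ci, the cause/effect indicator is Ri - Ci (positive values fall in the cause group), and factors are ranked by prominence. A short check using the reported sums:

        # Ri and Ci as reported in Table 8.
        ri_ci = {
            "Compatibility":        (7.6018, 5.1875),
            "Perceived benefits":   (6.3064, 3.9393),
            "Technology resources": (6.8534, 4.7503),
            "Security and privacy": (5.7725, 5.1545),
            "Trialability":         (5.2774, 3.8127),
        }

        rows = [(name, ri + ci, ri - ci, "Cause" if ri > ci else "Effect")
                for name, (ri, ci) in ri_ci.items()]

        # Rank by prominence (Ri + Ci), largest first: Compatibility comes out on top,
        # Trialability last, matching Table 10.
        for rank, (name, prom, rel, group) in enumerate(sorted(rows, key=lambda r: -r[1]), 1):
            print(f"{rank}. {name:22s} Ri+Ci={prom:8.4f}  Ri-Ci={rel:+7.4f}  {group}")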

    Figure 4. Rank of the Big Data Adaptation parameters

    Figure 4 shows the ranks of the Big Data Adaptation parameters: Compatibility, Perceived benefits, Technology resources, Security and privacy, and Trialability. Compatibility is placed at the top and Trialability at the bottom by the DEMATEL method.

    Table 11. Alpha value of Big Data Adaption

    The threshold value (alpha) is calculated as the average of the matrix.
    Alpha 0.9138
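    Table 11 reports alpha as the average of the matrix. A minimal sketch of that thresholding step, assuming alpha is the mean of all entries of the total relation matrix and is then used to keep only the stronger influence links for the cause-effect diagram (the helper name and filtering rule are illustrative):

        import numpy as np

        def threshold_links(T, labels):
            """Keep influence links whose strength is at least alpha, the mean of T."""
            alpha = float(T.mean())
            links = [(labels[i], labels[j], round(float(T[i, j]), 4))
                     for i in range(len(labels))
                     for j in range(len(labels))
                     if i != j and T[i, j] >= alpha]
            return alpha, links

        # Usage: alpha, links = threshold_links(T, factor_names)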

    Conclusion

    Big Data adaptation, the use of vast and diverse datasets to gain insights, make informed decisions, and foster innovation, has become a paramount focus across sectors. The exponential growth of data from sources such as social media, sensors, and online platforms has prompted organizations to recognize the value of harnessing this data for meaningful purposes. By leveraging advanced technologies such as data analytics, machine learning, and artificial intelligence (AI), businesses can extract valuable insights and patterns from the available data, enabling them to enhance operational efficiency, elevate customer experiences, and discover new avenues for growth and development. Successful training of machine learning models in a Big Data setting requires effective data representation: algorithms perform better when given well-designed representations, enabling them to identify patterns in large datasets, so a robust representation of the data is vital for maximizing the effectiveness of artificial intelligence. The financial sector has long relied on Big Data for insightful analysis and agile decision-making. Big Data and technology are constantly evolving, and supply chain management faces challenges in demand management and manufacturing; while computer science and engineering make major contributions, further study is needed in production and operations management. The DEMATEL approach proves to be a powerful tool for structural modeling and acquiring collective knowledge, enabling a deeper understanding of the cause-and-effect relationships within a large data environment by visually displaying causal linkages between subsystems through a causal diagram. In conclusion, Compatibility is placed at the top and Trialability at the bottom by the DEMATEL method.

    References

    1. Najafabadi, Maryam M., Flavio Villanustre, Taghi M. Khoshgoftaar, Naeem Seliya, Randall Wald, and Edin Muharemagic. "Deep learning applications and challenges in big data analytics." Journal of Big Data 2, no. 1 (2015): 1-21.
    2. Li, Peng, Zhikui Chen, Laurence Tianruo Yang, Jing Gao, Qingchen Zhang, and M. Jamal Deen. "An incremental deep convolutional computation model for feature learning on industrial big data." IEEE Transactions on Industrial Informatics 15, no. 3 (2018): 1341-1349.
    3. Sassi, Imad, Sara Ouaftouh, and Samir Anter. "Adaptation of classical machine learning algorithms to big data context: problems and challenges: Case study: Hidden markov models under spark." In 2019 1st International Conference on Smart Systems and Data Science (ICSSD), pp. 1-7. IEEE, 2019.
    4. Balakrishnan, Nagaraj, Arunkumar Rajendran, and Karthigaikumar Palanivel. "Meticulous fuzzy convolution C means for optimized big data analytics: adaptation towards deep learning." International Journal of Machine Learning and Cybernetics 10 (2019): 3575-3586.
    5. Ullah, Faheem, and Muhammad Ali Babar. "An architecture-driven adaptation approach for big data cyber security analytics." In 2019 IEEE International Conference on Software Architecture (ICSA), pp. 41-50. IEEE, 2019.
    6. Ullah, Faheem, and M. Ali Babar. "QuickAdapt: scalable adaptation for Big Data cyber security analytics." In 2019 24th international conference on engineering of complex computer systems (ICECCS), pp. 81-86. IEEE, 2019.
    7. Sinaeepourfard, Amir, Jordi Garcia, Xavier Masip-Bruin, and Eva Marín-Torder. "Towards a comprehensive data lifecycle model for big data environments." In Proceedings of the 3rd IEEE/ACM International Conference on Big Data Computing, Applications and Technologies, pp. 100-106. 2016.
    8. Casalicchio, Emiliano, Lars Lundberg, and Sogand Shirinbad. "An energy-aware adaptation model for big data platforms." In 2016 IEEE International Conference on Autonomic Computing (ICAC), pp. 349-350. IEEE, 2016.
    9. Chang, Bao Rong, Hsiu-Fen Tsai, and Po-Hao Liao. "Applying intelligent data traffic adaptation to high-performance multiple big data analytics platforms." Computers & Electrical Engineering 70 (2018): 998-1018.
    10. Prysyazhnyuk, Anastasiia, Carolyn McGregor, Evgenii Bersenev, and A. V. Slonov. "Investigation of adaptation mechanisms during five-day dry immersion utilizing big-data analytics." In 2018 IEEE Life Sciences Conference (LSC), pp. 247-250. IEEE, 2018.
    11. Naqvi, Muhammad Raza, Muhammad Arfan Jaffar, Muhammad Aslam, Syed Khuram Shahzad, Muhammad Waseem Iqbal, and Amjad Farooq. "Importance of big data in precision and personalized medicine." In 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA), pp. 1-6. IEEE, 2020.
    12. Lin, Chun-Cheng, Der-Jiunn Deng, Chin-Hung Kuo, and Linnan Chen. "Concept drift detection and adaption in big imbalance industrial IoT data using an ensemble learning method of offline classifiers." IEEE Access 7 (2019): 56198-56207.
    13. Al-Rahmi, Waleed Mugahed, Noraffandy Yahaya, Ahmed A. Aldraiweesh, Uthman Alturki, Mahdi M. Alamri, Muhammad Sukri Bin Saud, Yusri Bin Kamin, Abdulmajeed A. Aljeraiwi, and Omar Abdulrahman Alhamed. "Big data adoption and knowledge management sharing: An empirical investigation on their adoption and sustainability as a purpose of education." IEEE Access 7 (2019): 47245-47258.
    14. Eichelberger, Holger, and Klaus Schmid. "Resource-optimizing adaptation for big data applications." In Proceedings of the 18th International Software Product Line Conference: Companion Volume for Workshops, Demonstrations and Tools-Volume 2, pp. 10-11. 2014.
    15. Feng, Qi, and J. George Shanthikumar. "How research in production and operations management may evolve in the era of big data." Production and Operations Management 27, no. 9 (2018): 1670-1684.
    16. Wu, Wei-Wen, and Yu-Ting Lee. "Developing global managers’ competencies using the fuzzy DEMATEL method." Expert Systems with Applications 32, no. 2 (2007): 499-507.
    17. Du, Yuan-Wei, and Xiao-Xue Li. "Hierarchical DEMATEL method for complex systems." Expert Systems with Applications 167 (2021): 113871.
    18. Zhang, Weiquan, and Yong Deng. "Combining conflicting evidence using the DEMATEL method." Soft Computing 23 (2019): 8207-8216.
    19. Chang, Betty, Chih-Wei Chang, and Chih-Hung Wu. "Fuzzy DEMATEL method for developing supplier selection criteria." Expert Systems with Applications 38, no. 3 (2011): 1850-1858.
    20. Yazdi, Mohammad, Faisal Khan, Rouzbeh Abbassi, and Risza Rusli. "Improved DEMATEL methodology for effective safety management decision-making." Safety Science 127 (2020): 104705.
    21. Shieh, Jiunn-I., and Hsin-Hung Wu. "Measures of consistency for DEMATEL method." Communications in Statistics-Simulation and Computation 45, no. 3 (2016): 781-790.
    22. Wu, Wei-Wen. "Segmenting critical factors for successful knowledge management implementation using the fuzzy DEMATEL method." Applied Soft Computing 12, no. 1 (2012): 527-535.
    23. Chang, Kuei-Hu, and Ching-Hsue Cheng. "Evaluating the risk of failure using the fuzzy OWA and DEMATEL method." Journal of Intelligent Manufacturing 22, no. 2 (2011): 113.
    24. Lin, Kuo-Ping, Ming-Lang Tseng, and Ping-Feng Pai. "Sustainable supply chain management using approximate fuzzy DEMATEL method." Resources, Conservation and Recycling 128 (2018): 134-142.
    25. Du, Yuan-Wei, and Wen Zhou. "New improved DEMATEL method based on both subjective experience and objective data." Engineering Applications of Artificial Intelligence 83 (2019): 57-71.

How to Cite

Vagvala, P. (2024). Cloud-Driven Big Data Adaptation: Leveraging Distributed Systems. Journal of Data Science and Information Technology, 1(1), 32-41. Retrieved from https://jdit.sciforce.org/JDIT/article/view/241