Guest Editorial: From Human–Robot to Human–Machine Collaboration
With the implementation of the Industry 4.0 concept, technologies such as artificial intelligence (AI), robotics, the industrial internet of things (IIoT), and cloud computing are gradually being introduced into production environments to manage the growing complexity of manufacturing. However, due to constraints such as limited engineering capabilities, high variability in operating environments, significant costs, and maintenance challenges, many manufacturing tasks remain manual or are extremely difficult to automate [1]. To address these challenges, the academic community has proposed the concept of human–robot collaboration (HRC) [2], re-examining the interactions and working relationships between humans and machines in manufacturing. This perspective shifts away from the traditional notion of replacement and elimination, emphasizing cooperation and complementarity instead. It also highlights human-centered system design, which has become a core value of the emerging Industry 5.0 [3].
Indeed, humans and robots complement each other by leveraging their respective strengths. Humans can adapt to new situations, make judgments based on incomplete data, and apply experience-based reasoning. They also excel in abstract thinking, troubleshooting unexpected issues, and interpreting ambiguous information. Physically, with dexterity and fine motor skills, humans can handle complex manual tasks that require precision, such as the assembly of delicate components, as well as tasks that demand extensive experience and craftsmanship, such as scraping. On the other hand, robots and machines perform repetitive tasks with high accuracy, minimizing errors and ensuring consistent quality. They can operate continuously without fatigue and handle physically demanding tasks beyond human capabilities. Their ability to operate in hazardous environments, such as extreme temperatures, toxic atmospheres, or high-radiation areas, reduces human exposure to risks. Together, humans and robots enhance productivity, safety, and efficiency in manufacturing. For this purpose, collaborative robots (or cobots) have emerged as an effective solution for HRC in manufacturing environments, offering advantages over traditional industrial robots.
In fact, human–robot collaboration has evolved beyond its literal meaning to include the concept of collaboration between human intelligence and AI [4], also referred to as human–AI synergy [5]. This is typically realized by combining human cognitive abilities with AI’s computational power to improve decision-making quality, efficiency, and innovation. AI processes vast amounts of data and generates insights, while humans apply reasoning, intuition, and ethical considerations to make final decisions in areas such as healthcare and finance. In design fields, AI-generated artwork, concepts, or drafts serve as starting points or stimuli that designers refine or modify based on their own creativity and preferences [6]. In manufacturing industries, AI enhances human intelligence by providing real-time insights and automating repetitive tasks for quality control and inspection. AI-powered vision systems inspect wafers in semiconductor manufacturing for microscopic defects and identify surface defects in automobile manufacturing, while human engineers verify critical anomalies, assess ambiguous cases, and determine rework or process adjustments [7]. Another emerging application is AI-enabled augmented reality (AR) for assisting workers in the field. An AR headset highlights faulty components or modules that may require maintenance, while human technicians leverage their expertise to troubleshoot and repair the machinery [8].
A Broader Perspective
Therefore, HRC can be expanded into human–machine collaboration to acknowledge that humans and machines, including AI systems and robotics, complement each other in cognitive and physical capabilities. The cognitive aspect of human–machine collaboration focuses on how humans and AI/machines work together to process information, make decisions, and solve problems. Research issues in cognitive collaboration include explainability, trust, and shared situational awareness. AI’s black-box nature makes interpretation challenging, requiring transparency to build human trust and enable effective collaboration [9]. Optimizing human–AI complementarity is crucial to balance AI’s data-driven reasoning with human intuition and adaptability. In addition, the physical aspect of human–machine collaboration focuses on how humans and robots interact in shared workspaces, combining physical strength, precision, and dexterity. Challenges in physical collaboration between humans and machines include safety risks, ergonomics, and task allocation. Ensuring safe interaction in shared spaces requires effective collision avoidance, emergency stops, and protective measures [10]. Designing intuitive and ergonomic interfaces is essential to minimize fatigue, reduce the risk of occupational injuries, and enhance user comfort [11]. Another key challenge is optimizing task allocation, balancing automation with human intervention to improve efficiency while maintaining flexibility in dynamic environments [12], as sketched below. The proper design of individual and joint tasks can maximize the collaborative potential of humans and cobots, fostering a secure and smooth workflow within a shared workspace; the capabilities and relative positions of each agent should be carefully considered to minimize collision and ergonomic risks.
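To make the task-allocation notion concrete, consider the following minimal sketch (not drawn from any paper in this issue): each task is assigned to the human or the cobot by comparing a weighted cost of completion time and ergonomic risk. All task names, scores, and the risk weight are hypothetical.

```python
# Minimal sketch of capability-based task allocation between a human
# and a cobot. Tasks, costs, and the risk weight are hypothetical.

TASKS = {
    # task: (human_time_s, human_ergonomic_risk, robot_time_s, robot_feasible)
    "insert_flex_cable":  (12.0, 0.2, 30.0, False),  # needs human dexterity
    "tighten_4_screws":   (20.0, 0.4, 15.0, True),
    "lift_housing_8kg":   (10.0, 0.9,  8.0, True),   # high ergonomic load
    "visual_final_check": (15.0, 0.1, 25.0, False),  # needs human judgment
}

RISK_WEIGHT = 30.0  # seconds of "cost" per unit of ergonomic risk


def allocate(tasks, risk_weight=RISK_WEIGHT):
    """Assign each task to the agent with the lower weighted cost."""
    plan = {}
    for task, (h_time, h_risk, r_time, r_ok) in tasks.items():
        human_cost = h_time + risk_weight * h_risk
        # Tasks the robot cannot perform get infinite robot cost.
        robot_cost = r_time if r_ok else float("inf")
        plan[task] = "cobot" if robot_cost < human_cost else "human"
    return plan


if __name__ == "__main__":
    for task, agent in allocate(TASKS).items():
        print(f"{task:22s} -> {agent}")
```

Richer formulations add precedence constraints and joint (shared) tasks, but the cost-comparison core remains the same.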
Human–machine collaboration allows robots to operate alongside human partners in manufacturing, automating physically demanding tasks while incorporating human cognition and decision-making. However, its industrial adoption remains limited due to challenges in understanding, communication, trust-building, interface effectiveness, task allocation, and optimization. Recent advancements in generative AI, extended reality, IIoT, advanced sensing, digital twins, and data analytics offer new opportunities to address these challenges. To advance this field, this special issue aims to explore and compile cutting-edge research and innovations in human–machine collaboration within the framework of Industry 5.0. Six high-quality papers have been rigorously reviewed and accepted for the special issue, each of which is summarized below.
Overview of Technical Contributions
Dong et al. envision human–robot symbiotic manufacturing (HRSM) as a future manufacturing paradigm that prioritizes human-centricity, generalization, and seamless integration. They propose a pyramid architecture model to characterize human needs in the manufacturing industry, consisting of five levels: safety, health, assistance, belonging, and self-actualization. The emergence of large language models (LLMs), recognized for their advanced reasoning capabilities and extensive knowledge base, has generated interest in their potential to improve interaction, collaboration, and execution in human–robot collaboration systems. The authors therefore identify the key objectives of HRSM, define embodied intelligence, and examine how LLMs could support these objectives by enhancing system adaptability and the decision-making process. They also discuss the challenges associated with integrating LLMs into HRSM, including reliability, real-time adaptability, and ethical considerations. By providing insights into the role of LLMs in HRSM, this article contributes to the ongoing development of embodied intelligence-enabled manufacturing and encourages further exploration in this field.
Mobile manipulators offer greater flexibility than traditional fixed robots by synchronizing base and arm movements, making them ideal for tasks in intelligent manufacturing. The stability of mobile manipulators is critical in the complex environments of modern factories. Tao et al. proposed a novel dynamic tip-over avoidance method based on the extended zero moment point (ZMP) algorithm to enhance stability in such environments. This method integrates the extended ZMP with redundancy features to dynamically adjust motion allocation based on environmental constraints, preventing tipping and improving operational safety. The method was validated through simulations and experiments using the MR2000 + FR3 mobile manipulator. Results showed that the extended ZMP remained within safe limits, reducing displacement by 0.01 m and lowering average joint velocity by 28.3%. This research improves the safety of HRC by enhancing stability and minimizing abrupt velocity changes in mobile robots.
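For readers unfamiliar with the underlying criterion, the sketch below illustrates the classical (non-extended) ZMP check for a planar multi-link system: the ZMP is computed from link masses, positions, and accelerations, then tested against the support polygon of the mobile base. The link data and polygon bounds are invented for illustration; the authors' extended ZMP method with redundancy-based motion allocation is considerably more involved.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2


def zmp_x(masses, x, z, x_acc, z_acc):
    """Planar zero moment point along x for a set of point masses.

    masses: link masses (kg); x, z: link CoM positions (m);
    x_acc, z_acc: link CoM accelerations (m/s^2).
    """
    num = np.sum(masses * ((z_acc + G) * x - x_acc * z))
    den = np.sum(masses * (z_acc + G))
    return num / den


# Hypothetical 3-link snapshot of a mobile manipulator in motion.
m = np.array([40.0, 10.0, 5.0])    # base, arm link, payload (kg)
x = np.array([0.00, 0.15, 0.35])   # CoM x positions (m)
z = np.array([0.20, 0.80, 1.10])   # CoM heights (m)
ax = np.array([0.0, 1.5, 2.5])     # x accelerations (m/s^2)
az = np.array([0.0, 0.2, 0.4])     # z accelerations (m/s^2)

SUPPORT_X = (-0.25, 0.25)  # support polygon bounds of the base along x (m)

p = zmp_x(m, x, z, ax, az)
stable = SUPPORT_X[0] <= p <= SUPPORT_X[1]
print(f"ZMP_x = {p:.3f} m, within support polygon: {stable}")
```

Faster arm accelerations push the ZMP toward the polygon boundary, which is why coordinating base and arm motion (as the authors do) directly improves tip-over stability.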
AR has been applied to facilitate human–robot collaboration in manufacturing, enhancing real-time communication and interaction between humans and robots as a new paradigm of interface. Chu et al. studied AR-assisted robot programming, addressing the limitations of traditional programming by demonstration (PbD) by evaluating different input modality designs for common planning tasks. An experimental study compared hand gestures, eye gaze, head movement, and voice input for tasks such as pointing, tracing, 1D rotation, 3D rotation, and state switching by assessing their accuracy, precision, completion time, and usability. Results indicated that hand gesture-based modalities reduced deviation and physical workload but lacked the accuracy needed for precise applications, possibly due to the limited depth cues in AR displays. Head movement and eye gaze inputs were highly sensitive to physiological variations and received lower user ratings. Moreover, gizmo-based modalities offered greater accuracy for rotational adjustments with real-time visual feedback, though at the cost of longer input durations. A test scenario of a pick-and-place robotic operation verified the effectiveness of the modality designs proposed in this study and reinforced the cross-comparison results. The experimental findings offer empirical insights to support the development of more effective AR interaction modalities for industrial robot programming.
Precise prediction of human body movements is essential for ensuring safety in HRC. Traditional point-by-point hand motion recognition is limited by prediction errors. To address this limitation, Liao et al. applied three machine learning techniques—long short-term memory (LSTM), gated recurrent unit (GRU), and Bayesian neural network (BNN) models—combined with bagging and Monte Carlo dropout (MCD) to predict movement ranges during consumer electronics disassembly. A case study using inertial measurement unit (IMU) sensor data from desktop computer disassembly was conducted, with the bagging and MCD procedures performed 30 times to generate ensemble predictions. The results indicated that the optimal model for point prediction varied among individuals, while BNN-MCD outperformed the others in defining movement bounds. By integrating a probabilistic BNN model with MCD, this approach effectively accounted for uncertainty. The findings suggest that while all three models are suitable for point prediction, BNN-MCD is the most reliable for estimating possible hand movements.
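The Monte Carlo dropout idea itself is simple enough to sketch: keep dropout active at inference time and treat repeated stochastic forward passes as an ensemble. The toy PyTorch model below, with made-up layer sizes and a synthetic six-channel IMU-like input, returns a point prediction together with percentile bounds from 30 passes; it illustrates the generic MCD technique, not the authors' models.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regressor standing in for a hand-motion predictor; sizes are illustrative.
model = nn.Sequential(
    nn.Linear(6, 64),   # e.g., 6 IMU channels in
    nn.ReLU(),
    nn.Dropout(p=0.2),
    nn.Linear(64, 3),   # e.g., 3D hand position out
)


def mc_dropout_predict(model, x, n_samples=30):
    """Run the model n_samples times with dropout active to form an ensemble."""
    model.train()  # train mode keeps dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    mean = preds.mean(dim=0)
    # Percentile bounds approximate the range of possible movements.
    lower = preds.quantile(0.05, dim=0)
    upper = preds.quantile(0.95, dim=0)
    return mean, lower, upper


x = torch.randn(1, 6)  # one synthetic IMU reading
mean, lo, hi = mc_dropout_predict(model, x, n_samples=30)
print("point prediction:", mean.squeeze().tolist())
print("lower bound:     ", lo.squeeze().tolist())
print("upper bound:     ", hi.squeeze().tolist())
```

The 30 stochastic passes mirror the 30 ensemble runs reported in the paper; wider bounds signal higher model uncertainty, which is exactly what a safety monitor should act on.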
Manjunatha et al. studied the effectiveness of brain activity monitoring in predicting the instability index during physical human–robot interaction and examined whether early electroencephalography (EEG) signals can detect instability before it becomes observable in force data. As a reference measure, they conducted a frequency-domain analysis of force data using a moving window. Elastic net regression using power spectral density and connectivity features of EEG data, along with a deep convolutional neural network trained on artifact-free EEG data, was employed to predict the instability index. Both models demonstrated reliable estimation performance, although the quality varied across subjects. The main factor contributing to this variability is individual differences in motor control skills, which affect brain activity; this issue could potentially be addressed by training personalized regression models. Furthermore, while the instability index serves as a continuous measure of instability, reliably distinguishing between stable and unstable states is more critical in practice than accurately predicting its numerical value. Such classification would be highly beneficial for real-time adjustments of admittance control parameters.
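As a rough illustration of such a regression pipeline, the sketch below extracts band-power features from (synthetic) windowed EEG via Welch power spectral densities and fits a scikit-learn elastic net to a (synthetic) instability target. Channel counts, frequency bands, and hyperparameters are assumptions, and the random data make the resulting score meaningless beyond demonstrating the workflow.

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
FS = 256  # sampling rate (Hz), illustrative

# Synthetic stand-in for windowed EEG: 200 windows x 8 channels x 2 s each.
eeg = rng.standard_normal((200, 8, 2 * FS))
instability = rng.random(200)  # synthetic instability index per window

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}


def band_power_features(windows, fs=FS):
    """Average band power per channel and band, computed from Welch PSDs."""
    feats = []
    for w in windows:
        f, psd = welch(w, fs=fs, axis=-1)
        row = [psd[:, (f >= lo) & (f < hi)].mean(axis=-1)
               for lo, hi in BANDS.values()]
        feats.append(np.concatenate(row))
    return np.asarray(feats)


X = band_power_features(eeg)
X_tr, X_te, y_tr, y_te = train_test_split(X, instability, random_state=0)

model = ElasticNet(alpha=0.01, l1_ratio=0.5)  # blends L1 and L2 penalties
model.fit(X_tr, y_tr)
# On random data this score carries no meaning; it only checks the pipeline.
print("R^2 on held-out windows:", model.score(X_te, y_te))
```

The elastic net's L1 component drives uninformative band-channel features to zero, which is one reason it suits high-dimensional EEG feature sets with few training windows.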
When humans and robots operate in close proximity, a fundamental requirement for safe HRC is the ability to understand and predict human movements and intentions. However, existing research lacks solutions for predicting both simultaneously. Zhang et al. developed a multitask learning framework capable of simultaneously predicting a person’s intentions and movement trajectory. In this framework, four encoder architectures were tested, and both supervised and unsupervised methods were explored to analyze movement data, with a focus on capturing its temporal structure. Collaborative assembly experiments were conducted to evaluate the effectiveness of the framework based on process performance. The results indicated that the framework accurately predicted both intentions and movements. Additionally, the latent representations were analyzed to assess how well the proposed methods captured key features, and detailed visualizations of human joint positions under different speed settings were provided. The inference times of different encoder designs were also estimated to compare their efficiency.
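A common way to realize such a framework is a shared sequence encoder feeding one classification head for intention and one regression head for future joint trajectories, trained with a weighted joint loss. The PyTorch sketch below uses a GRU encoder with invented dimensions (18 joint channels, 5 intents, a 10-step horizon) and synthetic data; it mirrors the multitask structure, not the authors' specific encoder designs.

```python
import torch
import torch.nn as nn


class MultiTaskPredictor(nn.Module):
    """Shared GRU encoder with two heads: intention class + future trajectory."""

    def __init__(self, n_joints=18, n_intents=5, horizon=10, hidden=128):
        super().__init__()
        self.encoder = nn.GRU(n_joints, hidden, batch_first=True)
        self.intent_head = nn.Linear(hidden, n_intents)
        self.traj_head = nn.Linear(hidden, horizon * n_joints)
        self.horizon, self.n_joints = horizon, n_joints

    def forward(self, x):           # x: (batch, time, n_joints)
        _, h = self.encoder(x)      # final hidden state: (1, batch, hidden)
        h = h.squeeze(0)
        intent_logits = self.intent_head(h)
        traj = self.traj_head(h).view(-1, self.horizon, self.n_joints)
        return intent_logits, traj


model = MultiTaskPredictor()
x = torch.randn(4, 50, 18)                 # 4 synthetic motion sequences
intent_target = torch.randint(0, 5, (4,))  # synthetic intention labels
traj_target = torch.randn(4, 10, 18)       # synthetic future joint positions

logits, traj = model(x)
# Joint loss: weighted sum of classification and regression terms.
loss = nn.functional.cross_entropy(logits, intent_target) \
     + 0.5 * nn.functional.mse_loss(traj, traj_target)
loss.backward()
print(f"joint loss: {loss.item():.3f}")
```

Because both heads share the encoder, gradients from the intention task regularize the trajectory task and vice versa, which is the usual argument for predicting the two jointly rather than with separate models.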
Conclusion
In summary, human–machine collaboration integrates AI’s computational intelligence and robotic automation to enhance both cognitive and physical tasks in design and manufacturing. Future advancements will emphasize human-centered AI, ensuring transparency, adaptability, and trust in decision-making. Advanced cobots with improved sensing and learning capabilities will enable seamless cooperation in shared workspaces. Hybrid decision-making will integrate human intuition and reasoning under uncertainty with AI’s computational power and vast datasets to tackle highly complex industrial problems efficiently and effectively.
The guest editors would like to thank all contributing authors for their outstanding work. Sincere appreciation is also extended to the reviewers for their valuable time and effort in providing constructive feedback throughout the review process. Special recognition goes to Ms. Regina Neequaye for her editorial assistance and timely reminders. Finally, gratitude is expressed to Prof. Yan Wang, the Editor-in-Chief of JCISE, for his support and for providing the opportunity to organize this special issue.
References
- Xu, X., Lu, Y., Vogel-Heuser, B., and Wang, L., 2021, “Industry 4.0 and Industry 5.0: Inception, Conception and Perception,” J. Manuf. Syst., 61, pp. 530-535.
- Nahavandi, S., 2019, “Industry 5.0: A Human-Centric Solution,” Sustainability, 11(16), p. 4371.
- Li, S., Wang, R., Zheng, P., and Wang, L., 2021, “Towards Proactive Human-Robot Collaboration: A Foreseeable Cognitive Manufacturing Paradigm,” J. Manuf. Syst., 60, pp. 547-552.
- Senoner, J., Schallmoser, S., Kratzwald, B., Feuerriegel, S., and Netland, T., 2024, “Explainable AI Improves Task Performance in Human-AI Collaboration,” Sci. Rep., 14(1), p. 31150.
- Bao, Y., Gong, W., and Yang, K., 2023, “A Literature Review of Human-AI Synergy in Decision Making: From the Perspective of Affordance Actualization Theory,” Systems, 11(9), p. 442.
- Kim, J., and Maher, M. L., 2023, “The Effect of AI-Based Inspiration on Human Design Ideation,” Int. J. Des. Creativity Innov., 11(2), pp. 81-98.
- Chu, C. H., Weng, C. Y., and Chen, Y. T., 2024, “Enhancing Manual Inspection in Semiconductor Manufacturing With Integrated Augmented Reality Solutions,” J. Manuf. Syst., 77, pp. 933-945.
- Runji, J. M., Lee, Y. J., and Chu, C. H., 2022, “User Requirements Analysis on Augmented Reality-Based Maintenance in Manufacturing,” ASME J. Comput. Inf. Sci. Eng., 22(5), p. 050901.
- Lee, M. H., and Chew, C. J., 2023, “Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making,” Proc. ACM Hum.-Comput. Interact., 7(CSCW2), pp. 1-22.
- Müller, R., Vette, M., and Geenen, A., 2017, “Skill-Based Dynamic Task Allocation in Human-Robot-Cooperation With the Example of Welding Application,” Procedia Manuf., 11, pp. 13-21.
- Millot, P., ed., 2014, Designing Human-Machine Cooperation Systems, John Wiley & Sons.
- Petzoldt, C., Harms, M., and Freitag, M., 2023, “Review of Task Allocation for Human–Robot Collaboration in Assembly,” Int. J. Comput. Integr. Manuf., 36(11), pp. 1675-1715.