Artificial intelligence-enhanced brain-computer interfaces (BCIs) are expected to improve substantially on traditional BCIs in multiple respects, including usability, user experience, and user satisfaction, and above all in intelligence. However, such AI-integrated or AI-based BCI systems may also introduce new ethical issues. This paper first evaluates the potential of AI technology, especially deep learning, to enhance the performance of BCI systems by improving decoding accuracy, information transfer rate, real-time performance, and adaptability. Building on this, it argues that AI-enhanced BCI systems may raise new or more severe ethical issues than traditional BCI systems, including the possibility of making users’ intentions and behaviors more predictable and manipulable, as well as an increased likelihood of technological abuse. The paper also discusses measures to mitigate these ethical risks. It is hoped that this work will promote a deeper understanding of, and reflection on, the ethical risks of AI-enhanced BCIs and the regulations needed to address them.
Neurofeedback transforms real-time features of brain activity into multimodal feedback that guides users in self-regulating their brain function, with potential applications in neuropsychiatric treatment and cognitive enhancement. Its use, however, entails ethical risks concerning cognitive autonomy, personal identity integrity, safety and efficacy, privacy protection, and the safeguarding of vulnerable populations; challenges to informed consent are particularly pronounced in implicit neurofeedback. In response to these risks, this paper proposes establishing an ethical evaluation framework for neurofeedback, promoting ethics-embedded design, and strengthening international cooperation and public education, emphasizing responsible innovation so that technological development proceeds in step with ethical safeguards.