Based on previous evidence-based research and teaching experience, our team conducted a literature and book review and summarized four requirements: 1) effect measure calculation and conversion, 2) registration of evidence-based research, 3) evidence-based research databases, and 4) quality evaluation tools and reporting guidelines. We developed an online evidence-based medicine research helper platform using front-end and back-end web technologies, which can be accessed at www.ebm-helper.cn. Currently, the online tool includes 46 scenarios for effect measure calculation and conversion, introductions to 7 evidence-based research registration platforms, 26 commonly used databases for evidence-based research, and 29 quality evaluation tools and reporting guidelines. This online tool can help researchers solve specific problems encountered at different stages of evidence-based medicine research. Promoting the application of this platform in evidence-based medicine will help researchers use the tool scientifically and improve research efficiency.
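The abstract does not specify the 46 calculation scenarios, but a representative one is deriving effect measures from a 2×2 table and converting between them. As an illustrative sketch only (not the platform's actual code), the following computes the risk ratio (RR) and odds ratio (OR), and approximates an RR from an OR using the control-group baseline risk (the Zhang & Yu correction):

```python
# Illustrative sketch of one common effect-measure scenario; the helper
# functions here are hypothetical, not taken from the ebm-helper platform.

def risk_ratio(a, b, c, d):
    """RR from a 2x2 table: a/b = events/non-events in the treatment group,
    c/d = events/non-events in the control group."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR from the same 2x2 table (cross-product ratio)."""
    return (a * d) / (b * c)

def or_to_rr(or_value, p0):
    """Approximate RR from an OR given the control-group risk p0
    (Zhang & Yu, JAMA 1998): RR = OR / (1 - p0 + p0 * OR)."""
    return or_value / (1 - p0 + p0 * or_value)

# Example: 10/90 events/non-events in treatment, 20/80 in control.
a, b, c, d = 10, 90, 20, 80
rr = risk_ratio(a, b, c, d)           # 0.5
orr = odds_ratio(a, b, c, d)          # 0.444...
p0 = c / (c + d)                      # control-group risk = 0.2
approx_rr = or_to_rr(orr, p0)         # 0.5, recovering the RR
```

In this balanced example the conversion recovers the RR exactly; in general it is an approximation whose accuracy depends on how well p0 reflects the true baseline risk.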
In recent years, an increasing number of systematic reviews have been published; however, few reviews adequately consider and report details of the interventions, which not only limits the usability of systematic reviews but also wastes resources. To improve the reporting of intervention details in systematic reviews, the BMJ recently published recommendations. This paper interprets those recommendations with the aim of improving the usability of systematic reviews.
Digital health technology implementation has grown rapidly in recent years. To standardize the quality of digital health implementation research and increase the transparency and integrity of reporting, Perrin et al. published iCHECK-DH (Guidelines and Checklist for the Reporting on Digital Health Implementations) in 2023. This article interprets the checklist items with a view to improving the reporting quality of digital health implementation studies, so as to develop more effective digital health interventions and achieve better health outcomes.
The reporting standard system for clinical research of traditional Chinese medicine (TCM) comprises ten reporting standards, developed by different research groups, covering the design and preparation of TCM clinical research, different study types, randomized controlled trials of various interventions, systematic reviews of such trials, and the translation of research evidence. This article systematically analyzes the current reporting standards for TCM clinical research. The achievements and problems identified in the review provide insights for ongoing standard development and support the construction of the reporting standard system, so as to improve the reporting quality of TCM clinical research.
The Standards for Reporting of Diagnostic Accuracy studies in journal or conference abstracts (STARD for Abstracts) were developed to guide the reporting of abstracts of diagnostic accuracy studies and were published in the BMJ in August 2017. This study introduces and interprets the items of STARD for Abstracts, in order to help domestic researchers prepare and report abstracts of diagnostic accuracy studies in accordance with it.
Objective To evaluate the reporting quality of randomized controlled trials (RCTs) in seven military medical journals. Methods Seven journals in 2007, including Medical Journal of Chinese People’s Liberation Army, Journal of South Medical University, Journal of Second Military Medical University, Journal of Third Military Medical University, Journal of Fourth Military Medical University, Bulletin of the Academy of Military Medical Sciences and Academic Journal of PLA Postgraduate Medical School, were handsearched. We identified RCTs labeled “random” and assessed the quality of these reports using the Consolidated Standards of Reporting Trials (CONSORT) statement. Results We identified 99 RCTs, of which 6 used an incorrect randomization method. Among the 93 RCTs assessed against the CONSORT items, 62 (66.7%) described baseline demographic and clinical characteristics in each group. Sixteen (17.2%) RCTs mentioned the method of random sequence generation, with 5 (5.4%) using computer-generated allocation. Only 1 RCT had adequate allocation concealment. Only 9 (9.7%) RCTs used blinding: 2 mentioned blinding without details, 1 used single blinding, and 6 were described as double-blind (2 of which were correct). None (0%) reported a sample size calculation, and 1 RCT reported an intention-to-treat (ITT) analysis. Conclusion The reporting quality of RCTs in these seven journals is poor. The CONSORT statement should be used to standardize the reporting of RCTs.
In recent years, the number of randomized controlled trials using cohorts and routinely collected data (e.g., electronic health records, administrative databases, and health registries) has increased. Such trials can ease the challenges of conducting research and save cost and time. Accordingly, to standardize such trials and increase the transparency and completeness of research reports, an international panel of experts developed the CONSORT-ROUTINE reporting guideline, published in 2021 in the BMJ. To help readers understand and correctly apply the reporting guideline and improve the overall quality of this type of study, the present paper introduces and interprets the development process and reporting checklist of CONSORT-ROUTINE.
With the encouragement of national policy on drug and medical device innovation, multi-center clinical trials and multi-regional clinical trials are facing an unprecedented opportunity in China, and trials with a multi-center design are now far more common than before. However, it should be recognized that current multi-center trials still have shortcomings. In this paper, we summarize the problems and challenges and provide corresponding resolutions, with the aim of reducing heterogeneity between study centers and avoiding excessive center effects on treatment. It is urgent to develop design, implementation and reporting guidelines to improve the overall quality of multi-center clinical trials.
Objective To investigate published health technology assessment (HTA) reports, analyze their publication characteristics and reporting quality, and explore hot topics in health technology assessment. Methods The Web of Science and CNKI databases were searched to collect complete health technology assessment reports from inception to January 2023. SPSS 26.0 software was used to analyze the publication journals, countries, number of authors, assessment types and assessment contents of the reports. Reporting quality was assessed against the International Network of Agencies for Health Technology Assessment (INAHTA) reporting criteria (2007 edition). VOSviewer 1.6.11 was used for keyword clustering analysis. Results A total of 216 papers were included, 158 of which were published by Chinese authors, with a rapid growth trend in the number of reports over the past four years. Only 17.13% of reports addressed the social adaptability assessment of health technologies. Among the Chinese reports, 25 were general health technology assessments, 35 were rapid assessments, and 3 were mini assessments; among the English reports, 4 were rapid assessments and 54 were regular health technology assessments. For the 14 items of the INAHTA reporting criteria, reporting rates were high for the brief summary (98.61%), problem description (94.91%), and results discussion (97.69%) items, but low for personnel responsibilities, conflict of interest statements, and peer review statements, at 31.94%, 19.44%, and 3.24% respectively. The English-language literature generally exhibited higher reporting quality. Conclusion In recent years, the volume of health technology assessment reports in China has been increasing, with developments in assessment types and application fields; however, problems remain with the standardization of reporting.