Mental Health: Global Challenges Journal
https://www.sciendo.com/journal/MHGCJ
ISSN 2612-2138
including participants from healthcare and
mental health backgrounds as well as
humanitarian aid workers from the United Nations
and international non-governmental
organizations (NGOs), journalists, and human
rights lawyers (Liu et al., 2016).
While the results of each course were evaluated by HMS, HPRT conducted a comprehensive evaluation of a cohort spanning three consecutive years, 2011 to 2013 (n = 155 participants). Although completion of the GMH course has been almost universal, with fewer than ten participants dropping out over thirteen years (primarily due to illness), an extensive evaluation was undertaken to determine the course's impact on participants. The evaluation assessed mental health knowledge (including the major dimensions of the GMHAP), confidence in performing medical and psychiatric procedures with highly traumatized patients, families, and communities, self-care, and cultural competence. The major findings
of this evaluation are presented in this report.
By 2020, the GMH blended learning course was in its 14th year, with over 1,000 alumni working in over eighty-five countries, before pivoting to virtual-only programming in spring 2021 due to the COVID-19 pandemic. Regardless, the present evaluation reassures us that the request of the world's Ministries of Health in 2004 was met through a six-month, culturally sensitive, evidence-based, accredited CME blended learning COP model. In this study, we evaluated the change in confidence level before and after the GMH course among the 155 participants (Smith et al., 1998; Wickstrom, Kelley, Keyserling et al., 2000; Wickstrom, Kolar, Keyserling et al., 2000; Henderson et al., 2008; Henderson et al., 2005; Borba et al., 2015).
Purpose
This evaluation study contributes to the
emerging evidence that CME activities can use
innovative interactive approaches for training
health care practitioners and humanitarian
aid/human rights workers globally in the care of
highly traumatized patients and communities.
Methodology
Study Sample
There were 155 participants in the training
program across the three years from 2011 to
2013 (N2011=39; N2012=57; N2013=59).
Evaluation Approach
Participants’ confidence levels were evaluated as a measure of competence in performance, using the approach of Smith et al. (Smith et al., 1998; Wickstrom, Kelley, Keyserling et al., 2000; Wickstrom, Kolar, Keyserling et al., 2000; Henderson et al., 2008; Henderson et al., 2005; Borba et al., 2015). Health practitioners’ confidence has been shown to correlate closely with their actual performance.
Demographics (gender, age, occupation, and specialty) and confidence level were collected at the beginning and end of the training (see Table 1).
First, participants’ confidence in implementing the GMHAP was measured at the beginning of the training (baseline) and at the end of the training (post-training). A six-point Likert scale (1 = not confident, 2 = slightly confident, 3 = somewhat confident, 4 = confident, 5 = very confident, 6 = extremely confident) was used for each question to measure level of confidence. We measured confidence on nineteen aspects: policy/legislation, financing, science-based mental health services, multidisciplinary education, role of international agencies, linkages to economic development, human rights, research, evaluation, and ethics (details about each category can be found in the Appendix).
We asked sixty-four (64) questions about participants’ confidence regarding multiple aspects of medical
and psychiatric treatment at the beginning of the
training (baseline) and the end of the training
(post-training). Similar to the above, a six-point
Likert scale for each question was used to
measure their level of confidence. The 64
confidence questions were summarized into 9 different categories: treating trauma (N = 15), psychiatric diagnosis (N = 6), assisting with patient care and social issues (N = 11), prescribing psychotropic medication (N = 1), self-care (N = 3), understanding cultural impact (N = 8), collaboration (N = 1), policy financing (N = 1), and teaching/research/evaluation (N = 11). Each category of
confidence was measured by a set of questions
from the questionnaire. We calculated the score
of each category by summing the scores of
questions in the category. Because the number
of questions in different categories of confidence
is not the same, the total confidence scores of
the nine categories are different. The details
about which questions are included in each
category are in the Appendix. The total score for each category equals six times the number of questions in the category.
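The scoring rule above can be sketched in a few lines of code. This is a minimal illustration only, assuming the category sizes stated in the text; the actual question-to-category mapping is defined in the paper's Appendix, and the variable and function names here are hypothetical.

```python
# Number of questions per confidence category, as stated in the text.
CATEGORY_SIZES = {
    "treating trauma": 15,
    "psychiatric diagnosis": 6,
    "patient care and social issues": 11,
    "prescribing psychotropic medication": 1,
    "self-care": 3,
    "understanding cultural impact": 8,
    "collaboration": 1,
    "policy financing": 1,
    "teaching/research/evaluation": 11,
}

LIKERT_MAX = 6  # 1 = not confident ... 6 = extremely confident


def category_score(responses):
    """Sum a participant's Likert responses for one category."""
    return sum(responses)


def category_max(n_questions):
    """Maximum possible category score: six times the number of questions."""
    return LIKERT_MAX * n_questions


# Example: answering all three self-care questions "very confident" (5)
# yields a category score of 15 out of a possible 18.
print(category_score([5, 5, 5]))                        # 15
print(category_max(CATEGORY_SIZES["self-care"]))        # 18
```

Because category sizes differ, raw category totals are not directly comparable across categories, which is why the maximum score (six times the number of questions) matters for interpretation.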