I thought you might find it useful to have additional input regarding the Week 1 Discussion (formatting and expectations). In the following, I provide a “translation” of the instructions created by the Course Developer. I then offer hints and links to additional resources.
Original instructions are in black, bold text. I go through the original instructions in a sequential manner, inserting hints and resources.
Week 1 Discussion Thread Instructions
Using the textbook as your guide (this means citing it) (1) and a database available through Park’s library (2), find, summarize, and discuss 1 peer-reviewed academic article using the steps recommended in the book (3). The article should directly relate to the subject area you are thinking about focusing on for your core assessment project (4).
Then for that article, show how you could do it in a completely unscientific way by changing the question and method using everything (5) you read from the book about what a good study is.
You may not copy any part of the article; your summary and discussion should be in your own words, though in-text, page-based citation is required.
Neuman, W. L. (2017). Understanding research. Boston, MA: Pearson/Allyn and Bacon.
1. Hint: an online resource to help you learn about proper APA citation is http://www.bibme.org/. Try typing in the ISBN for an older version of our textbook (978-0-205-47153-9) and clicking “Find Book.” Then click “Select.” On the right-hand side of the screen, you will find the citation. Now select “APA” from the bibliography list menu. This resource works for almost any reference source (book, magazine, newspaper, website, etc.).
Note, when you are in the Park online library, several of the databases enable you to generate the citations for articles. Additional citation resources are listed at the end of this document.
2. Park University Online Library: http://www.park.edu/Library/
3. Chapter 2: Within Chapter 2 you will find the steps needed for reviewing an academic article. (Yes, I know this was not part of the “assigned” readings for the week.) The reading “What to Record in Notes” provides good ideas.
4. In the Course Overview module, you will find a description of the Core Assessment project.
5. This part requires a little clarification. Here you should have some fun. Think about how you could alter the study in order to create a “pseudo-science, pop-media, newsstand-worthy” result. How would the question of study (the hypothesis) need to change? What method of approach could you use to totally wreck any chance of a valid outcome? Once you have “overhauled” the study methodology, challenge your classmates to explain why your “replication” of the study is a bust. What principles of basic research have you violated, based upon your readings from Chapter 1? Why is it not “scientific”? Let’s see if they can figure it out – providing their explanations in their peer follow-up responses for the week.
To help create a clear format for the Main Entry responses posted this week, let us apply the following outline, breaking your main entry into 8 distinctive segments. Following this outline will ensure you complete all elements of the assignment and will facilitate peer feedback. Please label each section (e.g., Topic, Research Type). This will also expedite the grading of your discussion entries.
1. Article Citation
2. Textbook Citation
3. Topic: Describe the focus of the research conducted.
4. Research Type: Describe the study as exploratory, descriptive, explanatory, or evaluative (and offer why you think so).
5. Variety of Research Applied: Was it a survey, an experiment, a content analysis, etc.?
6. Overall Intended Application: Was the research intended as basic social research or applied social research?
7. Your UNSCIENTIFIC Re-Do Ideas: Here you should have some fun. Think about how you could alter the study in order to create a “pop-media, newsstand-worthy” result. How would the question of study (the hypothesis) need to change? What method of approach could you use to totally wreck any chance of a valid outcome?
8. Evaluate Your Re-Do: Now challenge your classmates to explain why your “replication” of the study is a bust. What principles of basic research have you violated, based upon your readings from Chapter 1? Why is it not “scientific”? Let’s see if they can figure it out – providing their explanations in their peer follow-up responses for the week.
What can discrete choice experiments do for you? Jennifer Cleland,1 Terry Porteous1 & Diane Skåtun2
CONTEXT In everyday life, the choices we make are influenced by our preferences for the alternatives available to us. The same is true when choosing medical education, training and jobs. More often than not, those alternatives comprise multiple attributes and our ultimate choice will be guided by the value we place on each attribute relative to the others. In education, for example, choice of university is likely to be influenced by preferences for institutional reputation, location, cost and course content; but which of these attributes is the most influential? An understanding of what is valued by applicants, students, trainees and colleagues is of increasing importance in the higher education and medical job marketplaces because it will help us to develop options that meet their needs and preferences.
METHODS In this article, we describe the discrete choice experiment (DCE), a survey
method borrowed from economics that allows us to quantify the values respondents place on the attributes of goods and services, and to explore whether and to what extent they are willing to trade less of one attribute for more of another.
CONCLUSIONS To date, DCEs have been used to look at medical workforce issues but relatively little in the field of medical education. However, many outstanding questions within medical education could be usefully addressed using DCEs. A better understanding of which attributes have most influence on, for example, staff or student satisfaction, choice of university and choice of career, and the extent to which stakeholders are prepared to trade one attribute against another is required. Such knowledge will allow us to tailor the way medical education is provided to better meet the needs of key stakeholders within the available resources.
Medical Education 2018: 52: 1113–1124
1Centre for Healthcare Education Research and Innovation (CHERI), University of Aberdeen, Aberdeen, UK 2Health Economics Research Unit, University of Aberdeen, Aberdeen, UK
Correspondence: Jennifer Cleland, Centre for Healthcare Education Research and Innovation (CHERI) Polwarth Building, University of Aberdeen, Aberdeen, UK. AB25 2ZD Tel: 00 44 1224 437257; E-mail: [email protected]
1113 © 2018 John Wiley & Sons Ltd and The Association for the Study of Medical Education; MEDICAL EDUCATION 2018 52: 1113–1124
A previous article in ‘The Cross-cutting Edge’ series in this journal described the range of economic methods used in cost analyses that can be applied in medical education.1 Walsh et al. described the application of different methods of cost analysis to help define and value different forms of medical education, most commonly in monetary terms.1 We welcome the increasing focus on the economics of medical education, particularly in this time of increasing calls for accountability in health professional education.2 However, value is not just about money; value can also mean subjective worth, or what is important to an individual. Understanding value and relative value can help answer numerous questions in medical education and training. For example, what contributes most to student satisfaction with a particular rotation or course? What is the deciding factor in choosing a medical school or residency programme? What ‘packages’ might be most effective in attracting doctors to work in remote and rural positions?
Although there is an extensive literature examining what is important to patients in terms of delivery of care,3–7 looking at what is valued in education and medical education is relatively new (see later in this paper for examples). Yet knowing what is valued by applicants, students, trainees and colleagues is of increasing importance in the higher education and medical job marketplaces. Medical schools, residency/training programmes and employers are under increasing pressure to provide a high-quality, consumer-centred experience in a resource-constrained educational and occupational marketplace. Medical education and training are commodities, and when assessing the value of a commodity, it is important to consider the views of consumers. In other words, in order to develop a commodity that is workable (i.e. that meets the needs and preferences of users), providers and policymakers need to consider not only their own preferences (and constraints), but also those of users.
As a first step in addressing this gap in medical education research, the current paper is a synopsis of theory and findings published predominantly in health economics which are relevant to the health professions education community.8 We focus on a quantitative research method known as the discrete choice experiment (DCE), which is frequently used
within a cost–benefit analysis framework. The aim of our paper was to summarise what DCEs are, what they involve and how they have been used previously, and to suggest ways in which they can be used to inform how different aspects of medical education and training might be optimised. Throughout this paper, we will use actual and hypothetical examples and case studies to illustrate the processes and possibilities of conducting DCE work within medical education.
WHAT IS A DCE?
The DCE is a multidimensional stated preference (SP) method9 used to elicit respondents’ preferences for attributes of an item under investigation. Stated preference methods are used to elicit an individual’s preferences for ‘alternatives’ (whether goods, services or courses of action) when actual behaviour cannot be observed. The DCE allows us to value individually, or as a ‘bundle’, the component parts (attributes) of goods, services or interventions in monetary terms or alternative relevant measures, from an individual or societal perspective.10–12 Crucially, DCEs also enable us to determine the relative importance of those attributes and how people might trade less of one attribute for more of another.
Discrete choice experiment surveys originally evolved from conjoint analysis methods developed in the 1970s, when they were predominantly used in the domains of market research and transport studies to understand consumer demand for goods and services.13 In conjoint analysis, participants are typically offered a predetermined set of potential products or services, and their responses (preferences) are analysed to determine the implicit valuation of the individual elements of the product or service. Take, for example, the act of buying a new car. The deciding factors might be price, brand, hatchback, sunroof or hybrid engine. How do those considering buying a new car trade between these factors?
The field evolved and emphasis shifted from conjoint analysis approaches, based on mathematical theory, to the DCE approach, which is based on theories of choice behaviour.14 The DCE is underpinned by two key theories. The first of these, Lancaster’s characteristics theory of value, is based on the idea that the value (or utility or satisfaction) that an individual associates with any
item (good) ‘is derived from the characteristics (also known as attributes) that make up the good, rather than the good per se’.15 The utility (U) associated with an item is thus represented as a function of all attributes:
U = U(X1, X2, . . ., Xk)

where X1 through Xk represent the k attributes of the item under investigation. The second underpinning theory is random utility theory (RUT). This theory posits that individuals make choices based on their personal preferences (observable factors), but that choices can also be influenced by random, unexplainable factors.16 Thus, for an alternative j (i.e. a scenario comprising all attributes at specified levels), the utility function (U) of an individual (n) can be represented as:

Unj = Vnj + enj

where V is the ‘systematic’ component of the function and e is a ‘random’ component. Further, the systematic component (V) is a function of the attributes and levels (i.e. the observable components) of the item under investigation:

Vnj = ASCj + b1Xnj1 + b2Xnj2 + . . . + bKXnjK

where ASC ‘captures the mean effect of the unobserved factors in the error terms for each of the alternatives’17 and the b-values are the regression coefficients for each of the attributes, used to quantify strength of preference.
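To make the systematic component concrete, the sketch below computes V for two hypothetical job postings as a linear combination of attribute levels. This is a minimal illustration only: the attribute names and b-coefficients are invented for the example, not taken from any published DCE, and a real analysis would estimate the coefficients from respondents' choice data.

```python
# Invented coefficients: in a real DCE these would be estimated by
# regression from respondents' observed choices.
BETAS = {
    "salary_10k": 0.8,     # utility per 10,000 currency units of monthly salary
    "rural": -1.2,         # disutility of a rural location (1 = rural, 0 = urban)
    "study_support": 1.5,  # utility of funded further study after the posting
}

def systematic_utility(levels, asc=0.0):
    """V = ASC + b1*X1 + b2*X2 + ... + bK*XK for one scenario."""
    return asc + sum(BETAS[name] * x for name, x in levels.items())

posting_a = {"salary_10k": 70, "rural": 1, "study_support": 1}
posting_b = {"salary_10k": 100, "rural": 1, "study_support": 0}

# The alternative with the larger V is predicted to be preferred
# (before the random component e is added).
print(systematic_utility(posting_a))  # 56.3
print(systematic_utility(posting_b))  # 78.8
```

Here the higher salary of posting B outweighs its lack of study support, so B carries the larger systematic utility; changing the invented coefficients changes that ranking, which is exactly the trade-off structure a DCE sets out to estimate.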
Consistent with their origins in consumer theory, DCEs operate under an assumption of utility-maximising behaviour (i.e. DCE respondents are ‘rational agents’ and will prefer the options that offer the greatest utility [or value or satisfaction] for least outlay).18
WHY USE A DCE?
It seems we are forever asking, and being asked, to state preferences. The most commonly used methods for eliciting preferences are ranking and rating scales such as those that ask the respondent to express how much he or she agrees or disagrees with a particular statement (e.g. ‘This training was a good use of my time’) on a numeric scale.19 Some forms of best–worst scaling, another technique for eliciting preferences increasingly used within health care, can be considered extensions of a ranking
exercise.20 The DCE differs from traditional ranking and rating approaches in its assumptions, format and possibilities.
Typically, in a DCE survey, respondents are asked to answer a series of questions (choice sets) in which they must choose between two or more similar items (alternatives or scenarios) that are described in terms of a number of attributes, differing only in the levels allocated to those attributes (Fig. 1 gives an example of a choice set). By systematically varying these levels, regression analyses can quantify not only the relative values respondents place on individual attributes, but also the degree to which they are prepared to trade less of one attribute for more of another (see later for further explanation).
Once a good or service has been deconstructed into its component attributes in this way, we can reconstruct it again into specific scenarios of interest and directly compare the levels of utility (or value or satisfaction) associated with each of these scenarios against one another. For example, the attributes of a health service might be distance to clinic, waiting time, consultation time and the health care professional seen. A service delivered locally by a nurse with a waiting time of two months and consultation time of 10 minutes could be compared with the same nurse-led service in an out-of-town clinic with a waiting time of three months and a consultation time of 20 minutes.
Trading between attributes
The capability to quantify trading behaviour is a key advantage of DCEs. In an ideal world, we would all like the best of everything (such as shorter waiting time and a longer appointment in the previous scenario); in reality, scarce resources mean we may have to compromise. Unlike ranking and rating scales, DCEs can inform providers’ and policymakers’ decisions about where, and to what degree, less favourable substitutions can be made and corresponding compensations applied. Where ‘Cost’ is included as an attribute, we can calculate how respondents value attributes in terms of monetary units and how much money must be offered to compensate for less preferred options (see below).
However, as well as identifying financial compensations, DCE findings can also be used to demonstrate how offering more of a ‘less important’ attribute (i.e. less preferred) might compensate for providing less of an ‘important’ one. For example, a 2015 study by Holte et al.21
surveyed Norwegian final-year medical students and interns to explore how factors such as practice size or opportunities to control working hours might influence their choices between jobs in rural and urban areas. The authors concluded that the probability of young doctors choosing a rural job over an urban one could be improved by manipulating these non-pecuniary factors.21
Valuation on a common metric
One way of evaluating a multi-component intervention would be to ask respondents to rate or rank the components in order of importance or preference. The problem with such methods is an inability to quantify the relative importance of components if those components are evaluated using non-quantifiable or dissimilar scales. For example, students might rank small tutorial groups more highly than weekly class assessments, but by how much do they prefer one over the other? Rating exercises often use satisfaction scales, but
these can vary between applications (e.g. the number of points on the scale, the labels used) making it difficult to compare across studies, and ‘units’ are difficult to quantify.
When one of a DCE’s attributes is ‘Cost’ (or an alternative monetary measure, such as ‘Salary’ or ‘Fees’), preferences can be measured using the common metric of monetary units. Estimates of willingness to pay (WtP) can be calculated for marginal changes in attribute levels to allow for direct comparisons both across attribute levels and between attributes. Other terms are sometimes used in place of WtP, such as ‘willingness to accept’ or ‘willingness to forgo’, but the calculations are identical. For example, in a 2012 study undertaken by Rockers et al.22 (illustrated in Fig. 1), researchers calculated respondents’ willingness to forgo salary in exchange for better working conditions.
The ‘Cost’ attribute can be described using defined sums of money in a relevant currency.
Imagine that you have just completed your medical school training and you have also COMPLETED YOUR INTERNSHIP AND BEEN CONFIRMED. You have decided NOT to go directly into specialty training. Rather, you have decided to begin working as a general practitioner. You are checking the newspaper for available job postings, and find that there are two postings available in government run health facilities. Both of the facilities in these postings are located in rural areas. Both facilities are equal distance from the nearest big town, and are equal distance from Kampala. Also, both of these facilities are in areas that are entirely safe from violent conflict. However, each of these two postings has different benefits, including: salary, housing, the quality of the facility, the length of time you are committed, preferences given for study placement after the commitment is over, and support from the district health officer.
Please imagine yourself in this situation and make a real decision as to which of these two postings you would prefer. Although we know that some government benefits to health workers have not been properly implemented in the past, please assume that you will receive the full benefits described for your posting. In making your choice, please read carefully the full list of benefits for each posting and do not imagine any additional features of these postings.
Please tell us which of these job postings you prefer. Choose by clicking one of the buttons below:

|Attribute|Posting A|Posting B|
|Quality of the facility|Basic (e.g. unreliable electricity, equipment and drugs and supplies not always available)|Advanced (e.g. reliable electricity, equipment and drugs and supplies always available)|
|Housing|Free basic housing provided|Housing allowance provided, enough to afford basic housing|
|Length of commitment|You are committed to this position for 2 years|You are committed to this position for 5 years|
|Study placement support|The government will pay your full tuition for a study program (e.g. specialty training) after your commitment is over|The government will not provide any financial assistance for a study program after your commitment is over|
|Salary|700,000 USh per month|1,000,000 USh per month|
|Support from the district health officer|The district health officer in your district is supportive and makes work easier|The district health officer in your district is not supportive and makes work more difficult|
Figure 1 Medical students’ discrete choice experiment choice question from Rockers et al.22
Alternatively, the ‘Cost’ attribute can be represented as a proportion of a notional sum, such as average salary. For example, a 2016 DCE carried out by Cleland et al. concerned medical trainees’ preferences for characteristics of a training post. Respondents were asked to choose between two hypothetical posts in which levels of the ‘Potential earnings’ attribute were set at 5%, 10% or 20% above average earnings.23 Findings from this study revealed that trainees would accept a move from a position with ‘good’ working conditions (defined by ‘rotas, amount of on-call time, time off, staffing levels, etc.’) to one with ‘poor’ conditions if they were compensated with potential earnings of 49.8% above average earning potential (all other attributes being equal).
Where inclusion of a cost attribute is, for whatever reason, inappropriate, the relative values of attributes can be measured using an alternative continuous metric (e.g. time) or, indeed, as a ratio of whatever natural units attributes are measured in.
Alternative methods often used to value goods or services using the common metric of money include contingent valuation (CV). In this case, respondents are asked to state their WtP for an item under investigation. By contrast with the DCE, which provides an indirect measure of WtP, CV methods require a direct response to a question such as: ‘How much would you be willing to pay for this item?’ The design of a CV experiment comes with its own set of challenges.9 However, its main drawback in comparison with a DCE is that it values items as a whole, rather than as a set of component attributes. Thus, it is difficult to determine from a CV experiment how one might go about modifying a service or intervention to improve the utility it offers.
Choices that mimic real life
Choices made in a DCE are more similar to those in real-life situations than choices in most other valuation techniques. Consumers are regularly faced with multi-attribute decisions in the marketplace, be it when buying food or clothing or when choosing a holiday or a car. Implicit in those decision-making situations is a weighing up of the pros and cons of the alternatives on offer. As described above, DCEs are multidimensional and allow for trading of the component parts of the item under scrutiny. This similarity to real-life situations is likely to have a positive impact on the validity of the findings generated in DCEs.
A testing ground
The hypothetical nature of a DCE makes it a useful method for assessing goods, services or interventions that do not yet exist (because this information cannot be observed through actual behaviour [revealed preferences]), or where we can only observe a single net effect of the good/service/intervention (because valuable information about the component parts is unobservable). Thus, proposals for new ways of providing, for example, medical education or alternative medical career paths can be evaluated in the first instance without ‘real’ (revealed preference) data that may not be feasible to collect or can only be collected through costly pilots. For example, Robyn et al.24 conducted a DCE amongst students and health workers in Cameroon to explore the impact of incentives on preferences for rural posts. Analysis of the preference data included estimating the impact on preferences of 10 separate ‘packages’, each of which offered different incentives. Clearly, it would be unfeasible to test such a large number of packages in real life. Instead, the DCE provided information about hypothetical packages that policymakers and providers could use to decide which incentives were most likely to achieve the desired ends (improved recruitment and retention of health workers in rural areas) within available budgets.
DESIGN AND ANALYSIS OF THE DCE
Crucial to the development of a DCE is the selection of attributes and levels. It is self-evident that when DCEs are undertaken to inform policy or practice, attributes and their levels must be plausible and actionable; there would be little point in valuing items that are unrealistic or undeliverable, or with which respondents cannot engage. This step in DCE design is important to ensure that participants can understand and engage with the experiment, and that the results are of practical use.
Best practice dictates that, as well as reviewing the relevant literature, qualitative methods are employed to explore which aspects of the item under investigation are important to stakeholders.25,26 Qualitative methods allow researchers to not only define the range of potential attributes, but also to achieve an in-depth
understanding of the context in which the attributes exist, and the language commonly used by stakeholders to describe them. For example, in a study exploring medical students’ preferences for characteristics of rural medical postings in Ghana, Kruk et al.27 conducted seven focus groups with third- and fifth-year medical students to collect data for attribute development. Discussions covered students’ experiences and perceived barriers and motivators to rural practice, as well as their career plans. To ensure that consideration was given to a wide range of perspectives, focus group participants were also asked to consider potential attributes extracted from a literature review and from discussions with practising and governmental physicians.27
Contextualising the experiment
It is important that all respondents in any given DCE answer the same fundamental question. This means not only that they are given the same choice sets, but also that they make the choices under the same conditions. As far as is possible, the researcher must try to cut down on ‘background noise’ and reduce unmeasured variation in respondents’ decisions.
For example, in Uganda, in response to difficulties in recruiting and retaining health workers in rural areas, Rockers et al.22 undertook a DCE to analyse the preferences of medical, nursing, pharmacy and laboratory students for potential rural postings. Figure 1 shows an example from the online DCE sent to medical students, in which, prior to choosing their preferred posting, respondents are asked to imagine the scene.
These additional instructions aim to ensure that respondents have a clear idea of the circumstances surrounding the choice (which may differ from the situation in which they find themselves at present), thus minimising any variation in the interpretation of the choice situation.
Generating the DCE design
A full account of DCE design is beyond the scope of this article; such information is, however, readily available from the existing literature.17,28 Briefly, once attributes and levels have been decided upon, statistical software such as SAS (SAS Institute, Inc., Cary, NC, USA) or NGENE (Choice Metrics, Sydney, NSW, Australia) is most commonly used to create and select a series of hypothetical scenarios (also
known as ‘alternatives’ or ‘profiles’), each of which presents the included attributes, set at different levels. This is illustrated in Figure 1, in which respondents must choose between Posting A and Posting B. When these scenarios are combined in sets of two or more, the resulting ‘choice sets’ will represent a statistically efficient design, or one that will collect sufficient information to allow preferences to be estimated with acceptable precision.
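As a toy illustration of the combinatorics these design packages manage, attribute levels can be expanded into profiles and then paired into candidate choice sets. The attributes below are invented for the example, and a real design would select a statistically efficient fraction rather than the exhaustive enumeration shown here:

```python
from itertools import combinations, product

# Invented attributes and levels for a hypothetical training-programme DCE
attributes = {
    "group_size": ["small", "large"],
    "delivery": ["web-enhanced", "face-to-face"],
    "earnings": ["5% above average", "10% above average", "20% above average"],
}

# Full factorial: every combination of one level per attribute is a profile
names = list(attributes)
profiles = [dict(zip(names, combo)) for combo in product(*attributes.values())]
print(len(profiles))  # 2 * 2 * 3 = 12 profiles

# Pairing distinct profiles gives the pool of candidate two-way choice sets,
# from which design software would select an efficient subset
choice_sets = list(combinations(profiles, 2))
print(len(choice_sets))  # 12 choose 2 = 66 candidate choice sets
```

Even this tiny three-attribute example yields 66 possible pairings, which is why efficient-design algorithms, rather than full enumeration, are used to keep the number of questions put to each respondent manageable.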
In general, DCEs are embedded in surveys, and the principles of good survey design are as important in DCEs as in any other survey.29,30 These include qualities such as ease of reading, clear instructions, absence of questions likely to lead to bias, appropriate response categories, the logical ordering of questions and so on. Survey mode should also be carefully selected. Increasingly, the Internet is used to administer surveys, including DCEs, for reasons that include lower research costs. Researchers must, however, be aware that using different modes (e.g. mail, Internet, interviews) to collect data can lead to variations in representativeness, convergent validity and data quality.31,32 The choice of survey mode will depend on the research question and population of interest and findings should be interpreted with mode effects in mind.
The analysis of DCEs has advanced since their first application in health care and continues to evolve. Briefly, choice data are, in general, analysed using regression techniques. In early examples of DCEs, the most frequently used regression analysis was conditional logit (also known as multinomial logit [MNL]), a technique similar to logistic regression. More recently, other models have been used to explore variability in preferences (i.e. how individual respondents differ in their preferences) using, for example, mixed logit and latent class logit models.33,34
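Under the conditional (multinomial) logit model mentioned above, the predicted probability of choosing an alternative is the softmax of the alternatives' systematic utilities. A minimal sketch with invented utility values (not estimates from any real study):

```python
import math

def choice_probabilities(utilities):
    """Conditional logit: P(j) = exp(V_j) / sum over k of exp(V_k)."""
    exp_v = [math.exp(v) for v in utilities]
    total = sum(exp_v)
    return [e / total for e in exp_v]

# Invented systematic utilities for two alternatives, e.g. Posting A and B
probs = choice_probabilities([1.0, 2.0])
print(probs)  # the higher-utility alternative receives the larger predicted
              # share, and the probabilities sum to 1
```

Estimation runs this logic in reverse: the regression searches for the b-coefficients whose implied choice probabilities best fit the choices respondents actually made, which is what mixed logit and latent class extensions then generalise to allow preferences to vary across respondents.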
Limitations and developments
The collection of data from hypothetical scenarios in DCEs has drawn some criticism about their external validity and the possibility of hypothetical bias: how can we be sure that choices made under hypothetical circumstances reflect those that respondents will make in real life? Few studies have
considered the external validity of health-related DCEs; a recent systematic review and meta-analysis included eight such studies that had tested external validity by comparing predicted choices with those observed in real life.35 The authors found that DCEs had ‘moderate, but not exceptional, accuracy when predicting health-related choices’; pooled sensitivity and specificity estimates were 88% (95% confidence interval [CI] 81–92%) and 34% (95% CI 23–46%), respectively. Their conclusions concur with those of Janssen et al.,36 who, while acknowledging that the DCE represents a useful way of eliciting preferences, urge caution when interpreting findings in the light of this uncertainty about external validity.
The inclusion of a Cost attribute, although extremely convenient from the researcher’s point of view, can lead to problems; a common metric based on money is useful to calculate WtP and thus compare the magnitude of the values respondents place on attributes, but it does not necessarily follow that they would actually be willing to pay the estimated amounts. Caution, therefore, should be exercised when interpreting WtP estimates.
A further potential problem is that some respondents may object to the idea of paying for the item under investigation, which can lead to protest behaviour whereby respondents simply do not consider the other attributes and do not engage with the experiment. This can often happen in the context of health care in the UK, which is usually free at the point of delivery, but may not be an issue in other contexts in which paying for health care is the norm. Additionally, WtP estimates can be affected by issues around ability to pay; respondents’ choices may be influenced by how much they can afford, rather than by how they value the attributes.
The role of heuristics, developed within cognitive psychology,37 and their implications for the choices made within DCEs have also been considered.38 A heuristic is an efficient rule that is followed to simplify a complex decision-making task. One such rule, attribute non-attendance (ANA), may lead to the systematic exclusion of certain attributes from decision making. Research is also utilising eye-tracking techniques to consider the divergence between stated attendance to attributes and visual attendance.39 More recent research has investigated whether evidence of ANA within DCEs points to respondents’ simplifying of the hypothetical task (with the associated implications for biases within preference estimation) or whether it reflects actual preferences.40 Eye tracking is also used to consider how the processing of the information on offer within the survey design (such as attribute order) might influence the choices made.41
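One way to see the effect of a heuristic such as ANA is to compare choice probabilities in a simple conditional logit model when the weight of an ignored attribute is, and is not, set to zero. The model, attributes and numbers below are illustrative assumptions, not drawn from the cited studies:

```python
import math

# Sketch: conditional logit choice probabilities, and how attribute
# non-attendance (ANA) can be mimicked by zeroing an attribute's weight.
# All attributes and weights are invented for illustration.

def choice_probs(alternatives, weights):
    """Logit probability of each alternative given attribute weights."""
    utilities = [sum(weights[a] * x[a] for a in weights) for x in alternatives]
    denom = sum(math.exp(u) for u in utilities)
    return [math.exp(u) / denom for u in utilities]

jobs = [
    {"salary": 1.0, "rural": 0.0},   # urban job, higher pay
    {"salary": 0.5, "rural": 1.0},   # rural job, lower pay
]
full = {"salary": 2.0, "rural": 1.5}
ana = {"salary": 2.0, "rural": 0.0}  # respondent ignores 'rural' entirely

print(choice_probs(jobs, full))  # both attributes attended
print(choice_probs(jobs, ana))   # 'rural' non-attended: salary dominates
```

Under full attendance the rural job is preferred here, but once the rural attribute is ignored the choice flips, which is why undetected ANA can bias preference estimates.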
EXAMPLES OF DCES FROM THE LITERATURE
A small number of DCEs have been used to elicit preferences concerning educational issues and early career choices. Other published DCEs, indirectly linked to medical education, have elicited preferences for different aspects of jobs or careers in health-related professions.
Discrete choice experiments can be used to inform the content and format of medical education. Cunningham et al.42 conducted a DCE to establish medical students’ preferences for the way in which the MD programme at McMaster University (Hamilton, ON, Canada) was organised. The aim was to engender student engagement with the education programme, on the assumption that this would improve its effectiveness. Medical students were asked to choose between alternative MD programmes; 15 attributes were used to describe the hypothetical programmes, including tutorial group size, the degree to which tutorials were web-enhanced, the role of tutors and the format of tutorial problems. The study concluded that: ‘. . .most students preferred a small group, web-supported, problem-based learning approach led by content experts who facilitated group process.’42 Findings also suggested, however, that students would accept a less preferred programme if financial savings were to be reinvested in, for example, web-enhanced tutorial processes.
Other studies exploring non-medical students’ preferences for aspects of education have been undertaken. These include:
• a DCE exploring relative preferences for various features of assignment systems, including the form of the assignment (online/paper), its relevance to examinations and the nature of any feedback on the assignment, amongst undergraduate business students in Ireland;43
What can DCEs do for you?
• a DCE measuring preferences for different attributes of an educational institution, including staff, syllabus and fees, conducted amongst students at the London School of Hygiene and Tropical Medicine;44 and
• a DCE aiming to measure preferences for various characteristics of higher education institutes, such as travel time from home, the reputation of the course being offered and fees, conducted amongst secondary school students in Ireland.45
Attracting secondary school students from particular segments of society to study medicine was the subject of a DCE study by researchers in Japan.46 The purpose of this study was to explore the likelihood of attracting students from low- and middle-income families by offering scholarships to private medical schools.
Discrete choice experiments have also been used to explore preferences for postgraduate medical education. Mandeville et al.47 used a DCE to determine how, amongst other job characteristics, the location of specialty training might affect Malawian junior doctors’ decisions about whether or not to stay in the country. A DCE amongst Danish general practitioners compared the characteristics of alternative continuous professional development (CPD) programmes.48
Career preferences and workforce planning
Cleland et al.23 conducted a DCE amongst doctors in training posts to establish which characteristics of their jobs they most valued (see above). The same DCE, carried out amongst final-year medical students, revealed that the relative values of the attributes were similar to those observed among trainee doctors; the most highly valued attribute for both groups was ‘working conditions’.49 The authors propose that the findings from these studies will be useful to health care organisations because the job attributes considered are those they are likely to have some control over.23,49
Hence, training positions in less popular specialties or geographic areas could be made more attractive by manipulating the attributes under scrutiny.
However, the majority of DCE studies looking at workforce issues have been conducted in low- and middle-income economies with territories that include large areas that are remote and rural or subject to political instability. Many such countries in Africa, Asia and Central and South America have well-documented problems in recruiting and retaining health professionals and health workers of all levels of experience. A substantial number of DCEs have been undertaken in attempts to find out what might make jobs in these areas more attractive. Table 1 lists some examples.

Table 1 Some examples of discrete choice experiments investigating recruitment and retention of health care workers in remote and rural areas in low- and middle-income countries

Study | Country | Participants | Aim
Kruk et al. (2010)27 | Ghana | Final-year medical students | To assess ‘how students’ stated preference for certain rural postings was influenced by various job attributes’
Hanson & Jack (2010)62 | Ethiopia | Doctors and nurses | ‘to better understand how health care workers might be influenced to practise in rural settings’
Vujicic et al. (2011)63 | Vietnam | Physicians | ‘to explore the key factors that determine physician motivation and job satisfaction’
Rockers et al. (2012)64 | Uganda | Students (medical, nursing, …) | ‘to better inform the selection of appropriate recruitment and retention interventions based on health worker preferences’
Miranda et al. (2012)65 | Peru | Doctors | ‘to investigate doctors’ stated preferences for rural jobs’
Rao et al. (2013)66 | India | Final-year medical/nursing students; doctors/nurses serving at primary health centres | To examine ‘job preferences of doctors and nurses to inform what works in terms of rural recruitment strategies’
McAuliffe et al. (2016)67 | Malawi, Mozambique, Tanzania | Obstetric care workers | ‘to examine the employment preferences of obstetric care workers across three east African countries’
Efendi et al. (2016)68 | Indonesia | Students (medical, …) | ‘to analyse the job preferences of health students to develop effective policies to improve the recruitment and retention of health students in remote areas’
Discrete choice experiment studies looking at preferences for health care jobs have additionally been undertaken in high-income economies such as Australia, Denmark, Canada, the USA and the UK. The aims of these studies varied, but included exploring preferences for jobs in remote and rural areas,50–52 jobs in primary care,53–55 jobs in secondary care56 and alternative payment systems.57
In summary, DCEs have been used frequently to look at medical workforce issues, but relatively little in the field of medical education. In the wider education literature, there are a few examples of the use of DCEs to assess preferences related to assessment systems and satisfaction with programme or course qualities. There seem to us to be a number of outstanding questions within medical education that could be usefully addressed using DCEs. For example, what components underpin student satisfaction with specific aspects of a course (e.g. longitudinal clerkships, remote and rural placements), as well as with the course more generally? (Knowing this could inform curricular design.) If students value 10 different aspects of feedback practice, which do they value most? (Knowing this could inform or focus staff training.) What do applicants value when selecting a medical school, residency or medical job? (What can a medical school influence? Which factors are non-adjustable?) Exploring what applicants, students, residents and colleagues value, in order to understand what shapes their choices, will help those involved in planning and delivering education to meet consumers’ needs and expectations. To return to our earlier point, in an ideal world we would all like the best of everything, but life is not ideal. If we ask our consumers what they want using tools that do not enable us to identify what is most important to them, we are in danger of failing to meet their needs.
Medical education research is currently a small field of research, and one that has drawn heavily on expertise, approaches and theories from other fields. This has enabled medical education research to move relatively rapidly from local evaluation and audit to considering questions and problems more generally and in terms of how they may contribute to new knowledge. We believe that DCE methodology has the potential to address many outstanding questions in medical education and training and to provide more refined information than some traditional approaches. Extending the use of the method in medical education may also facilitate working with stakeholders outside academia (e.g. providers and policymakers), as well as establishing partnerships with expert colleagues from health economics. This transdisciplinary working may provide the potential to identify and create new opportunities and questions.58
For readers who desire a full account of DCE design along with practical guidance, we recommend ‘How to conduct a discrete choice experiment for health workforce recruitment and retention in remote and rural areas: a user guide with case studies’, published by the World Health Organization.59
Additional reading is available from the International Society for Pharmacoeconomics and Outcomes Research (ISPOR).60,61
Contributors: JC and DS conceived the idea for this article. TP contributed to the design of the work and the interpretation of the literature for the work, and wrote the first draft. All authors revised the article critically for important intellectual content and approved the final version.
Acknowledgements: none.
Funding: none.
Conflicts of interest: none.
Ethical approval: not applicable.

REFERENCES
1 Walsh K, Levin H, Jaye P, Gazzard J. Cost analyses approaches in medical education: there are no simple solutions. Med Educ 2013;47 (10):962–8.
2 Baron RB. Can we achieve public accountability for graduate medical education outcomes? Acad Med 2013;88 (9):1199–201.
3 Fawsitt CG, Bourke J, Greene RA, McElroy B, Krucien N, Murphy R, Lutomski JE. What do women want? Valuing women’s preferences and estimating demand for alternative models of maternity care using a discrete choice experiment. Health Policy 2017;121 (11):1154–60.
4 Whitaker KL, Ghanouni A, Zhou Y, Lyratzopoulos G, Morris S. Patients’ preferences for GP consultation for perceived cancer risk in primary care: a discrete choice experiment. Br J Gen Pract 2017;67 (659):e388–95.
5 Murchie P, Norwood PF, Pietrucin-Materek M, Porteous T, Hannaford PC, Ryan M. Determining cancer survivors’ preferences to inform new models of follow-up care. Br J Cancer 2016;115 (12):1495–503.
6 Mankowski C, Ikenwilo D, Heidenreich S, Ryan M, Nazir J, Newman C, Watson V. Men’s preferences for the treatment of lower urinary tract symptoms associated with benign prostatic hyperplasia: a discrete choice experiment. Patient Prefer Adherence 2016;10:2407–17.
7 Porteous T, Ryan M, Bond C, Watson M, Watson V. Managing minor ailments; the public’s preferences for attributes of community pharmacies. A discrete choice experiment. PLoS One 2016;11 (3):e0152257.
8 Eva KW. The cross-cutting edge: striving for symbiosis between medical education research and related disciplines. Med Educ 2008;42 (10):950–1.
9 Bridges JF. Stated preference methods in health care evaluation: an emerging methodological paradigm in health economics. Appl Health Econ Health Policy 2003;2 (4):213–24.
10 Ryan M. Discrete choice experiments in health care. BMJ 2004;328 (7436):360–1.
11 De Bekker-Grob E, Ryan M, Gerard K. Discrete choice experiments in health economics: a review of the literature. Health Econ 2012;21:145–72.
12 Ryan M, Gerard K. Using discrete choice experiments to value health care programmes: current practice and future research reflections. Appl Health Econ Health Policy 2003;2 (1):55–64.
13 Green P, Srinivasan S. Conjoint analysis in consumer research: issues and outlook. J Consum Res 1978;5 (2):103–23.
14 Louviere JJ, Flynn TN, Carson RT. Discrete choice experiments are not conjoint analysis. J Choice Model 2010;3 (3):57–72.
15 Lancaster KJ. A new approach to consumer theory. J Polit Econ 1966;74 (2):132–57.
16 McFadden D. Conditional logit analysis of qualitative choice behavior. In: Zarembka P, ed. Frontiers in Econometrics. New York, NY: Academic Press 1974;105–42.
17 Ryan M, Gerard K, Amaya-Amaya M. Using Discrete Choice Experiments to Value Health and Health Care, 1st edn. Dordrecht: Springer 2008.
18 Mas-Colell A, Whinston M, Green J. Microeconomic Theory. New York, NY: Oxford University Press 1995.
19 Ryan M, Scott D, Reeves C, Bate A, van Teijlingen E, Russell E, Napper M, Robb CM. Eliciting public preferences for healthcare: a systematic review of techniques. Health Technol Assess 2001;5 (5):1–186.
20 Cheung KL, Wijnen BFM, Hollin IL, Janssen EM, Bridges JF, Evers SM, Hiligsmann M. Using best–worst scaling to investigate preferences in health care. Pharmacoeconomics 2016;34 (12):1195–209.
21 Holte JH, Kjaer T, Abelsen B, Olsen JA. The impact of pecuniary and non-pecuniary incentives for attracting young doctors to rural general practice. Soc Sci Med 2015;128:1–9.
22 Rockers PC, Jaskiewicz W, Wurts L, Kruk ME, Mgomella GS, Ntalazi F, Tulenko K. Preferences for working in rural clinics among trainee health professionals in Uganda: a discrete choice experiment. BMC Health Serv Res 2012;12 (1):212.
23 Cleland J, Johnston P, Watson V, Krucien N, Skåtun D. What do UK doctors in training value in a post? A discrete choice experiment. Med Educ 2016;50 (2):189–202.
24 Robyn PJ, Shroff Z, Zang OR, Kingue S, Djienouassi S, Kouontchou C, Sorgho G. Addressing health workforce distribution concerns: a discrete choice experiment to develop rural retention strategies in Cameroon. Int J Health Policy Manag 2015;4 (3):169–80.
25 Coast J, Al-Janabi H, Sutton EJ, Horrocks SA, Vosper AJ, Swancutt DR, Flynn TN. Using qualitative methods for attribute development for discrete choice experiments: issues and recommendations. Health Econ 2012;21 (6):730–41.
26 Louviere J, Hensher D, Swait J. Stated Choice Methods: Analysis and Application. Cambridge: Cambridge University Press 2000.
27 Kruk ME, Johnson JC, Gyakobo M, Agyei-Baffour P, Asabir K, Kotha SR, Kwansah J, Nakua E, Snow RC, Dzodzomenyo M. Rural practice preferences among medical students in Ghana: a discrete choice experiment. Bull WHO 2010;88 (5):333–41.
28 Lancsar E, Louviere J. Conducting discrete choice experiments to inform healthcare decision making: a user’s guide. Pharmacoeconomics 2008;26 (8): 661–77.
29 Dillman DA, Smyth JD, Christian LM. Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method, 3rd edn. Hoboken, NJ: John Wiley & Sons 2009.
30 Stopher P. Collecting, Managing and Assessing Data Using Sample Surveys. Cambridge: Cambridge University Press 2012.
31 Determann D, Lambooij MS, Steyerberg EW, de Bekker-Grob EW, de Wit GA. Impact of survey administration mode on the results of a health- related discrete choice experiment: online and paper comparison. Value Health 2017;20 (7):953–60.
32 Boyle KJ, Morrison M, MacDonald DH, Duncan R, Rose J. Investigating internet and mail implementation of stated-preference surveys while controlling for differences in sample frames. Environ Resour Econ 2016;64 (3):401–19.
33 Hauber AB, González JM, Groothuis-Oudshoorn CGM, Prior T, Marshall DA, Cunningham C, IJzerman MJ, Bridges JF. Statistical methods for the analysis of discrete choice experiments: a report of the ISPOR conjoint analysis good research practices task force. Value Health 2016;19 (4):300–15.
34 Train K. Discrete Choice Methods with Simulation, 2nd edn. Cambridge: Cambridge University Press 2009.
35 Quaife M, Terris-Prestholt F, Di Tanna GL, Vickerman P. How well do discrete choice experiments predict health choices? A systematic review and meta-analysis of external validity. Eur J Health Econ 2018. https://doi.org/10.1007/s10198-018-0954-6 [Epub ahead of print.]
36 Janssen EM, Marshall DA, Hauber AB, Bridges JFP. Improving the quality of discrete-choice experiments in health: how can we assess validity and reliability? Expert Rev Pharmacoecon Outcomes Res 2017;17 (6):531–42.
37 Kahneman D. Attention and Effort. Englewood Cliffs, NJ: Prentice-Hall 1973.
38 Scarpa R, Gilbride TJ, Campbell D, Hensher DA. Modelling attribute non-attendance in choice experiments for rural landscape valuation. Eur Rev Agric Econ 2009;36 (2):151–74.
39 Balcombe K, Fraser I, McSorley E. Visual attention and attribute attendance in multi-attribute choice experiments. J Appl Econom 2015;30 (3):447–67.
40 Heidenreich S, Watson V, Ryan M, Phimister E. Decision heuristic or preference? Attribute non- attendance in discrete choice problems. Health Econ 2018;27 (1):157–71.
41 Ryan M, Krucien N, Hermens F. The eyes have it: using eye tracking to inform information processing strategies in multi-attributes choices. Health Econ 2018;27 (4):709–21.
42 Cunningham CE, Deal K, Neville A, Rimas H, Lohfeld L. Modeling the problem-based learning preferences of McMaster University undergraduate medical students using a discrete choice conjoint experiment. Adv Health Sci Educ Theory Pract 2006;11 (3):245–66.
43 Kennelly B, Flannery D, Considine J, Doherty E, Hynes S. Modelling the preferences of students for alternative assignment designs using the discrete choice experiment methodology. Pract Assess Res Eval 2014;19 (16):1–13.
44 Sheppard P, Smith R. What students want: using a choice modelling approach to estimate student demand. J High Educ Policy Manage 2016;38 (2):140–9.
45 Walsh S, Flannery D, Cullinan J. Analysing the preferences of prospective students for higher education institution attributes. Educ Econ 2018;26 (2):161–78.
46 Goto R, Kakihara H. A discrete choice experiment studying students’ preferences for scholarships to private medical schools in Japan. Hum Resour Health 2016;14:4. https://doi.org/10.1186/s12960-016-0102-2. [Epub ahead of print.]
47 Mandeville KL, Ulaya G, Lagarde M, Muula AS, Dzowela T, Hanson K. The use of specialty training to retain doctors in Malawi: a discrete choice experiment. Soc Sci Med 2016;169:109–18.
48 Kjaer NK, Halling A, Pedersen LB. General practitioners’ preferences for future continuous professional development: evidence from a Danish discrete choice experiment. Educ Prim Care 2015;26 (1):4–10.
49 Cleland JA, Johnston P, Watson V, Krucien N, Skåtun D. What do UK medical students value most in their careers? A discrete choice experiment. Med Educ 2017;51 (8):839–51.
50 Gallego G, Dew A, Lincoln M, Bundy A, Chedid RJ, Bulkeley K, Brentnall J, Veitch C. Should I stay or should I go? Exploring the job preferences of allied health professionals working with people with disability in rural Australia. Hum Resour Health 2015;13 (1):53.
51 Scott A, Witt J, Humphreys J, Joyce C, Kalb G, Jeon S, McGrail M. Getting doctors into the bush: general practitioners’ preferences for rural location. Soc Sci Med 2013;96:33–44.
52 Li J, Scott A, McGrail M, Humphreys J, Witt J. Retaining rural doctors: doctors’ preferences for rural medical workforce incentives. Soc Sci Med 2014;121:56–64.
53 Wordsworth S, Skåtun D, Scott A, French F. Preferences for general practice jobs: a survey of principals and sessional GPs. Br J Gen Pract 2004;54 (507):740–6.
54 Pedersen LB, Gyrd-Hansen D. Preference for practice: a Danish study on young doctors’ choice of general practice using a discrete choice experiment. Eur J Health Econ 2014;15 (6):611–21.
55 Scott A. Eliciting GPs’ preferences for pecuniary and non-pecuniary job characteristics. J Health Econ 2001;20 (3):329–47.
56 Ubach C, Scott A, French F, Awramenko M, Needham G. What do hospital consultants value about their jobs? A discrete choice experiment. BMJ 2003;326 (7404):1432–5.
57 Kessels R, Van Herck P, Dancet E, Annemans L, Sermeus W. How to reform western care payment systems according to physicians, policy makers, healthcare executives and researchers: a discrete choice experiment. BMC Health Serv Res 2015;15 (1):191.
58 McMichael A. What makes transdisciplinarity succeed or fail? First Report. In: Somerville MARD, ed. Transdisciplinarity: Recreating Integrated Knowledge. Oxford: EOLSS Publishers 2000;218–220.
59 World Health Organization. How to conduct a discrete choice experiment for health workforce recruitment and retention in remote and rural areas: a user guide with case studies. 2012. http://www.who.int/hrh/resources/dceguide/en/. [Accessed 2 August 2018.]
60 Bridges JFP, Hauber AB, Marshall D, Lloyd A, Prosser LA, Regier DA, Johnson FR, Mauskopf J. Conjoint analysis applications in health – a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health 2011;14 (4):403–13.
61 Johnson FR, Lancsar E, Marshall D, Kilambi V, Mühlbacher A, Regier DA, Bresnahan BW, Kanninen B, Bridges JF. Constructing experimental designs for discrete-choice experiments: report of the ISPOR conjoint analysis experimental design good research practices task force. Value Health 2013;16 (1):3–13.
62 Hanson K, Jack W. Incentives could induce Ethiopian doctors and nurses to work in rural settings. Health Aff 2010;29 (8):1452–60.
63 Vujicic M, Shengelia B, Alfano M, Thu HB. Physician shortages in rural Vietnam: using a labor market approach to inform policy. Soc Sci Med 2011;73 (7):2034–70.
64 Rockers PC, Jaskiewicz W, Kruk ME, Phathammavong O, Vangkonevilay P, Paphassarang C, Phachanh IT, Wurts L, Tulenko K. Differences in preferences for rural job postings between nursing students and practicing nurses: evidence from a discrete choice experiment in Lao People’s Democratic Republic. Hum Resour Health 2013;11 (1):22.
65 Miranda JJ, Diez-Canseco F, Lema C, Lescano AG, Lagarde M, Blaauw D, Huicho L. Stated preferences of
doctors for choosing a job in rural areas of Peru: a discrete choice experiment. PLoS One 2012;7 (12):e50567.
66 Rao KD, Ryan M, Shroff Z, Vujicic M, Ramani S, Berman P. Rural clinician scarcity and job preferences of doctors and nurses in India: a discrete choice experiment. PLoS One 2013;8 (12):e82984.
67 McAuliffe E, Galligan M, Revill P, Kamwendo F, Sidat M, Masanja H, de Pinho H, Araujo E. Factors influencing job preferences of health workers providing obstetric care: results from discrete choice experiments in Malawi, Mozambique and Tanzania. Glob Health 2016;12 (1):86.
68 Efendi F, Chen C, Nursalam N, Andriyani NWF, Kurniati A, Nancarrow SA. How to attract health students to remote areas in Indonesia: a discrete choice experiment. Int J Health Plann Manage 2016;31 (4):430–45.
Received 18 January 2018; accepted for publication 5 June 2018
Copyright of Medical Education is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder’s express written permission. However, users may print, download, or email articles for individual use.