Journal of Management Education, 1–32
© The Author(s) 2018
DOI: 10.1177/1052562918787849
journals.sagepub.com/home/jme

Empirical/Theoretical/Review Articles

How Do Quality of Teaching, Assessment, and Feedback Drive Undergraduate Course Satisfaction in U.K. Business Schools? A Comparative Analysis With Nonbusiness School Courses Using the U.K. National Student Survey

Dylan Sutherland1, Philip Warwick1, John Anderson2, and Mark Learmonth1

Abstract

How does quality of teaching, assessment, and feedback influence satisfaction with overall course quality for students taking business school (BS) undergraduate courses in the United Kingdom? Are these teaching-related determinants of satisfaction in BS courses different to those in nonbusiness school (NBS) courses? These questions currently figure prominently in U.K. higher education owing to the introduction of a “Teaching Excellence Framework,” linking student fee increases to levels of reported student satisfaction. The elevation of student satisfaction as a determinant of higher education delivery raises important questions about the possible longer term consequences for teaching practices. To explore these, we test three sets of hypotheses relating to how teaching, assessment, and feedback quality affect satisfaction in the BS context, as well as comparative differences (i.e., BS vs. NBS students). We draw from over 1 million responses recorded in the U.K.’s National Student Survey. We find questions related to perceived teaching quality are important satisfaction drivers for BS students. In terms of differences with NBS students, we find intellectual stimulation appears of lesser importance to BS students, whereas fair assessments are of greater importance. BS students, we argue, exhibit a stronger orientation toward “instrumental” learning. We consider policy implications.

1Durham University Business School, Mill Hill Lane, Durham, UK
2University of Northern Iowa, Cedar Falls, IA, USA

Corresponding Author: Philip Warwick, Durham University Business School, Mill Hill Lane, Durham DH1 3LB, UK. Email: philip.warwick@durham.ac.uk



Keywords: module evaluation, MEQ, student satisfaction, TEF, business school, regression analysis

Introduction

This research was motivated by: (a) a curiosity to better understand the drivers of reported course satisfaction for undergraduate business school (BS) students, particularly teaching-, assessment-, and feedback-related ones; and (b) a desire to explore whether and in what ways these determinants differ for students taking nonbusiness school (NBS) subjects. The first point is of growing practical importance to all BSs (and universities), not just in the United Kingdom. The United Kingdom is our focus, however, because the U.K. government has recently introduced a Teaching Excellence Framework (hereafter TEF) with the aim of making universities more accountable to students for the fees they charge. Student fee increases are to become increasingly conditional on meeting reported student satisfaction levels, particularly those reported in the U.K.’s National Student Survey (hereafter NSS), a comprehensive nationwide survey sent to all undergraduates shortly after completion of their courses. Universities that perform poorly in the TEF will be unable to raise fees; some may even see them decreased. Our analysis is of interest outside the United Kingdom because the growing marketization of higher education across the globe is placing considerable pressure on all universities to raise reported satisfaction levels.

Despite the elevation of student satisfaction as an influence on education delivery in U.K. universities, we still have comparatively little systematic empirical evidence on what drives overall student satisfaction in BSs (or universities as a whole). To date, studies on student satisfaction tend to be found in policy-related reports and usually focus on bivariate statistical associations (Buckley, Soilemetzidis, & Hillman, 2015). Such analyses are unable


to discriminate between the strongest and weakest drivers of overall reported satisfaction. Similarly, BS and university administrators have tended to take rather ad hoc, informal approaches to analyzing student satisfaction data (Williams & Mindano, 2015). For example, U.K. universities have identified the quality and timeliness of assessment and feedback as receiving comparatively low NSS scores vis-à-vis other questions on the survey. New approaches have therefore been put in place to improve assessment and feedback mechanisms in many U.K. universities, with a view to increasing satisfaction levels (Williams & Mindano, 2015). However, little is really known about how improved assessment and feedback affects overall course satisfaction. While it may be an essential component of good pedagogic practice, what impact does it actually have on reported course quality? And how important is it when compared with other teaching-related drivers, such as the fairness of assessments, staff enthusiasm, or intellectual stimulation? To explore these questions, multivariate regression analysis, ideally using larger data sets, can potentially provide further insights.

Having a more informed understanding of what drives course satisfaction is important for several reasons. First, universities that crack the secret of securing high overall student satisfaction ratings will, most likely, outperform others (Corduas et al., 2016). Via competitive, evolutionary, market-driven processes (spurred by government policy) they will become more financially successful and grow faster. The models and practices they adopt, for better or worse, will become more influential and diffuse widely. At a practical level, of course, this means that a better understanding of what drives student satisfaction will become crucial for senior BS administrators looking to improve their institutions’ financial performance. In turn, frontline teaching staff, as they negotiate their career progression in response to the incentive structures placed before them (i.e., an increasing emphasis on reported student satisfaction), will become more preoccupied with satisfying student demands. Second, linked to the above but arguably much more important, it could be that some of the positive teaching-, assessment-, and feedback-related drivers of student satisfaction are in themselves antithetical to, or incompatible with, student learning and intellectual development. For example, it might be that lowering academic standards increases reported satisfaction. Or, alternatively, it could be that some teaching approaches, ones which genuinely are linked to student learning, actually register as less important (or, in the worst case, completely unimportant) drivers of reported satisfaction. Timely and detailed assessment feedback, for example, while arguably central to student learning, does not register as a significant driver of overall course satisfaction in our results (based on 1.6 million NSS responses). Will this lead to the gradual relegation of educationally sound assessment and


feedback practices in BSs? In an increasingly competitive, market-driven higher education system, might BSs and those that staff them simply become more concerned with reported student satisfaction than with the genuine educational development of their students? Without further systematic empirical research into the underlying drivers of student satisfaction, we cannot be certain whether blindly following a consumer-centric, market-driven path will actually be good for longer term student learning and development.

In relation to our second research question, establishing whether the determinants of satisfaction differ between BS students and those taking NBS subjects, understanding the unique features of the specific drivers of BS student satisfaction is interesting for several additional reasons. First, BSs are typically large income-generating units, though they are usually integrated within fairly centralized university structures. University senior management may be drawn from other university schools and departments and may lack familiarity with the specific needs of BS students. Comparative analysis of reported satisfaction drivers can shed further light on the specific characteristics of BS students. Second, and again much more importantly, BS students are arguably at the front line of the marketization process in U.K. higher education. They are, on the whole, we contend, more inclined than their NBS peers to view their higher education degree programs as investments related to career progression and lifetime earnings. As such, they are more likely to perceive themselves as consumers of higher education. This could influence their approach to learning and, in turn, their perceptions of educational quality. Instrumental learning, for example, which describes the idea of studying primarily for the sake of efficiently passing exams and gaining marketable qualifications, and not out of an interest or curiosity to better understand a subject, is considered common in U.K. BSs (Ottewill, 2003). So can we pick up a more consumer-driven, instrumental orientation in BS students in the U.K. NSS data? Looking at comparative differences between BS and NBS students may provide glimpses into the ways in which perceptions of educational quality may evolve in response to increased marketization of higher education.

Interestingly, our findings comparing BS and NBS teaching-, assessment-, and feedback-related drivers of overall satisfaction do suggest there are important differences between BS and NBS students: intellectual stimulation, for example, is a less important driver for BS students; fair assessment and clarity of explanation, by contrast, are more important. These differences appear broadly consistent with a more instrumental outlook. They raise the question of whether BS educators should simply accept this or try to do something about it. Of additional concern, moreover, is the aforementioned finding of insignificant relationships between quality of assessment and


feedback and reported satisfaction. Government policy makers in the United Kingdom may have to think more carefully about such relationships when crafting the TEF. Similarly, BS administrators and educators must consider whether blind pursuit of high student satisfaction ratings is always in the best interests of their students. If it were to relegate in importance the quality of assessment and feedback practices, it may not be.

We first outline two sets of hypotheses regarding the likely strength of teaching-, assessment-, and feedback-related drivers of reported BS student satisfaction. The first set focuses on teaching, the second on assessment and feedback. Our underlying presumption is that these drivers, in general, should be important positive drivers of satisfaction. After this, we propose three further hypotheses regarding possible differences in these teaching-, assessment-, and feedback-related drivers that may be found between BS and NBS students.

What Drives Reported Student Satisfaction in BS Subjects?

There is a long history of studies that empirically explore the various drivers of student satisfaction, mostly published in education-related journals. We draw from these studies, as they provide direct insights into the focus of our study. Like ours, these articles predominantly use student evaluation data (Broder & Dorfman, 1994; Hearn, 1985; Krahn & Bowlby, 1997; Nadiri, Kandampully, & Hussain, 2009; Neumann & Neumann, 1981; Rienties, Li, & Marsh, 2015). Such studies have been undertaken at a number of different levels of analysis. For example, some consider evaluations of entire courses, programs, or the university experience (Filak & Sheldon, 2003; Rienties et al., 2015); some consider module satisfaction (Broder & Dorfman, 1994; Rienties et al., 2015); others are more niche and look at determinants of curriculum satisfaction (Tessema, Ready, & Yu, 2012).

There is considerable research on the determinants of satisfaction in specific subjects, or fields. This includes studies on drivers of satisfaction in psychology (Green, Hood, & Neumann, 2015), sports sciences (Popp, Weight, Dwyer, Morse, & Baker, 2015), music (Serenko, 2011), and also a number in BS-related courses. Indeed, we identified eight BS-related studies, making it the most studied subject area (Bennett, 2003; DeShields, Kara, & Kaynak, 2005; Douglas, Douglas, McClelland, & Davies, 2014; Hill, Lomas, & MacGregor, 2003; Letcher & Neves, 2010; Malik, Danish, & Usman, 2010; Shurden, Santandreu, & Shurden, 2016). The focus of most studies is at the undergraduate level, involving U.S.- and U.K.-based students (Bennett, 2003; Douglas et al., 2014), although other countries have been studied (e.g.,


Greece [Nadiri et al., 2009], Pakistan [Malik et al., 2010], and the UAE [Dodeen, 2016]). A central question the above studies look to address is as follows: What are the most and least important drivers of overall student satisfaction with teaching? Or, as Hearn (1985) puts it in one of the earliest studies on this topic: “how do students weight the various domains of satisfaction and dissatisfaction (e.g., faculty availability, faculty teaching ability) in arriving at their levels of overall program satisfaction?” (Hearn, 1985, p. 415).

Following this literature, we propose a set of hypotheses on the relative importance of the teaching-, assessment-, and feedback-related determinants of course satisfaction for BS subjects. We develop them around the eight questions found in the two general NSS categories of teaching effectiveness and assessment and feedback (the other four categories are academic support, course organization and management, learning resources, and personal development; see the NSS Questionnaire, Table 3). Moreover, as we wish to inform academics, deans of BSs, and government policy makers about the relative importance of these teaching-related drivers of satisfaction, we use the labels “strong,” “moderate,” and “weak.” These refer to the importance of each driver as determined by its ranking position vis-à-vis all other drivers (i.e., explanatory variables in our model). “Strong” refers to a driver ranked in the upper quartile of all drivers, “weak” to one in the bottom quartile, and “moderate” to all else in between.

Course Teaching as a Determinant of Satisfaction

Empirical research on general student satisfaction has typically (and perhaps unsurprisingly) found strong (i.e., comparatively large coefficients in the empirical regression analysis) and statistically significant relationships between survey questions gauging various aspects of teaching quality and overall course satisfaction (DeShields et al., 2005; Hearn, 1985; Krahn & Bowlby, 1997; Letcher & Neves, 2010; Thomas & Galambos, 2004). Hearn (1985), for example, found especially strong effects “from indicators of teaching ability” (Hearn, 1985, p. 421). Subsequently, Krahn and Bowlby (1997) found teaching quality to be important: “our study demonstrates much more conclusively that the experience of good teaching translates into greater satisfaction with the overall university experience” (Krahn & Bowlby, 1997, p. 171). Green et al. (2015) confirm this viewpoint in their summary of the literature on course satisfaction: “Teaching variables, particularly teaching quality and expertise, tend to show the strongest relationships with student satisfaction” (Green et al., 2015, p. 131).


Looking specifically at studies on BS student satisfaction, teaching quality similarly emerges as an important determinant (Gibson, 2010). Bennett (2003), for example, looking at satisfaction levels in one U.K. BS, confirms the “critical importance of teaching quality as a determinant of student satisfaction” (Bennett, 2003, p. 137). DeShields et al. (2005), looking at a U.S. BS, find faculty and classes to be “key factors” in influencing satisfaction (p. 137), as do Letcher and Neves (2010). In general, the literature on student satisfaction suggests teaching quality has a strong positive influence on satisfaction, which is perhaps unsurprising. What particular aspects of teaching quality, however, are most important to students? In this regard, the current literature lacks detail. The methodologies employed often use somewhat broad survey questions. Within the U.K. NSS, however, there is a comparatively fine level of detail. There are four questions, for example, related to teaching quality (in the first section of the NSS). While we cannot be certain which aspects of teaching are most important for students, based on the findings of existing empirical research we predict each of these to have a potentially strong positive impact on overall reported student satisfaction. This is based on the general finding of a strong positive relationship for teaching questions as a whole.

Hypothesis 1a: Staff that are good at explaining things will have a strong and positive impact on overall satisfaction with course quality for BS students.

Hypothesis 1b: Staff that make the subject matter interesting will have a strong positive impact on overall satisfaction with course quality for BS students.

Hypothesis 1c: Staff that are enthusiastic about what they are teaching will have a strong positive impact on overall satisfaction with course quality for BS students.

Hypothesis 1d: Intellectual stimulation is a strong determinant of overall satisfaction with course quality for BS students.

Impact of Assessment and Feedback on Course Satisfaction

Studies focusing on the determinants of student course satisfaction are not very clear on the impacts of assessment and feedback quality. Hearn (1985), for example, has no instrument to gauge the impacts of assessment and feedback. Similarly, many later studies lack coverage of assessment and feedback (Athiyaman, 1997; Broder & Dorfman, 1994). Krahn and Bowlby (1997) have a questionnaire item on feedback (“instructors provided helpful feedback throughout courses”). However, they use


factor analysis to create a single generalized “teaching environment” variable (composed of nine questions). The specific impact of different elements of assessment and feedback, therefore, cannot be isolated. The first NSS assessment and feedback question relates to the clarity of marking criteria. In the overall scheme of an undergraduate course, we suspect this has a rather limited impact on overall satisfaction.

Hypothesis 2a: Clear marking criteria are a weak driver of course satis- faction for BS students.

Rienties et al. (2015), in one of the few useful studies in the area of assessment and feedback (albeit looking at module-, not course-level satisfaction), found that assessment considerations were the second most important driver of overall learning satisfaction. Kandiko and Mawer (2013), using multiple focus group discussions, found that the perception of thoroughness and fairness in the assessment process was important to all U.K. students. We suspect these findings may translate to the course or program level and hypothesize that concern with the fairness of assessments and marking processes is likely to play at least a moderate role in shaping satisfaction.

Hypothesis 2b: Fair assessment and marking arrangements are a moder- ate driver of course satisfaction for BS students.

Summative and formative assessment feedback, as mentioned, is not covered in most of the empirical studies of satisfaction determinants. Formative coursework should, in theory, strongly facilitate learning. If learning is important to the formation of satisfaction with quality, it should strongly drive satisfaction. This being said, the volume of such feedback is often limited and feedback, it is further suggested, is often poorly understood by students (Kandiko & Mawer, 2013; Weaver, 2006). In the context of all other potential factors, we therefore hypothesize that feedback quality and timeliness have at most a moderate impact on course satisfaction. The final three questions of section two of the U.K. NSS deal with these aspects of feedback delivery.

Hypothesis 2c: Timeliness of feedback on assessments is a moderate driver of overall course satisfaction for BS students.

Hypothesis 2d: The detail of feedback on assessments is a moderate driver of overall course satisfaction for BS students.

Hypothesis 2e: Feedback which helps clarify misunderstandings is a moderate driver of overall course satisfaction.


How Do the Teaching, Assessment, and Feedback Drivers of Reported Student Satisfaction Vary Between BS and NBS Courses?

How do the weights on teaching-, assessment-, and feedback-related satisfaction drivers vary between BS and NBS students? Satisfaction drivers have been found to vary across academic fields, differences referred to in the pedagogic literature as “field differences” (Hearn, 1985). We now develop three hypotheses related to potential field differences between BS and NBS students.

Within the pedagogic literature, students have been thought of as adopting either a deep or a surface approach to their learning (Marton & Saljo, 1976). Deep learning involves attempting to understand underlying concepts and ideas to find meaning. It implies high levels of intellectual engagement with a subject. Rather than simply learning for extrinsic reasons (to pass tests, meet targets, and gain qualifications), deep learners are motivated by intrinsic reasons such as a desire to find enlightenment via improved conceptual understanding (Entwistle & Tate, 1990; Lucas & Myer, 2005). So-called instrumental learning has some similarities to surface and strategic learning (Dyer & Hurd, 2016; Prosser & Trigwell, 1999) but is more focused on desired outcomes, namely attaining a good degree (Ottewill, 2003). Some evidence suggests students have a preference toward BS-related subjects for extrinsic reasons, for example, to improve starting salary prospects by possessing a good degree from a good university. To do so, it has been suggested, they may be more prone to adopting an approach that is focused on achieving grades rather than mastering the subject (Koris, Ortenblad, & Ojala, 2016; Neves & Hillman, 2016; Ottewill & MacFarlane, 2003). Much of the management learning literature focuses on the unique characteristics of BS students (Wang, Malhotra, & Murnighan, 2012). In particular, BS students are considered more strongly driven by self-interest and personal gain than other students (Arieli, Sagiv, & Cohen-Shalem, 2015). As such, a tendency toward “instrumental” learning has been identified in the BS context (Ottewill & MacFarlane, 2003; Rynes, Lawson, Ilies, & Trank, 2003). Thus, a starting point for developing hypotheses on the differences between drivers of satisfaction in BS and NBS students is that the former are, on the whole, more likely than the rest of the general U.K. student population to adopt an instrumental approach to their studies (Ottewill, 2003; Ottewill & MacFarlane, 2003). This, in turn, may shape their perception of teaching quality.

It is suggested instrumental learners show “antipathy towards subjects that are not self-evidently relevant or make considerable intellectual demands” (Ottewill, 2003, p. 189). Looking at the specific teaching items on the U.K. NSS


Questionnaire (see Table 3), we might predict that BS students are less concerned with intellectual stimulation when taking their degree programs (Question 4).

Hypothesis 3a: Intellectual stimulation is a weaker driver of satisfaction in BS than NBS students.

Instrumental learners also have “a high degree of dependence on tutors” (Ottewill, 2003, p. 189). We might also predict BS students to be more concerned with receiving clear, practical instructions about how to cover course materials and successfully complete their course. This is because they may prefer being given solutions or answers to questions rather than discovering and creating meaning for themselves. We hypothesize, therefore, that BS students place a higher premium on clear explanations (NSS Question 1) but attach lesser importance to intellectual stimulation.

Hypothesis 3b: Clarity of explanation is a stronger driver of satisfaction in BS than NBS students.

BS students may wish to obtain knowledge of how to do business and gain qualifications that can lead to employment or better business opportunities. An overriding purpose of attending university is to achieve a positive outcome, namely a good degree which may lead to a good job. This instrumental approach, it has been suggested, leads to “an unhealthy preoccupation with summative assessment” in BS students (Ottewill, 2003, p. 189). As a result, their sensitivity to assessment processes may well be more acute than that of students studying other subjects. This leads to our final hypothesis.

Hypothesis 3c: Fair assessment and marking is a stronger driver of stu- dent satisfaction in BS than NBS students.

Method

Following similar approaches used in earlier student satisfaction studies, regression analysis was employed to explore the statistical significance as well as the relative magnitudes of student satisfaction determinants (Hearn, 1985; Krahn & Bowlby, 1997; Nadiri et al., 2009; Rienties et al., 2015; Tessema et al., 2012). We use ordinary least squares (OLS) and include as explanatory variables all 21 items from the six U.K. NSS categories, including the eight questions on teaching and on assessment and feedback. By doing so, we can attempt to decompose the impacts of specific explanatory variables, following


the approach used by others (Hearn, 1985; Krahn & Bowlby, 1997). We do not, therefore, initially look to employ factor analysis for the purpose of creating composite variables (for further exploration of the data, however, we do; see the later Discussion section). An advantage of this approach is that it allows us to explore individual drivers of satisfaction in more specific detail.

We use pooled data from 5 years of the U.K. NSS (2012-2016). We focus on all full-time students.1 We use the averages of all 22 NSS questions for course-level responses, for all completed student responses, at the institutional level. These items are ordered into six NSS general categories (see Tables 2 and 3). The questions use a 5-point Likert-type scale (1 = strongly disagree, 5 = strongly agree) and are only publishable if there are at least 10 responses with a response rate of at least 50% for each course. The NSS involves approximately 275 U.K. higher education institutions annually reporting around 4,000 final average course subject-level evaluations at the Joint Academic Coding System (JACS) subject level two.2 We use the JAC Level 2 level of disaggregation as it allows us to identify all institutions offering BS-related subjects. Here, we use the categories of “business” (JAC Code 25), “management” (26), “economics” (19), “finance and accounting” (27), and “tourism, transport, travel, and others in business and administrative studies” (28) to represent BS-related courses (i.e., subjects often taught within BSs). Of the 20,054 institutional responses reported over the 5-year period, 2,887 were BS-related courses. We converted the reported percentage shares of respondents to the 22 standard questions (using the 1-5 Likert-type scale) of the survey into a final average figure, ranging from 1 to 5 (for each of the 22 questions). Thus, for each variable an average score for each course by institution, ranging from a theoretical minimum of 1 to a maximum of 5, was obtained (Table 3).
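As a concrete illustration of this conversion, the following minimal sketch (Python/pandas; the column names and the single example row are hypothetical, since the published NSS data layout is not reproduced here) turns reported percentage shares over the 1-5 scale into a course-level mean score:

```python
import pandas as pd

# Hypothetical layout: one row per course/institution/year, with the
# percentage of respondents choosing each point of the 1-5 Likert scale
# for a given NSS question (the shares sum to 100).
shares = pd.DataFrame({
    "pct_1": [2.0], "pct_2": [5.0], "pct_3": [13.0],
    "pct_4": [45.0], "pct_5": [35.0],
})

def likert_mean(row):
    """Convert percentage shares over the 1-5 scale into a mean score."""
    return sum(k * row[f"pct_{k}"] for k in range(1, 6)) / 100.0

shares["q_mean"] = shares.apply(likert_mean, axis=1)
print(shares["q_mean"])  # 4.06 for the example row
```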

Our dependent variable, similar to Lenton’s (2015) study, is NSS Question 22, “Overall, I am satisfied with the quality of the course,” averaged for each course (at JAC Level 2) by each institution. Independent variables included in our study are NSS Questions 1 to 21 (see Table 3) plus year dummy variables and a BS-related subject dummy variable. Additionally, following Hearn’s (1985) standard econometric approach for testing differences between coefficients, BS interaction dummy variables are introduced. The BS dummy is classified as one if the course falls into JAC Level 2 Categories 19, 25, 26, 27, or 28. We do not standardize the data as in other studies (Broder & Dorfman, 1994; Hearn, 1985), as all variables use identical Likert scales. We run the model using the BS sample (1), the NBS sample (2), and the combined full sample (3). Using the business-related subject dummy variable, we then create a further 21 dummy interaction terms for each of the explanatory variables and introduce them (labelled as “Interactions” in Table 2) along


with the intercept dummy in the full sample. This allows us to statistically test for “field differences” between the magnitudes of the coefficients on each of the explanatory variables for the BS and NBS groups (Hearn, 1985). If an interaction coefficient is significant, it suggests the impact of the given explanatory variable differs between the BS and NBS groups. We drop insignificant interaction terms, testing them individually and finally as a group simultaneously.
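A sketch of this specification is given below using statsmodels' formula interface. The file name and column names (q1 to q22, year, and a 0/1 bs dummy) are our illustrative assumptions; the paper does not state which estimation software was used.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical course-level file: one row per course/institution/year,
# with columns q1..q22 (NSS question means), year, and bs (0/1 dummy).
df = pd.read_csv("nss_course_level.csv")

questions = " + ".join(f"q{i}" for i in range(1, 22))        # Questions 1-21
interactions = " + ".join(f"bs:q{i}" for i in range(1, 22))  # 21 BS interaction terms
formula = f"q22 ~ {questions} + C(year) + bs + {interactions}"

model = smf.ols(formula, data=df).fit(cov_type="HC1")  # heteroscedasticity-robust SEs
print(model.summary())

# Interaction terms can then be tested individually (the t tests in the
# summary) and as a group, e.g. a Wald test on a pair of candidate drops:
print(model.wald_test("bs:q7 = 0, bs:q8 = 0"))
```

Fitting the same formula without the bs terms to the BS-only and NBS-only subsamples would then give the first two columns of Table 2.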

As noted, for our first and second groups of hypotheses, we classify drivers as “strong” if they are in the upper quartile by coefficient ranking or “weak” if in the lower quartile by rank. “Moderate” lies in between (see Table 6).
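The quartile labelling can be made mechanical along the following lines (a sketch; it assumes the 21 question coefficients have been pulled out of the fitted model into a pandas Series, and that ranking is by absolute coefficient size, as the ordering in Table 6 suggests):

```python
import pandas as pd

def classify_drivers(coefs: pd.Series) -> pd.DataFrame:
    """Label each driver strong/moderate/weak by quartile of its rank.

    `coefs` holds the 21 question coefficients; rank 1 = largest in
    absolute size. Top quartile -> strong, bottom quartile -> weak.
    """
    ranks = coefs.abs().rank(ascending=False)
    n = len(coefs)
    labels = pd.cut(ranks, bins=[0, n / 4, 3 * n / 4, n],
                    labels=["strong", "moderate", "weak"])
    return pd.DataFrame({"coef": coefs, "rank": ranks, "label": labels})

# Example use: classify_drivers(model.params[[f"q{i}" for i in range(1, 22)]])
```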

Likert-Type Scales and Use of OLS

The question of whether sample averages of Likert-type scale responses can be meaningfully employed in OLS regression analysis is debated. Ideally, of course, we would use ordered logit modelling on the 1.6 million individual student responses. These data, however, are not publicly available. On the one hand, some argue parametric tests cannot be used on Likert-type scales or their averages, as the underlying responses are nonparametric, based as they are on ordinal, not interval, data (Jamieson, 2004). On the other hand, it has been forcefully argued that such critics misunderstand parametric testing and that OLS can be employed on Likert averages. The nonnormality and skewness typical of Likert data, for example, are not an issue: parametric statistics assume normality in the distribution of sample means, following the central limit theorem, not in the data itself. In practice, moreover, Pearson correlation is found to be “robust with respect to skewness and non-normality” (Norman, 2010, p. 629). Converting ordinal data to interval data, via, for example, the addition of different ordinal responses (as we do), is, moreover, theoretically justifiable (Norman, 2010). Norman (2010) concludes:

Parametric statistics can be used with Likert data, with small sample sizes, with unequal variances, and with non-normal distributions with no fear of “coming to the wrong conclusion.” These findings are consistent with empirical literature dating back nearly 80 years. (p. 631)

In short, OLS on averages of Likert-type scales is commonly used across a broad range of academic disciplines, and there is theoretical and practical justification for it (i.e., the results are reliable). Recently, for example, Lenton (2015) used a similar dependent variable. By using this approach, we are able to draw from a much larger student population (1.6 million student responses) and from a much broader range of universities than any


previous studies. In BS-specific studies, for example, DeShields et al. (2005) used 143 student questionnaires (years not stated, U.S.-based students); Letcher and Neves (2010) 352 (between 2004 and 2008, U.S. undergraduates); Bennett (2003) 377 (U.K. undergraduates); and Malik et al. (2010) 240 (Pakistan-based students). To date, therefore, in total around 1,100 student responses taken from different countries in different time periods have been used to analyze the drivers of BS student satisfaction. By contrast, our total sample consists of 245,469 BS student responses, which we compare against over 1 million NBS responses (see Table 1).

Diagnostic and Robustness Tests

Our data exhibit some of the issues commonly encountered with Likert data (i.e., positive skewness; Tables 2 and 3). We therefore undertake a series of additional tests. These include, first, quantile regression analysis, suggested as one suitable approach for data with skewed distributions. Second, we Winsorized our data at the 5% level (to remove the outliers causing skewness). All results remained basically unchanged and consistent with our original OLS estimates. Pairwise correlations are given in Table 4.
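These two checks can be sketched as follows (continuing the earlier illustration; `formula` and `df` are the hypothetical names introduced in the OLS sketch above):

```python
import statsmodels.formula.api as smf
from scipy.stats.mstats import winsorize

# (1) Median (quantile) regression, less sensitive to skewed outcomes.
median_fit = smf.quantreg(formula, data=df).fit(q=0.5)

# (2) Winsorize every question mean at the 5% level, then re-run OLS.
df_w = df.copy()
for col in [f"q{i}" for i in range(1, 23)]:
    df_w[col] = winsorize(df_w[col], limits=(0.05, 0.05))
winsor_fit = smf.ols(formula, data=df_w).fit()
```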

Furthermore, visual analysis of the predicted error terms (via histograms) suggests the normality assumption is met, albeit heteroscedasticity may be present. We addressed this issue by using robust standard errors as well as by employing a number of other remedial approaches (i.e., logarithmic transformations) to explore the robustness of our results. We tested the degree of multicollinearity between explanatory variables using variance inflation factors (VIF; with maximum values of 6). Owing to the relatively large sample size and relatively low VIF results, we do not consider multicollinearity to be problematic to the interpretation of our results.
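The VIF check can be sketched as follows (again on the hypothetical `df` from the earlier illustration):

```python
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.tools import add_constant

# VIF for each of the 21 question means; a constant is added so the
# VIFs are computed against a design matrix with an intercept.
X = add_constant(df[[f"q{i}" for i in range(1, 22)]])
vifs = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vifs.sort_values(ascending=False))  # flag any values much above ~6
```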

Table 1. Number of NSS Responses by Business School-Related Topics at JAC Level 2, 2012-2016.

| JAC subject | 2012 | 2013 | 2014 | 2015 | 2016 | Total |
|---|---|---|---|---|---|---|
| 19 Economics | 6,875 | 7,293 | 8,196 | 8,011 | 8,187 | 38,562 |
| 25 Business | 16,696 | 17,969 | 19,489 | 19,013 | 18,723 | 91,890 |
| 26 Management | 7,881 | 8,433 | 9,248 | 9,349 | 9,733 | 44,644 |
| 27 Accounting and finance | 8,405 | 9,476 | 10,479 | 10,455 | 10,654 | 49,469 |
| 28 Tourism, etc. | 5,396 | 5,716 | 6,203 | 5,517 | 5,572 | 28,404 |
| Total BS responses | 45,253 | 41,594 | 53,615 | 52,345 | 52,869 | 245,676 |
| All responses, BS + NBS | 291,987 | 312,940 | 334,610 | 341,824 | 324,633 | 1,605,994 |
| BS as % of BS + NBS | 15.5% | 13.3% | 16% | 15.3% | 16.3% | 15.3% |

Note. NSS = National Student Survey; JAC = Joint Academic Coding; BS = business school; NBS = nonbusiness school courses.

Table 2. OLS Regression Results, Dependent Variable: “Overall, I Am Satisfied With the Quality of the Course.”

| NSS questions | BS | NBS | BS + NBS, full sample | Interactions |
|---|---|---|---|---|
| Teaching (1-4) | | | | |
| 1. Staff are good at explaining things | 0.144*** (7.38) | 0.0694*** (8.43) | 0.0810*** (10.69) | 0.0734** (3.23) |
| 2. Staff have made the subject interesting | 0.0483** (2.73) | 0.0878*** (11.54) | 0.0822*** (11.78) | −0.0385 (−1.86) |
| 3. Staff are enthusiastic about what they are teaching | 0.0368* (2.38) | 0.0373*** (5.54) | 0.0337*** (5.47) | −0.000897 (−0.05) |
| 4. The course is intellectually stimulating | 0.181*** (14) | 0.242*** (45.44) | 0.232*** (48.16) | −0.0608*** (−4.04) |
| Assessment and feedback (5-9) | | | | |
| 5. The criteria used in marking have been clear in advance | −0.0265* (−2.42) | −0.00929* (−2.06) | −0.0109** (−2.61) | −0.017 (−1.33) |
| 6. Assessment arrangements and marking have been fair | 0.107*** (9) | 0.0449*** (8.77) | 0.0523*** (11.1) | 0.0617*** (4.44) |
| 7. Feedback on my work has been prompt | −0.0138 (−1.50) | −0.00308 (−0.95) | −0.0043 (−1.41) | −0.0101 (−0.96) |
| 8. I have received detailed comments on my work | 0.0113 (0.91) | 0.00789 (1.53) | 0.00836 (1.76) | 0.00281 (0.19) |
| 9. Feedback on my work has helped me clarify things I did not understand | −0.0117 (−0.85) | −0.0320*** (−5.36) | −0.0302*** (−5.51) | 0.0216 (1.35) |
| Academic support (10-12) | | | | |
| 10. I have received sufficient advice and support with my studies | 0.100*** (6.17) | 0.153*** (21.1) | 0.147*** (22.2) | −0.0525** (−2.75) |
| 11. I have been able to contact staff when I needed to | −0.00246 (−0.20) | 0.00915 (1.73) | 0.00754 (1.55) | −0.0119 (−0.82) |
| 12. Good advice was available when I needed to make study choices | 0.0299 (1.94) | 0.0168* (2.47) | 0.0175** (2.81) | 0.0153 (0.85) |
| Organization and management (13-15) | | | | |
| 13. The timetable works efficiently as far as my activities are concerned | −0.0659*** (−7.58) | −0.0413*** (−10.66) | −0.0453*** (−12.77) | −0.0238* (−2.33) |
| 14. Changes in the course or teaching have been communicated effectively | −0.0138 (−1.18) | −0.0155** (−3.19) | −0.0165*** (−3.69) | 0.0023 (−0.17) |
| 15. The course is well organized and is running smoothly | 0.323*** (27.3) | 0.320*** (67.1) | 0.323*** (74.3) | 0.000186 (−0.01) |
| Learning resources (16-18) | | | | |
| 16. The library resources and services are good enough for my needs | 0.0456*** (4.81) | 0.0397*** (10.95) | 0.0409*** (12.09) | 0.00558 (0.51) |
| 17. I have been able to access general IT resources when I needed to | −0.00724 (−0.56) | −0.00826 (−1.65) | −0.0100* (−2.15) | 0.00286 (−0.19) |
| 18. Able to access specialized equipment, facilities, or rooms when needed | −0.0193 (−1.48) | 0.0197*** (4.19) | 0.0163*** (3.69) | −0.0383* (−2.55) |
| Personal development (19-21) | | | | |
| 19. The course has helped me present myself with confidence | 0.125*** (6.35) | 0.115*** (13.59) | 0.118*** (15.13) | 0.00884 (0.38) |
| 20. My communication skills have improved | 0.0427* (2.28) | 0.0135 (1.76) | 0.0199** (2.81) | 0.0293 (1.35) |
| 21. As a result of the course, I feel confident in tackling unfamiliar problems | 0.135*** (6.57) | 0.131*** (15.5) | 0.131*** (16.7) | 0.00482 (0.2) |
| yr2013 | −0.0104 (−1.62) | −0.00477 (−1.65) | −0.00548* (−2.07) | −0.00559* (−2.11) |
| yr2014 | −0.00408 (−0.63) | −0.00930** (−3.19) | −0.00846** (−3.17) | −0.00858** (−3.22) |
| yr2015 | −0.0259*** (−4.12) | −0.0296*** (−10.61) | −0.0294*** (−11.47) | −0.0291*** (−11.39) |
| yr2016 | −0.0283*** (−4.40) | −0.0336*** (−11.74) | −0.0331*** (−12.63) | −0.0329*** (−12.55) |
| _cons | −0.609*** (−11.65) | −0.724*** (−32.45) | −0.705*** (−34.51) | −0.723*** (−32.85) |
| Business school dummy | — | — | — | 0.104 (1.71) |
| N | 2,887 | 17,167 | 20,054 | 20,054 |
| Adjusted R² | .883 | .892 | .891 | .891 |

Note. OLS = ordinary least squares; BS = business school; NBS = nonbusiness school courses.
*p < 0.05, **p < 0.01, ***p < 0.001.


Table 3. NSS Questions and Their Descriptive Statistics.

| NSS question | M | SD | Min | Max |
|---|---|---|---|---|
| The teaching on my course | | | | |
| 1. Staff are good at explaining things | 4.17 | 0.24 | 2.04 | 5 |
| 2. Staff have made the subject interesting | 4.07 | 0.28 | 1.85 | 5 |
| 3. Staff are enthusiastic about what they are teaching | 4.28 | 0.28 | 1.97 | 5 |
| 4. The course is intellectually stimulating | 4.19 | 0.31 | 2.25 | 5 |
| Assessment and feedback | | | | |
| 5. The criteria used in marking have been clear in advance | 3.98 | 0.32 | 1.89 | 5 |
| 6. Assessment arrangements and marking have been fair | 3.98 | 0.3 | 1.86 | 5 |
| 7. Feedback on my work has been prompt | 3.73 | 0.43 | 1.34 | 5 |
| 8. I have received detailed comments on my work | 3.87 | 0.4 | 1.76 | 5 |
| 9. Feedback on my work has helped me clarify things I did not understand | 3.78 | 0.37 | 1.68 | 5 |
| Academic support | | | | |
| 10. I have received sufficient advice and support with my studies | 4.04 | 0.29 | 2.13 | 5 |
| 11. I have been able to contact staff when I needed to | 4.26 | 0.29 | 1.9 | 5 |
| 12. Good advice was available when I needed to make study choices | 4.05 | 0.29 | 1.92 | 5 |
| Organization and management | | | | |
| 13. The timetable works efficiently as far as my activities are concerned | 4.09 | 0.33 | 1.69 | 5 |
| 14. Any changes in the course or teaching have been communicated effectively | 3.98 | 0.41 | 1.38 | 5 |
| 15. The course is well organized and is running smoothly | 3.91 | 0.46 | 1.22 | 5 |
| Learning resources | | | | |
| 16. The library resources and services are good enough for my needs | 4.18 | 0.38 | 1.65 | 5 |
| 17. I have been able to access general IT resources when I needed to | 4.26 | 0.31 | 1.77 | 5 |
| 18. I have been able to access specialized equipment, facilities, or rooms when I needed to | 4.11 | 0.33 | 1.64 | 5 |
| Personal development | | | | |
| 19. The course has helped me present myself with confidence | 4.14 | 0.26 | 1.89 | 5 |
| 20. My communication skills have improved | 4.27 | 0.25 | 2.15 | 5 |
| 21. As a result of the course, I feel confident in tackling unfamiliar problems | 4.17 | 0.25 | 2 | 5 |
| Overall satisfaction | | | | |
| 22. Overall, I am satisfied with the quality of the course | 4.16 | 0.34 | 1.54 | 5 |

Note. NSS = National Student Survey.

Table 4. Pairwise Correlations.

|  | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 | Q10 | Q11 | Q12 | Q13 | Q14 | Q15 | Q16 | Q17 | Q18 | Q19 | Q20 | Q21 | Q22 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Q1 | 1 | | | | | | | | | | | | | | | | | | | | | |
| Q2 | .82 | 1 | | | | | | | | | | | | | | | | | | | | |
| Q3 | .79 | .86 | 1 | | | | | | | | | | | | | | | | | | | |
| Q4 | .72 | .76 | .71 | 1 | | | | | | | | | | | | | | | | | | |
| Q5 | .61 | .49 | .48 | .37 | 1 | | | | | | | | | | | | | | | | | |
| Q6 | .66 | .56 | .55 | .49 | .72 | 1 | | | | | | | | | | | | | | | | |
| Q7 | .57 | .52 | .53 | .44 | .61 | .62 | 1 | | | | | | | | | | | | | | | |
| Q8 | .59 | .6 | .59 | .36 | .61 | .6 | .72 | 1 | | | | | | | | | | | | | | |
| Q9 | .64 | .63 | .59 | .44 | .65 | .7 | .72 | .88 | 1 | | | | | | | | | | | | | |
| Q10 | .78 | .72 | .72 | .61 | .64 | .69 | .61 | .62 | .71 | 1 | | | | | | | | | | | | |
| Q11 | .67 | .58 | .62 | .59 | .47 | .56 | .52 | .42 | .49 | .74 | 1 | | | | | | | | | | | |
| Q12 | .74 | .7 | .69 | .59 | .6 | .65 | .6 | .6 | .68 | .87 | .75 | 1 | | | | | | | | | | |
| Q13 | .54 | .48 | .47 | .48 | .42 | .48 | .39 | .37 | .41 | .52 | .51 | .5 | 1 | | | | | | | | | |
| Q14 | .61 | .49 | .53 | .57 | .49 | .53 | .5 | .36 | .42 | .59 | .65 | .58 | .71 | 1 | | | | | | | | |
| Q15 | .68 | .55 | .58 | .64 | .53 | .57 | .53 | .39 | .44 | .63 | .68 | .6 | .69 | .89 | 1 | | | | | | | |
| Q16 | .19 | .16 | .19 | .28 | .13 | .17 | .19 | .01 | .09 | .23 | .29 | .25 | .18 | .31 | .3 | 1 | | | | | | |
| Q17 | .24 | .21 | .22 | .32 | .18 | .21 | .22 | .06 | .15 | .3 | .34 | .31 | .22 | .32 | .32 | .76 | 1 | | | | | |
| Q18 | .36 | .32 | .34 | .42 | .27 | .29 | .29 | .12 | .21 | .4 | .44 | .42 | .32 | .44 | .45 | .71 | .78 | 1 | | | | |
| Q19 | .65 | .66 | .61 | .58 | .54 | .49 | .47 | .49 | .54 | .69 | .51 | .68 | .41 | .44 | .49 | .26 | .31 | .4 | 1 | | | |
| Q20 | .56 | .58 | .55 | .53 | .44 | .36 | .39 | .39 | .43 | .58 | .44 | .58 | .33 | .37 | .41 | .27 | .3 | .4 | .87 | 1 | | |
| Q21 | .65 | .64 | .61 | .65 | .49 | .49 | .47 | .42 | .51 | .68 | .54 | .67 | .41 | .48 | .53 | .31 | .36 | .46 | .87 | .84 | 1 | |
| Q22 | .81 | .75 | .74 | .8 | .57 | .64 | .57 | .49 | .56 | .78 | .71 | .74 | .59 | .74 | .83 | .34 | .37 | .5 | .71 | .63 | .74 | 1 |

Note. Q = National Student Survey questions.



Omitted variables could potentially bias our estimates. The adjusted R² of our model, however, at around .9, is very high: about 90% of the variance in satisfaction is explained by our explanatory variables. This is considerably higher than in similar previous satisfaction studies, which vary between .4 and .6. While it is possible we have omitted other important explanatory variables from our model, we think this improbable given its high overall explanatory power (based on the comprehensive 21 questions from the U.K. NSS). It could be that such things as course size or the prestige of the university (e.g., whether it is a research-focused Russell Group university in the United Kingdom) influence satisfaction. We ran models with these additional explanatory variables but found them all insignificant.3

Results

Results Related to Overall Drivers of Satisfaction

Course Teaching (Hypotheses 1a, 1b, 1c, and 1d). Course teaching, perhaps unsurprisingly, is an important category driving overall satisfaction. The significant coefficients for Questions 1 to 4 of the NSS Questionnaire, for example, sum to 0.41 for BS courses (and 0.44 for NBS courses; Table 5). All coefficients are significant (at the 5% level and above) and many highly so (at the 0.1% level). The combined impact of teaching (coefficients on Questions 1-4) is considerably larger than for any of the five remaining categories (i.e., assessment and feedback, academic support, organization and management, learning resources, and personal development; see Table 5). The second strongest category, for example, is “personal development” (0.27), followed closely by organization and management (0.26).
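The category sums reported in Table 5 can be reproduced along these lines (a sketch; `model` stands for a fitted per-sample regression as in the earlier illustration, and only coefficients significant at the 5% level are added):

```python
# NSS question groupings by category, as used in the paper.
categories = {
    "Teaching": range(1, 5),
    "Assessment and feedback": range(5, 10),
    "Academic support": range(10, 13),
    "Organization and management": range(13, 16),
    "Learning resources": range(16, 19),
    "Personal development": range(19, 22),
}
for name, qs in categories.items():
    total = sum(model.params[f"q{i}"]
                for i in qs if model.pvalues[f"q{i}"] < 0.05)
    print(f"{name}: {total:.3f}")
```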

Although the coefficients on the teaching-related questions are positive and significant, they do not all, however, register as “strong” drivers when ranked against the other explanatory variables in the model. In fact, only Hypotheses 1a and 1d are supported, albeit the drivers in Hypothesis 1b (“staff have made the subject interesting”) and Hypothesis 1c (“staff are enthusiastic about what they are teaching”) are still moderate (and both statistically significant).


Assessment and Feedback (Hypotheses 2a, 2b, 2c, 2d, and 2e). Interestingly, the assessment and feedback category as a whole in the U.K. NSS population appears to have a relatively weak impact (the combined coefficients, for example, sum to 0.011). The category, however, conceals considerable variation in the coefficients. Care with interpretation is also required. Fair assessments and marking, for example, have a moderate impact on overall course satisfaction in the BS sample, supporting Hypothesis 2b. Feedback, however, appears to have limited impact (NSS Questions 7, 8, and 9). Hypothesis 2c, proposing a moderate impact of timely feedback, is therefore not supported. Similarly, Hypotheses 2d and 2e are not supported: neither the detail of feedback nor feedback that clarifies misunderstandings emerges as even a moderate driver of satisfaction.

Differences in Teaching, Assessment, and Feedback-Related Drivers in BS and NBS Samples (Hypotheses 3a, 3b, and 3c). Table 2 shows that for Question 1 on the NSS the coefficient is significantly larger for BS students, by 0.075 at the 1% significance level. For Question 4, by contrast, it is significantly lower, by −0.06 at the 0.1% significance level. BS students are less concerned about “intellectual stimulation”; rather, clarity of explanations is more important. This supports Hypotheses 3a and 3b. Fair assessment and marking, moreover, is a stronger driver of student satisfaction for BS than for NBS students. For BS students, the impact of Question 6 (“Assessment arrangements and marking have been fair”) on overall perception of quality is considerably higher than for NBS students (0.1 compared with 0.05, almost double), supporting Hypothesis 3c. Question 6 ranks as the sixth most important determinant of satisfaction for BS students. By contrast, for NBS students it ranks eighth (Table 6).

Table 5. Sums of the Significant Coefficients Reported for the Six NSS Categories for BS/NBS Students.

| Category | BS courses | NBS courses | BS and NBS courses |
|---|---|---|---|
| Teaching (Questions 1-4) | 0.41 | 0.437 | 0.429 |
| Assessment and feedback (Questions 5-9) | 0.0805 | 0.0036 | 0.011 |
| Academic support (Questions 10-12) | 0.1 | 0.165 | 0.17 |
| Organization and management (Questions 13-15) | 0.257 | 0.263 | 0.261 |
| Learning resources (Questions 16-18) | 0.045 | 0.059 | 0.047 |
| Personal development (Questions 19-21) | 0.30 | 0.25 | 0.269 |

Note. NSS = National Student Survey; BS = business school; NBS = nonbusiness school courses. Source: Table 2.



Discussion

We first consider our broader findings regarding the main drivers of reported student satisfaction for BS students within the U.K. NSS survey as a whole. We then discuss the significance of our findings regarding differences in the drivers of reported satisfaction with quality for BS vis-à-vis NBS students.

The Central Importance of Clarity of Explanation, Intellectual Stimulation, and Organization

In some ways, it is reassuring to find that the most highly ranked drivers of satisfaction in the U.K. student undergraduate population are teaching related. Most students still perceive direct contact teaching time as one of the main benefits higher education has to offer (albeit ideas of exactly what constitutes teaching quality may vary between BS and NBS students).4 These findings are broadly consistent with earlier research on student satisfaction (Broder & Dorfman, 1994; Hearn, 1985; Letcher & Neves, 2010; Thomas & Galambos, 2004). When we dig deeper into which aspects of teaching drive satisfaction, we find intellectual stimulation still registers very highly, in both BS (2nd place) and NBS students (also 2nd). Perception of course quality is strongly related to intellectual stimulation and clarity of explanation. These findings are positive, in so far as they suggest overall student satisfaction is linked to features of university teaching that we would expect also to be important for learning.

Interestingly, NSS Question 15, “the course is well organized and is running smoothly” (in the “Course organization and management” section of the NSS survey), registers as the strongest driver of satisfaction for BS students (Table 6). This raises a further question: Does this question mostly capture the administrative side of course organization and management, or the side involving interaction in classes with teaching staff? There are several pieces of evidence pointing toward the latter interpretation. First, other items in the organization and management group more associated with the administrative side of course management (i.e., timetable scheduling, communications regarding course changes) show no positive relationship with satisfaction (and even negative ones; Table 2). Second, additional factor analysis of the 21 survey items shows a strong loading on one teaching factor, with NSS Question 15 on the smooth running of courses falling into it.
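For readers wishing to replicate this check, an exploratory factor analysis of the 21 items might look like the sketch below. The estimator, the varimax rotation, and the four retained factors are our illustrative assumptions, since the paper does not state which were used.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Fit a small number of factors to the 21 question means and inspect
# which items load together; Question 15 loading with the teaching
# items (Questions 1-4) would support the interpretation in the text.
X = df[[f"q{i}" for i in range(1, 22)]]
fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0).fit(X)
loadings = pd.DataFrame(fa.components_.T, index=X.columns)
print(loadings.round(2))
```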

Table 6. Ranking of Drivers of Satisfaction in BS and NBS Subjects.

| NSS questions (BS) | Coef. | NSS questions (NBS) | Coef. |
|---|---|---|---|
| 15. The course is well organized and is running smoothly | 0.32*** | 15. The course is well organized and is running smoothly | 0.32*** |
| **4. The course is intellectually stimulating** | 0.18*** | **4. The course is intellectually stimulating** | 0.24*** |
| **1. Staff are good at explaining things** | 0.14*** | **10. I have received sufficient advice and support with my studies** | 0.15*** |
| 21. As a result of the course, I feel confident in tackling unfamiliar problems | 0.14*** | 21. As a result of the course, I feel confident in tackling unfamiliar problems | 0.13*** |
| 19. The course has helped me present myself with confidence | 0.13*** | 19. The course has helped me present myself with confidence | 0.12*** |
| **6. Assessment arrangements and marking have been fair** | 0.11*** | 2. Staff have made the subject interesting | 0.088*** |
| **10. I have received sufficient advice and support with my studies** | 0.1*** | **1. Staff are good at explaining things** | 0.07*** |
| **13. The timetable works efficiently as far as my activities are concerned** | −0.067*** | **6. Assessment arrangements and marking have been fair** | 0.045*** |
| 2. Staff have made the subject interesting | 0.048** | **13. The timetable works efficiently as far as my activities are concerned** | −0.041*** |
| 16. The library resources and services are good enough for my needs | 0.046*** | 16. The library resources and services are good enough for my needs | 0.038*** |
| 20. My communication skills have improved | 0.0427* | 3. Staff are enthusiastic about what they are teaching | 0.037*** |
| 3. Staff are enthusiastic about what they are teaching | 0.0368* | 9. Feedback on my work has helped me clarify things I did not understand | −0.032*** |
| 5. The criteria used in marking have been clear in advance | −0.0265* | **18. Able to access specialized equipment, facilities, or rooms when needed** | 0.012*** |
| 12. Good advice was available when I needed to make study choices | 0.0299 | 12. Good advice was available when I needed to make study choices | 0.017* |
| 8. I have received detailed comments on my work | 0.0113 | 14. Changes in the course or teaching have been communicated effectively | −0.016** |
| 11. I have been able to contact staff when I needed to | −0.00246 | 5. The criteria used in marking have been clear in advance | −0.0093* |
| 17. I have been able to access general IT resources when I needed to | −0.00724 | 20. My communication skills have improved | 0.014 |
| 9. Feedback on my work has helped me clarify things I did not understand | −0.0117 | 11. I have been able to contact staff when I needed to | 0.0092 |
| 7. Feedback on my work has been prompt | −0.0138 | 8. I have received detailed comments on my work | 0.00789 |
| 14. Changes in the course or teaching have been communicated effectively | −0.0138 | 7. Feedback on my work has been prompt | −0.00308 |
| **18. Able to access specialized equipment, facilities, or rooms when needed** | −0.019 | 17. I have been able to access general IT resources when I needed to | −0.00826 |

Note. NSS = National Student Survey; BS = business school; NBS = nonbusiness school courses. Questions in bold highlight statistically significant differences in drivers of satisfaction between BS and NBS (as indicated by the significant interaction terms in Table 2).
*p < 0.05, **p < 0.01, ***p < 0.001.



Is this finding surprising? For most students, we would argue, firsthand experience of course organization and management stems directly from their daily interaction with teaching staff (in the classroom or via academic advising) rather than with administrators. A significant component of the "organization and management" element captured in the NSS survey thus likely reflects the efforts of teaching staff. This further reinforces our findings regarding the importance of teaching quality, suggesting that it is not just what academics teach but also how they teach and manage their modules. Some existing research at the module (not course) level supports this viewpoint. Thomas and Galambos (2004), for example, have shown that teacher "preparedness" is a strong driver of satisfaction. So, it might be reasonable to also expect a significantly positive impact of well-organized classes on course satisfaction.
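
To make the factor structure referred to above concrete, the sketch below shows one way such an exploratory factor analysis of NSS items could be carried out. It is a minimal illustration, not the authors' actual code: the file name nss_course_means.csv, the item columns q1–q21, and the choice of four factors are all assumptions.

```python
# Minimal sketch of an exploratory factor analysis over course-level means
# of the 21 NSS items. File name, column names, and the number of factors
# are illustrative assumptions, not details taken from the article.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

items = [f"q{i}" for i in range(1, 22)]            # NSS Questions 1-21
courses = pd.read_csv("nss_course_means.csv")      # one row per course

fa = FactorAnalysis(n_components=4, rotation="varimax", random_state=0)
fa.fit(courses[items])

# Rows are items, columns are factors; large absolute loadings indicate
# which factor each question falls into.
loadings = pd.DataFrame(fa.components_.T, index=items,
                        columns=[f"factor{j}" for j in range(1, 5)])
print(loadings.round(2))
print(loadings.loc["q15"])   # e.g., inspect where Question 15 loads
```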

Our findings additionally suggest that aspects of teaching that may be considered more superficial in nature, such as an enthusiastic outward teaching demeanor, do not greatly influence satisfaction (the coefficient on enthusiasm is relatively small). The NSS data suggest that students typically value content, delivery, and organization more highly than enthusiasm, although enthusiasm is not unimportant (Table 2). The high ranking of personal development as a satisfaction driver, moreover, indicates that students recognize what they may gain from higher education. These findings are supported by earlier research. Letcher and Neves (2010), for example, identify "self-confidence" as the single most important factor explaining satisfaction in their BS sample. Thomas and Galambos (2004) similarly found that what most satisfied students was perceived "intellectual development" (p. 258).

The Limited Importance of Timely, High-Quality Assessment Feedback

While many aspects of teaching delivery, such as intellectual stimulation and clarity of explanation, act as positive drivers of satisfaction, our findings regarding assessment and feedback, by contrast, give reason for concern. To date, comparatively little is known about how assessment shapes student satisfaction, and our findings may surprise some. The insignificant or marginally negative coefficients on most of the assessment-related variables suggest that promoting tighter marking turnaround deadlines, explaining marking criteria more clearly up front, or giving more detailed feedback may not greatly improve overall reported course satisfaction. In general, our findings imply that students are more concerned that their final mark reflects their efforts and capabilities and is "fair," rather than with how (i.e., what feedback says) or when this mark is actually arrived at. These findings should be of some interest to U.K. government policy makers responsible for developing the TEF, as well as to BS administrators and educators. Receiving adequate feedback is arguably of central importance to learning processes (O'Donovan, Rust, & Price, 2016). Assessed written work is an important means, possibly the most important, by which students in higher education receive critical feedback.

Interpreting these negative coefficients, of course, requires some care. Reverse causality in our model is an important consideration. It may be, for example, that students who received feedback that helped improve their understanding of a subject (i.e., Question 9) tend to be weaker students and, therefore, those who are on the whole more prone to being dissatisfied with their courses. We cannot rule out this possibility. That said, there are also valid reasons for believing that some negative relationships may exist. In the case of Question 5, regarding clarity of assessment criteria, being provided with long and detailed accounts of marking criteria is likely to be a distraction. Similarly, fast turnaround times (Question 7) may lead to the perception (or reality) that student coursework or assessments have not been properly marked. In other words, rushing to provide feedback may not improve satisfaction with quality.

Our results point toward the need for a more thorough investigation of the impact of assessment on perceptions of education quality. High-quality feedback is essential for learning to take place. If, however, perceived course quality is not strongly influenced by the assessment and feedback drivers we identify here, policy makers may need to think more carefully about the use of student satisfaction measures as indicators of quality teaching. If university ranking systems or policy makers use overall student satisfaction to rate educational quality, they may inadvertently penalize the institutions most actively engaged in best-practice learning and teaching activities (that is, those giving detailed and timely feedback). Such schools will see little benefit to their overall rankings, based as these are on overall satisfaction, despite devoting considerable resources to high-quality assessment and feedback mechanisms.

Instrumental Learning and Reported Satisfaction in U.K. Business Schools

As noted, instrumental learners are characterized as being more extrinsically driven than other learners (i.e., they study to get a good degree and enhanced career prospects). They typically focus on attaining qualifications rather than mastering the subject via "deep learning." They therefore have a preference for clear guidance during their studies. It has been suggested, for example, that they may exhibit "a high degree of dependence on tutors" and, by implication, that they are less self-directed learners (Ottewill, 2003, p. 189).

Our results do indeed show that BS students have a stronger preference for staff who can explain things well when compared with NBS students. By contrast, while Koris et al. (2016) argue that BS students also "value and identify with intellectual curiosity, critical thinking and introspection" (p. 174), intellectual stimulation appears to be considerably less important to BS students than it is to NBS students. Our finding here is in line with Hearn's (1985) early empirical analysis of field differences. He compared satisfaction drivers across six categories of majors and found significant differences between fields. Specifically, in the general category of what he termed "enterprising" majors, which included business and management studies, "course stimulation" was a weaker determinant than in other fields. These findings seem in keeping with a stronger instrumental profile in BS students.5 Interestingly, we also found BS students placed a considerably larger emphasis on "fair" assessments and marking (NSS Question 6).6 It has been suggested that instrumental learners have "an unhealthy preoccupation with summative assessment" (Ottewill, 2003, p. 189). There may be some validity in this viewpoint, as our results show striking differences between the BS and NBS groups in this regard. Whereas fair assessments are considered important, BS students appeared rather indifferent about the feedback they received and when they received it (although, admittedly, no more so than NBS students).7

Is the preoccupation with summative assessments, or the lesser concern with intellectual stimulation, in BS students illogical or even surprising? In an era in which U.K. student fees have risen inexorably, some may consider it understandable for instrumental learners in the United Kingdom to exhibit the types of preferences we have identified here. Interestingly, further longitudinal analysis of the U.K. NSS data (not reported here) shows that the coefficient on the "fair grades and marking" variable (Question 6) for BS students increased considerably between 2005 and 2015. Using a similar methodology to that used for our BS and NBS comparisons (composite dummy variables to test differences in coefficient values between the two periods), we found a large and statistically significant difference between the coefficients in the two periods. The importance placed on fair assessments by U.K. BS students has therefore been growing. Given the rapid increase in student fees, is it surprising that students have become much more concerned about the outcomes of their increasingly expensive personal investments in their university courses?
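
A minimal sketch of the composite-dummy approach described here might look as follows. The column names (overall for reported course satisfaction, q6 for the fairness item, period as a 0/1 indicator for the later survey years) and the file name are hypothetical; the authors' actual specification is not reproduced in the article.

```python
# Sketch of testing for a difference in a coefficient across two periods
# (or, equally, between BS and NBS groups) via an interaction (composite
# dummy) term. File and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nss_course_data.csv")

# The formula expands to q6 + period + q6:period. The coefficient on
# 'q6:period' estimates how the effect of fair assessment and marking (Q6)
# on overall satisfaction differs between the two periods; its t-test is
# the significance test referred to in the text.
model = smf.ols("overall ~ q6 * period", data=df).fit()
print(model.summary())
```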

Our results may seem unsurprising to some, particularly those who have long commented on the prevalence of instrumental learning in BSs (MacFarlane, 2015; Ottewill, 2003; Ottewill & MacFarlane, 2003). They also resonate with studies in the management learning literature that have identified self-interested behaviors as being more prevalent among BS students (Podolny, 2009; Wang et al., 2012). Nonetheless, evidencing the strong tendency toward instrumentality among BS students at the U.K. national level, as we do here, may give pause for further reflection and possibly spur discussion of the phenomenon. Several implications follow.

Implications for Policy Makers, Management Educators, and Business School Administrators

Delivering higher levels of student satisfaction, as measured by the NSS, has become an increasingly important driver of education delivery in U.K. higher education. This reflects increased competition and the elevation of student satisfaction as a key element of brand development (Corduas et al., 2016). Our results imply, however, that teaching styles which support instrumental learning approaches are more strongly rewarded in the BS context. This is concerning, as much pedagogic research decries instrumental learning as inherently undesirable (Dyer & Hurd, 2016; Ottewill, 2003; Ottewill & MacFarlane, 2003). Some have argued that it "strikes at the very heart of what has traditionally been regarded as the primary rationale of higher education" (Ottewill, 2003, p. 195). Yet university administrators and managers, responding to market forces and university funders, now place increasing value on attaining ever higher levels of student satisfaction (MacFarlane, 2015). University league tables afford student satisfaction a prominent role in their ranking systems, and pressures to improve satisfaction scores and rankings are transmitted daily to staff working in U.K. BSs. Our findings, however, suggest that careful consideration should be given to the impact of using overall student satisfaction as a measure of teaching quality. It is possible that such metrics, through market-driven evolutionary processes, may lead to the growing predominance of teaching approaches that support instrumental learning at the expense of what have traditionally been regarded as more desirable alternatives, those involving deeper engagement and learning.

As well as the tendency toward instrumental learning, it is of concern that practices conventionally considered central to learning often register as only weak drivers of student satisfaction. High-quality assessment and feedback procedures, for example, are widely considered vital for learning to take place. Yet our findings suggest it is mainly the fairness of assessments that students care about. Is it possible that the increased marketization of higher education, with its growing focus on student satisfaction, may progressively weaken assessment and feedback procedures in BS courses? Will BSs that maintain a commitment to high-quality assessment and feedback practices gradually slip down the satisfaction rankings, as competitors focus their resources on areas with stronger positive impacts on overall satisfaction (such as assessment fairness)? Government policy makers, like those in the United Kingdom, need to consider these possibilities carefully. Educators and administrators in U.K. BSs, moreover, as guardians of the higher education system, must also confront this possibility. In the final analysis, it may be that elevating students to the status of consumers of higher education is not always beneficial for their learning.

Conclusion

Our results raise some interesting and challenging questions regarding the growing reliance on student satisfaction measures as indicators of teaching quality in the United Kingdom. Do ranking systems and league tables based on student satisfaction encourage BSs to teach in ways that support instrumental learning? And might they, over the longer term, undermine the quality of assessment and feedback practices employed in BSs? Given the elevation of student satisfaction as a driver of higher education delivery, more research is clearly needed to establish exactly what drives student satisfaction in BSs. Are these drivers of student satisfaction antithetical to, or incompatible with, student learning? Our novel attempt to explore satisfaction determinants using the U.K. NSS and its 1.6 million responses suggests some of them may be. Indeed, our results lend support to those who warn of the McDonaldization of the university (Parker & Jary, 1995), in which course standardization, driven by a desire to provide what the customer–student (apparently) wants, is privileged over more traditional academic values.

Limitations and Future Research

There are rich potential opportunities to further exploit the U.K. NSS data. This work, for example, could involve more detailed comparative analyses across specific subject areas. We used JACs Level 2 subject categories, contrasting BS with the very broad NBS category. Future research might use a more specific range of subject categories: either ones likely to be similar to BS because their students plausibly share instrumental motivation (e.g., law), or ones contrasting with BS because instrumental motivation seems less likely (e.g., philosophy). Doing so would give a better idea of the factors that shape the field differences we observed. We also have limited demographic data, as we use aggregated responses. BS students, as a population, may of course differ from NBS students (e.g., in terms of sex, age, or nationality). While this does not necessarily matter for our key questions (differences between the NBS and BS groups), it may be relevant in future studies. Future research could also examine how drivers have evolved over time; earlier survey results could be used to explore, for example, how the introduction of student fees influenced the drivers of satisfaction. International comparisons, moreover, are needed. Do students in the United States or other European countries exhibit similar differences in drivers of satisfaction? These are just some of the many areas requiring additional research.

Ideally, future empirical modelling will also employ ordered logit modelling using individual-level response data. Some may consider our empirical approach to modelling the NSS Likert data a limitation. The practice we use, however, is common elsewhere and, as we have shown, there are strong theoretical and practical arguments supporting it (Norman, 2010). We refer those still unconvinced to this literature. It should also be kept in mind that the empirical research on student satisfaction drivers in BSs that we identified is based on a cumulative total of around 1,000 student questionnaires (see Method section). The findings from our sample, around 250 times larger, mark a considerable step forward in trying to better understand the learning preferences of BS students and the possible implications.
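
As a pointer to what such individual-level modelling could look like, the following is a minimal sketch using an ordered logit. The response and item column names, the file name, and the item subset are hypothetical, not details of the NSS data set.

```python
# Sketch of an ordered logit on individual-level Likert responses.
# File and column names are illustrative assumptions.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("nss_individual_responses.csv")   # one row per respondent

# Overall satisfaction as an ordered 1-5 categorical outcome.
df["overall"] = df["overall"].astype(
    pd.CategoricalDtype(categories=[1, 2, 3, 4, 5], ordered=True))

predictors = df[["q2", "q3", "q6", "q9"]]          # illustrative item subset

model = OrderedModel(df["overall"], predictors, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```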

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publi- cation of this article.

Notes

1. We did not include part-time students as the available sample of respondents is considerably smaller.

2. JACs (the Joint Academic Coding System) is used by the U.K. Higher Education Statistics Agency and the Universities and Colleges Admissions Service (UCAS) to categorize academic subjects.

3. This provides further justification for using the average of student responses at the course level, an approach which weights each course equally, regardless of size. Course size does not appear to be an important driver of satisfaction (a result we have also found at the level of individual modules in other research).

4. As Thomas and Galambos (2004) put it: "teaching and learning appear to have more effect on students' general satisfaction than the campus services and amenities on which uncritical consumerism might focus attention" (p. 263).


5. Since Hearn (1985), unfortunately, there has been limited research on field differences (Broder & Dorfman, 1994). For example, no similar comparative empirical studies of the determinants of satisfaction in BS and NBS subjects exist, despite there being a number of studies on BS subjects alone.

6. This is somewhat ironic given that evidence suggests they are also much more likely to cheat (McCabe & Butterfield, 2006).

7. This is rather surprising from a pedagogical point of view, as one might expect feedback to be central to learning processes. Indeed, the findings of significant negative coefficients on Questions 5 (“The criteria used in marking have been clear in advance”), 7 (“Feedback on my work has been prompt”), and 9 (“Feedback on my work has helped me clarify things I did not understand”) may raise eyebrows.

References

Arieli, S., Sagiv, L., & Cohen-Shalem, E. (2015). Values in business schools: The role of self-selection and socialization. Academy of Management Learning & Education, 15, 493-507.

Athiyaman, A. (1997). Linking student satisfaction and service quality perceptions: The case of university education. European Journal of Marketing, 31, 528-540. doi:10.1108/03090569710176655

Bennett, R. (2003). Determinants of undergraduate student drop out rates in a univer- sity business studies department. Journal of Further and Higher Education, 27, 123-141. doi:10.1080/030987703200065154

Broder, J. M., & Dorfman, J. H. (1994). Determinants of teaching quality: What's important to students? Research in Higher Education, 35, 235-249. doi:10.1007/BF02496703

Buckley, A., Soilemetzidis, I., & Hillman, N. (2015). The 2015 Student Academic Experience Survey. Retrieved from http://www.hepi.ac.uk/wp-content/uploads/2015/06/AS-PRINTED-HEA_HEPI_report_print4.pdf

Corduas, M., Piscitelli, A., Lantz, B., Dennis, C., Papagiannidis, S., Alamanos, E., & Bourlakis, M. (2016). The role of brand attachment strength in higher education. Journal of Business Research, 11(8), 1-12. doi:10.1016/j.jbusres.2016.01.020

DeShields, O. W., Kara, A., & Kaynak, E. (2005). Determinants of business student satisfaction and retention in higher education: Applying Herzberg's two-factor theory. International Journal of Educational Management, 19, 128-139. doi:10.1108/09513540510582426

Dodeen, H. (2016). Student evaluations of instructors in higher education: A struc- tural equation modeling. Research in Higher Education Journal, 31, 1-15.

Douglas, J. A., Douglas, A., McClelland, R. J., & Davies, J. (2014). Understanding student satisfaction and dissatisfaction: An interpretive study in the UK higher education context. Studies in Higher Education, 40, 329-349. doi:10.1080/03075079.2013.842217

Dyer, S. L., & Hurd, F. (2016). "What's going on?" Developing reflexivity in the management classroom: From surface to deep learning and everything in between. Academy of Management Learning & Education, 15, 287-303. doi:10.5465/amle.2014.0104


Entwistle, N., & Tait, H. (1990). Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education, 19(2), 169-194.

Filak, V. F., & Sheldon, K. M. (2003). Student psychological need satisfaction and college teacher-course evaluations. Educational Psychology, 23, 235-247. doi:10.1080/0144341032000060084

Gibson, A. (2010). Measuring business student satisfaction: A review and summary of the major predictors. Journal of Higher Education Policy and Management, 32, 251-259. doi:10.1080/13600801003743349

Green, H. J., Hood, M., & Neumann, D. L. (2015). Predictors of student satisfaction with university psychology courses: A review. Psychology Learning & Teaching, 14, 131-146. doi:10.1177/1475725715590959

Hearn, J. C. (1985). Determinants of college students' overall evaluations of their academic programs. Research in Higher Education, 23, 413-437. doi:10.1007/BF00973688

Hill, Y., Lomas, L., & MacGregor, J. (2003). Students' perceptions of quality in higher education. Quality Assurance in Education, 11, 15-20. doi:10.1108/09684880310462047

Jamieson, S. (2004). Likert scales: How to (ab)use them. Medical Education, 38, 1212-1218.

Kandiko, C., & Mawer, M. (2014). Student expectations and perceptions of higher education: A study of UK higher education. King's College London/QAA. Retrieved from https://www.kcl.ac.uk/study/learningteaching/kli/People/Research/DL/QAAReport.pdf

Koris, R., Ortenblad, A., & Ojala, T. (2016). From maintaining the status quo to promoting free thinking and inquiry: Business students' perspective on the purpose of business school teaching. Management Learning, 48, 174-186. doi:10.1177/1350507616668480

Krahn, H., & Bowlby, J. W. (1997). Good teaching and satisfied university graduates. Canadian Journal of Higher Education, 27, 157-179.

Lenton, P. (2015). Determining student satisfaction: An economic analysis of the National Student Survey. Economics of Education Review, 47, 118-127. doi:10.1016/j.econedurev.2015.05.001

Letcher, D., & Neves, J. (2010). Determinants of undergraduate business student satisfaction. Research in Higher Education Journal, 6, 1-26.

Lucas, U., & Myer, J. (2005). "Towards a mapping of the student world": The identification of variations in students' conceptions of, and motivations to learn, introductory accounting. The British Accounting Review, 37, 177-204.

MacFarlane, B. (2015). Student performativity in higher education: Converting learning as a private space into a public performance. Higher Education Research & Development, 34, 338-350. doi:10.1080/07294360.2014.956697

Malik, M. E., Danish, R. Q., & Usman, A. (2010). The impact of service quality on students’ satisfaction in higher education institutes of Punjab. Journal of Management Research, 2, 1-11. doi:10.5296/jmr.v2i2.418

Marton, F., & Saljo, R. (1976). On qualitative differences in learning. British Journal of Educational Psychology, 46, 4-11.


McCabe, D. L., & Butterfield, K. D. (2006). Academic dishonesty in graduate business programs: Prevalence, causes, and proposed action. Academy of Management Learning & Education, 5, 294-305. doi:10.5465/AMLE.2006.22697018

Nadiri, H., Kandampully, J., & Hussain, K. (2009). Students’ perceptions of service quality in higher education. Total Quality Management & Business Excellence, 20, 523-535. doi:10.1080/14783360902863713

Neumann, Y., & Neumann, L. (1981). Determinants of students' satisfaction with course work: An international comparison between two universities. Research in Higher Education, 14, 321-333. doi:10.1007/BF00976682

Neves, J., & Hillman, N. (2016). The 2016 Student Academic Experience Survey. York: Higher Education Academy. Retrieved from https://www.heacademy.ac.uk/system/files/student_academic_experience_survey_2016_hea-hepi_final_version_07_june_16_ws.pdf

Norman, G. (2010). Likert scales, levels of measurement and the "laws" of statistics. Advances in Health Sciences Education, 15, 625-632. doi:10.1007/s10459-010-9222-y

O’Donovan, B., Rust, C., & Price, M. (2016). A scholarly approach to solving the feedback dilemma in practice. Assessment & Evaluation in Higher Education, 41, 938-949. doi:10.1080/02602938.2015.1052774

Ottewill, R. M. (2003). What's wrong with instrumental learning? The case of business and management. Education + Training, 45, 189-196. doi:10.1108/00400910310478111

Ottewill, R. M., & MacFarlane, B. J. (2003). Pedagogic challenges facing business and management educators: Assessing the evidence. International Journal of Management Education, 3(3), 33-41.

Parker, M., & Jary, D. (1995). The McUniversity: Organization, management and academic subjectivity. Organization, 2, 319-338.

Podolny, J. M. (2009, June). The buck stops (and starts) at business school. Harvard Business Review. Retrieved from https://hbr.org/2009/06/the-buck-stops-and-starts-at-business-school

Popp, N., Weight, E. A., Dwyer, B., Morse, A. L., & Baker, A. (2015). Assessing student satisfaction within sport management master’s degree programs. Sport Management Education Journal, 9(1), 25-38.

Prosser, M., & Trigwell, K. (1999). Understanding learning and teaching: The expe- rience in higher education. Buckingham, England: Open University Press.

Rienties, B., Li, N., & Marsh, V. (2015). Modelling and managing student satisfaction: Use of student feedback to enhance learning experience (Subscriber Research Series). Retrieved from https://pdfs.semanticscholar.org/49b8/a37b6090d179b31b93cbd5bc9fdce5c93e5d.pdf

Rynes, S. L., Lawson, A. M., Ilies, R., & Trank, C. Q. (2003). Behavioral coursework in business education: Growing evidence of a legitimacy crisis. Academy of Management Learning & Education, 2, 269-283.

Serenko, A. (2011). Student satisfaction with Canadian music programmes: The application of the American customer satisfaction model in higher education. Assessment & Evaluation in Higher Education, 36, 281-299. doi:10.1080/02602930903337612

Shurden, M., Santandreu, J., & Shurden, S. (2016). An application of partial least squares path analysis to student satisfaction. Academy of Educational Leadership Journal, 20(2), 51-62.

Tessema, M. T., Ready, K., & Yu, W. W. (2012). Factors affecting college students' satisfaction with major curriculum: Evidence from nine years of data. International Journal of Humanities and Social Science, 2(2), 1-44.

Thomas, E. H., & Galambos, N. (2004). What satisfies students? Mining student-opinion data with regression and decision tree analysis. Research in Higher Education, 45, 251-269.

Wang, L., Malhotra, D., & Murnighan, J. K. (2012). Economics education and greed. Academy of Management Learning & Education, 10, 643-660. doi:10.5465/amle.2009.0185

Weaver, M. R. (2006). Do students value feedback? Student perceptions of tutors’ written responses. Assessment & Evaluation in Higher Education, 31, 379-394. doi:10.1080/02602930500353061

Williams, J., & Mindano, G. (2015). Subscriber Research Series, 2015-16: The role of student satisfaction data in quality assurance and enhancement: How provid- ers use data to improve the student experience. Gloucester: Quality Assurance Agency for Higher Education.
