8/6/2019 MB0034 Set1&2
MB0034 - RESEARCH METHODOLOGY
Q 1. Give examples of specific situations that would call for the following types of research, explaining why: a) Exploratory research b) Descriptive research c) Diagnostic research d) Evaluation research.
Ans.: Research may be classified crudely according to its major intent or the methods used. According to intent, research may be classified as follows.
Basic (aka fundamental or pure) research is driven by a scientist's curiosity or interest in a scientific question. The main motivation is to expand man's knowledge, not to create or invent something. There is no obvious commercial value to the discoveries that result from basic research.
For example, basic science investigations probe for answers to questions such as: How did the universe begin?
What are protons, neutrons, and electrons composed of?
How do slime molds reproduce?
What is the specific genetic code of the fruit fly?
Most scientists believe that a basic, fundamental understanding of all branches of science is needed in order for progress to take place. In other words, basic research lays down the foundation for the applied science that follows. If basic work is done first, then applied spin-offs often eventually result from this research. As Dr. George Smoot of LBNL says, "People cannot foresee the future well enough to predict what's going to develop from basic research. If we only did applied research, we would still be making better spears."
Applied research is designed to solve practical problems of the modern world, rather than to
acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to improve the human condition.
For example, applied researchers may investigate ways to: Improve agricultural crop production
Treat or cure a specific disease
Improve the energy efficiency of homes, offices, or modes of transportation
Some scientists feel that the time has come for a shift in emphasis away from purely basic research and toward applied science. This trend, they feel, is necessitated by the problems resulting from global overpopulation, pollution, and the overuse of the earth's natural resources.
Exploratory research provides insights into and comprehension of an issue or situation. It should draw definitive conclusions only with extreme caution. Exploratory research is a type of research conducted because a problem has not been clearly defined. It helps determine the best research design, data collection method and selection of subjects. Given its fundamental nature, exploratory research often concludes that a perceived problem does not actually exist.
Exploratory research often relies on secondary research such as reviewing available literature and/or data, or qualitative approaches such as informal discussions with consumers, employees, management or competitors, and more formal approaches through in-depth interviews, focus
groups, projective methods, case studies or pilot studies. The Internet allows for research methods that are more interactive in nature: e.g., RSS feeds efficiently supply researchers with up-to-date information; major search engine results may be sent by email to researchers by services such as Google Alerts; comprehensive search results are tracked over lengthy periods of time by services such as Google Trends; and Web sites may be created to attract worldwide feedback on any subject.
The results of exploratory research are not usually useful for decision-making by themselves, but they can provide significant insight into a given situation. Although the results of qualitative research can give some indication as to the "why", "how" and "when" something occurs, it cannot tell us "how often" or "how many." Exploratory research is not typically generalizable to the population at large.
A defining characteristic of causal research is the random assignment of participants to the conditions of the experiment, e.g., an experimental and a control condition. Such assignment results in the groups being comparable at the beginning of the experiment, so any difference between the groups at the end of the experiment is attributable to the manipulated variable.
Observational research typically looks for differences among "intact" predefined groups. A common example compares smokers and non-smokers with regard to health problems. Causal conclusions can't be drawn from such a study because of other possible differences between the groups; e.g., smokers may drink more alcohol than non-smokers. Other unknown differences could exist as well. Hence, we may see a relation between smoking and health, but a conclusion
that smoking is a cause would not be warranted in this situation.
Descriptive research, also known as statistical research, describes data and characteristics about the population or phenomenon being studied. Descriptive research answers the questions who, what, where, when and how. Although the data description is factual, accurate and systematic, the research cannot describe what caused a situation. Thus, descriptive research cannot be used to establish a causal relationship, where one variable affects another. In other words, descriptive research can be said to have a low requirement for internal validity.
The description is used for frequencies, averages and other statistical calculations. Often the best approach, prior to writing descriptive research, is to conduct a survey investigation. Qualitative research often has the aim of description, and researchers may follow up with examinations of why the observations exist and what the implications of the findings are. In short, descriptive research deals with everything that can be counted and studied. But there are
always restrictions to that. Your research must have an impact on the lives of the people around you. For example, finding the most frequent disease that affects the children of a town: the reader of the research will know what to do to prevent that disease; thus, more people will live a healthy life.
Diagnostic study: It is similar to a descriptive study but with a different focus. It is directed towards discovering what is happening and what can be done about it. It aims at identifying the causes of a problem and the possible solutions for it. It may also be concerned with discovering and testing whether certain variables are associated. This type of research requires prior knowledge of the problem, its thorough formulation, clear-cut definition of the given population, adequate methods for collecting accurate information, precise measurement of variables, statistical analysis and tests of significance.
Evaluation Studies: This is a type of applied research. It is made for assessing the effectiveness of social or economic programmes implemented, or for assessing the impact of development of the
project area. It is thus directed to assess or appraise the quality and quantity of an activity and its performance, and to specify its attributes and conditions required for its success. It is concerned with causal relationships and is more actively guided by hypotheses. It is also concerned with change over time.
Action research is a reflective process of progressive problem solving led by individuals working with others in teams or as part of a "community of practice" to improve the way they address issues and solve problems. Action research can also be undertaken by larger organizations or institutions, assisted or guided by professional researchers, with the aim of improving their strategies, practices, and knowledge of the environments within which they practice. As designers and stakeholders, researchers work with others to propose a new course of action to help their
community improve its work practices (Center for Collaborative Action Research). Kurt Lewin, then a professor at MIT, first coined the term action research in about 1944, and it appears in his 1946 paper Action Research and Minority Problems. In that paper, he described action research as a comparative research on the conditions and effects of various forms of social action, and research leading to social action, that uses a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about the result of the action.
Action research is an interactive inquiry process that balances problem-solving actions implemented in a collaborative context with data-driven collaborative analysis or research to understand underlying causes, enabling future predictions about personal and organizational change (Reason & Bradbury, 2001). After six decades of action research development, many methodologies have evolved that adjust the balance to focus more on the actions taken or more on the research that results from the reflective understanding of the actions. This tension exists between:
- those that are more driven by the researcher's agenda and those more driven by participants;
- those that are motivated primarily by instrumental goal attainment and those motivated primarily by the aim of personal, organizational, or societal transformation; and
- 1st-, 2nd-, and 3rd-person research, that is, my research on my own action, aimed primarily at personal change; our research on our group (family/team), aimed primarily at improving the group; and scholarly research aimed primarily at theoretical generalization and/or large-scale change.
Action research challenges traditional social science by moving beyond reflective knowledge created by outside experts sampling variables, to an active moment-to-moment theorizing, data collecting, and inquiring occurring in the midst of emergent structure. Knowledge is always gained through action and for action. From this starting point, to question the validity of social knowledge is to question not how to develop a reflective science about action, but how to develop genuinely well-informed action, that is, how to conduct an action science (Torbert 2001).
Q 2. In the context of hypothesis testing, briefly explain the difference between a) Null and alternative hypothesis b) Type 1 and type 2 error c) Two-tailed and one-tailed test d) Parametric and non-parametric tests.
Ans.: Some basic concepts in the context of testing of hypotheses are explained below.
1) Null Hypothesis and Alternative Hypothesis: In the context of statistical analysis, we often talk about null and alternative hypotheses. If we are to compare the superiority of method A with that of method B and we proceed on the assumption that both methods are equally good, then this assumption is termed a null hypothesis. On the other hand, if we think that method A is superior, then it is known as an alternative hypothesis.
These are symbolically represented as: null hypothesis = H0 and alternative hypothesis = Ha.
Suppose we want to test the hypothesis that the population mean μ is equal to the hypothesized mean μH0 = 100. Then we would say that the null hypothesis is that the population mean is equal to the hypothesized mean 100, and symbolically we can express it as H0: μ = μH0 = 100.
If our sample results do not support this null hypothesis, we should conclude that something else is true. What we conclude on rejecting the null hypothesis is known as the alternative hypothesis. If we accept H0, then we are rejecting Ha, and if we reject H0, then we are accepting Ha. For H0: μ = μH0 = 100, we may consider three possible alternative hypotheses as follows:
To be read as follows:
Ha: μ ≠ μH0 (the alternative hypothesis is that the population mean is not equal to 100, i.e., it may be more or less than 100)
Ha: μ > μH0 (the alternative hypothesis is that the population mean is greater than 100)
Ha: μ < μH0 (the alternative hypothesis is that the population mean is less than 100)
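These hypotheses can be illustrated with a small sketch, assuming a simple one-sample z-test with a known population standard deviation; the sample values and sigma below are illustrative assumptions, not figures from the text:

```python
import math

mu_h0 = 100.0   # hypothesized population mean (H0: mu = 100)

# Illustrative sample and population standard deviation (both assumptions).
sample = [102.1, 98.4, 101.7, 103.2, 99.9, 104.5, 100.8, 102.6]
sigma = 3.0

n = len(sample)
mean = sum(sample) / n
z = (mean - mu_h0) / (sigma / math.sqrt(n))   # test statistic for H0: mu = mu_h0
print(f"sample mean = {mean:.2f}, z = {z:.2f}")
```

Whether this z leads to rejecting H0 depends on which of the three alternatives was chosen and on the level of significance.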
The null hypothesis and the alternative hypothesis are chosen before the sample is drawn (the researcher must avoid the error of deriving hypotheses from the data he collects and testing the hypotheses from the same data). In the choice of the null hypothesis, the following considerations are usually kept in view:
a. The alternative hypothesis is usually the one which is to be proved, and the null hypothesis is the one which is to be disproved. Thus a null hypothesis represents the hypothesis we are trying to reject, while the alternative hypothesis represents all other possibilities.
b. If the rejection of a certain hypothesis when it is actually true involves great risk, it is taken as the null hypothesis, because then the probability of rejecting it when it is true is α (the level of significance), which is chosen to be very small.
c. The null hypothesis should always be a specific hypothesis, i.e., it should not state an approximate value.
Generally, in hypothesis testing, we proceed on the basis of the null hypothesis, keeping the alternative hypothesis in view. Why so? The answer is that on the assumption that the null hypothesis is true, one can assign probabilities to different possible sample results, but this cannot be done if we proceed with the alternative hypothesis. Hence the use of null hypotheses (at times also known as statistical hypotheses) is quite frequent.
2) The Level of Significance: This is a very important concept in the context of hypothesis testing. It is always some percentage (usually 5%), which should be chosen with great care, thought and reason. If we take the significance level at 5%, this implies that H0 will be rejected when the sampling result (i.e., observed evidence) has a less than 0.05 probability of occurring if H0 is true. In other words, the 5% level of significance means that the researcher is willing to take as much as a 5% risk of rejecting the null hypothesis when it (H0) happens to be true. Thus the significance level is the maximum value of the probability of rejecting H0 when it is true, and it is usually determined in advance, before testing the hypothesis.
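This decision criterion can be sketched by comparing a two-tailed p-value, computed from a standard normal statistic via the complementary error function, against the 5% level; the z values below are illustrative:

```python
import math

def two_tailed_p(z):
    # Two-tailed p-value under the standard normal: P(|Z| >= |z|) = erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2))

alpha = 0.05   # the 5% level of significance
for z in (1.50, 1.96, 2.60):   # illustrative test statistics
    p = two_tailed_p(z)
    decision = "reject H0" if p < alpha else "do not reject H0"
    print(f"z = {z:.2f}  p = {p:.4f}  -> {decision}")
```

The familiar boundary z = 1.96 gives a p-value of almost exactly 0.05, which is why it marks the two-tailed 5% cut-off.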
3) Decision Rule or Test of Hypotheses: Given a hypothesis H0 and an alternative hypothesis Ha, we make a rule, known as a decision rule, according to which we accept H0 (i.e., reject Ha) or reject H0 (i.e., accept Ha). For instance, if H0 is that a certain lot is good (there are very few defective items in it), against Ha, that the lot is not good (there are many defective items in it), then we must decide the number of items to be tested and the criterion for accepting or rejecting the hypothesis. We might test 10 items in the lot and plan our decision saying that if there are none or only 1 defective item among the 10, we will accept H0; otherwise we will reject H0 (or accept Ha). This sort of basis is known as a decision rule.
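The accept/reject rule above can be quantified with the binomial distribution; the defect rates chosen below are illustrative assumptions:

```python
from math import comb

def accept_prob(p_defective, n=10, max_defects=1):
    # P(accept lot) under the rule: accept if at most max_defects defectives
    # appear among n sampled items (a binomial tail sum).
    return sum(comb(n, k) * p_defective ** k * (1 - p_defective) ** (n - k)
               for k in range(max_defects + 1))

print(f"P(accept | 2% defective lot)  = {accept_prob(0.02):.3f}")
print(f"P(accept | 30% defective lot) = {accept_prob(0.30):.3f}")
```

A genuinely good lot is almost always accepted, while a poor lot is usually rejected; tightening the rule (fewer allowed defectives) shifts both probabilities.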
4) Type I & II Errors: In the context of testing of hypotheses, there are basically two types of errors that we can make. We may reject H0 when H0 is true, and we may accept H0 when it is not true. The former is known as a Type I error and the latter as a Type II error. In other words, a Type I error means rejection of a hypothesis
which should have been accepted, and a Type II error means accepting a hypothesis which should have been rejected. Type I error is denoted by α (alpha), also called the level of significance of the test; Type II error is denoted by β (beta).
            Accept H0                  Reject H0
H0 (true)   Correct decision           Type I error (α error)
H0 (false)  Type II error (β error)    Correct decision
The probability of a Type I error is usually determined in advance and is understood as the level of significance of testing the hypothesis. If the Type I error is fixed at 5%, it means there are about 5 chances in 100 that we will reject H0 when H0 is true. We can control the Type I error just by fixing it at a lower level; for instance, if we fix it at 1%, we will say that the maximum probability of committing a Type I error would only be 0.01.
But with a fixed sample size n, when we try to reduce the Type I error, the probability of committing a Type II error increases. Both types of errors cannot be reduced simultaneously, since there is a trade-off in business situations. Decision makers decide the appropriate level of Type I error by examining the costs or penalties attached to both types of errors. If a Type I error involves the time and trouble of reworking a batch of chemicals that should have been accepted, whereas a Type II error means taking a chance that an entire group of users of this chemical compound will be poisoned, then in such a situation one should prefer a Type I error to a Type II error. As a result, one must set a very high level for the Type I error in one's testing technique for the given hypothesis. Hence, in testing of hypotheses, one must make all possible efforts to strike an adequate balance between Type I and Type II errors.
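The trade-off between the two errors at a fixed sample size can be sketched numerically; the true mean, sigma, and n below are illustrative assumptions for a one-tailed test of H0: μ = 100:

```python
from statistics import NormalDist

sigma, n, mu0, mu_true = 3.0, 9, 100.0, 102.0   # illustrative values
se = sigma / n ** 0.5                            # standard error of the sample mean
std = NormalDist()

for alpha in (0.10, 0.05, 0.01):
    critical = mu0 + std.inv_cdf(1 - alpha) * se    # one-tailed rejection boundary
    beta = NormalDist(mu_true, se).cdf(critical)    # P(accept H0 | H0 false)
    print(f"alpha = {alpha:.2f}  ->  beta = {beta:.3f}")
```

As alpha shrinks, beta grows; with n fixed, only a larger sample can reduce both errors at once.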
5) Two-Tailed Test & One-Tailed Test: In the context of hypothesis testing, these two terms are quite important and must be clearly understood. A two-tailed test rejects the null hypothesis if, say, the sample mean is significantly higher or lower than the hypothesized value of the mean of the population. Such a test is appropriate when we have H0: μ = μH0 and Ha: μ ≠ μH0, which may mean μ > μH0 or μ < μH0.
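The practical difference between the two tests can be seen by comparing p-values for the same statistic; the z value below is an illustrative assumption:

```python
from statistics import NormalDist

std = NormalDist()
z = 1.80   # illustrative sample test statistic

p_two = 2 * (1 - std.cdf(abs(z)))   # two-tailed: Ha is mu != mu_h0
p_one = 1 - std.cdf(z)              # one-tailed: Ha is mu > mu_h0
print(f"two-tailed p = {p_two:.4f}, one-tailed p = {p_one:.4f}")
```

At the 5% level, the one-tailed test rejects H0 here while the two-tailed test does not, which is why the choice of tail must be fixed before the data are seen.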
Kernel density estimation provides better estimates of the density than histograms. Nonparametric regression and semiparametric regression methods have been developed based on kernels, splines, and wavelets. Data Envelopment Analysis provides efficiency coefficients similar to those obtained by Multivariate Analysis, without any distributional assumption.
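A minimal sketch of kernel density estimation with a Gaussian kernel follows; the data and bandwidth are illustrative assumptions (real applications would choose the bandwidth from the data):

```python
import math

def gaussian_kde(data, bandwidth):
    # Density estimate: the average of Gaussian bumps centred on each observation.
    norm = len(data) * bandwidth * math.sqrt(2 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2) for xi in data) / norm
    return f

data = [1.2, 1.9, 2.1, 2.4, 3.0, 5.8, 6.1, 6.3]   # illustrative sample
f = gaussian_kde(data, bandwidth=0.5)
print(f"f(2.0) = {f(2.0):.3f}, f(4.0) = {f(4.0):.3f}")
```

Unlike a histogram, the estimate is smooth and does not depend on arbitrary bin edges; here it clearly shows the two clusters in the data.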
Q 3. Explain the difference between a causal relationship and correlation, with an example of each. What are the possible reasons for a correlation between two variables?
Ans.: Correlation: Correlation is knowing what the consumer wants, and providing it. Marketing research looks at trends in sales and studies all of the variables, i.e. price, color, availability, and styles, and the best way to give the customer what he or she wants. If you can give the customer what they want, they will buy, and let friends and family know where they got it. Making them happy makes the money.
Causal relationship: Relationship Marketing was first defined as a form of marketing developed from direct response marketing campaigns, which emphasizes customer retention and satisfaction, rather than a dominant focus on sales transactions.
As a practice, Relationship Marketing differs from other forms of marketing in that it recognizes the long-term value of customer relationships and extends communication beyond intrusive advertising and sales promotional messages.
With the growth of the internet and mobile platforms, Relationship Marketing has continued to evolve and move forward as technology opens more collaborative and social communication channels. This includes tools for managing relationships with customers that go beyond simple demographic and customer service data. Relationship Marketing extends to include Inbound Marketing efforts (a combination of search optimization and strategic content), PR, Social Media and Application Development.
Just like Customer Relationship Management (CRM), Relationship Marketing is a broadly recognized, widely implemented strategy for managing and nurturing a company's interactions with clients and sales prospects. It also involves using technology to organize and synchronize business processes (principally sales and marketing activities) and, most importantly, automate those marketing and communication activities as concrete marketing sequences that can run on autopilot. The overall goals are to find, attract, and win new clients; nurture and retain those the company already has; entice former clients back into the fold; and reduce the costs of marketing and client service. Once simply a label for a category of software tools, today it generally denotes a company-wide business strategy embracing all client-facing departments and even beyond. When an implementation is effective, people, processes, and technology work in synergy to increase profitability and reduce operational costs.
Reasons for a correlation between two variables: Chance association (the relationship is due to chance) or causative association (one variable causes the other).
The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. In the case of elliptic distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence).
Distance correlation and Brownian covariance / Brownian correlation were introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables; zero distance correlation and zero Brownian correlation imply independence.
The correlation ratio is able to detect almost any functional dependency, and the entropy-based mutual information/total correlation is capable of detecting even more general dependencies. The latter are sometimes referred to as multi-moment correlation measures, in comparison to those that consider only second-moment (pairwise or quadratic) dependence.
The polychoric correlation is another correlation coefficient applied to ordinal data that aims to estimate the correlation between theorised latent variables.
One way to capture a more complete view of the dependence structure between two variables is to consider a copula between them.
Q 4. Briefly explain any two factors that affect the choice of a sampling technique. What are the characteristics of a good sample?
Ans.: The difference between non-probability and probability sampling is that non-probability sampling does not involve random selection and probability sampling does. Does that mean that non-probability samples aren't representative of the population? Not necessarily. But it does mean that non-probability samples cannot depend upon the rationale of probability theory. At least with a probabilistic sample, we know the odds or probability that we have represented the population well, and we are able to estimate confidence intervals for the statistic. With non-probability samples, we may or may not represent the population well, and it will often be hard for us to know how well we've done so. In general, researchers prefer probabilistic or random sampling methods over non-probabilistic ones, and consider them to be more accurate and rigorous. However, in applied social research there may be circumstances where it is not feasible, practical or theoretically sensible to do random sampling. Here, we consider a wide range of non-probabilistic alternatives.
We can divide non-probability sampling methods into two broad types: accidental or purposive. Most sampling methods are purposive in nature because we usually approach the sampling problem with a specific plan in mind. The most important distinctions among these types of sampling methods are the ones between the different types of purposive sampling approaches.
Accidental, Haphazard or Convenience Sampling
One of the most common methods of sampling goes under the various titles listed here. I would include in this category the traditional "man on the street" (of course, now it's probably the "person on the street") interviews conducted frequently by television news programs to get a quick (although nonrepresentative) reading of public opinion. I would also argue that the typical use of college students in much psychological research is primarily a matter of convenience. (You don't really believe that psychologists use college students because they believe they're representative of the population at large, do you?) In clinical practice, we might use clients who are available to us as our sample. In many research contexts, we sample simply by asking for volunteers. Clearly, the problem with all of these types of samples is that we have no evidence that they are representative of the populations we're interested in generalizing to -- and in many cases we would clearly suspect that they are not.
In purposive sampling, we sample with a purpose in mind. We usually would have one or more specific predefined groups we are seeking. For instance, have you ever run into people in a mall or on the street who are carrying a clipboard and who are stopping various people and asking if they could interview them? Most likely they are conducting a purposive sample (and most likely they are engaged in market research). They might be looking for Caucasian females between 30 and 40 years old. They size up the people passing by, and anyone who looks to be in that category they stop to ask if they will participate. One of the first things they're likely to do is verify that the respondent does in fact meet the criteria for being in the sample. Purposive sampling can be very useful for situations where you need to reach a targeted sample quickly and where sampling for proportionality is not the primary concern. With a purposive sample, you are likely to get the opinions of your target population, but you are also likely to overweight subgroups in your population that are more readily accessible.
All of the methods that follow can be considered subcategories of purposive sampling methods. We might sample for specific groups or types of people, as in modal instance, expert, or quota sampling. We might sample for diversity, as in heterogeneity sampling. Or, we might capitalize on informal social networks to identify specific respondents who are hard to locate otherwise, as in snowball sampling. In all of these methods we know what we want -- we are sampling with a purpose.
Modal Instance Sampling
In statistics, the mode is the most frequently occurring value in a distribution. In sampling, when we do a modal instance sample, we are sampling the most frequent case, or the "typical" case. In a lot of informal public opinion polls, for instance, they interview a "typical" voter. There are a number of problems with this sampling approach. First, how do we know what the "typical" or "modal" case is? We could say that the modal voter is a person who is of average age, educational level, and income in the population. But it's not clear that using the averages of these is the fairest (consider the skewed distribution of income, for instance). And how do you know that those three variables -- age, education, income -- are the only or even the most relevant ones for classifying the typical voter? What if religion or ethnicity is an important discriminator? Clearly, modal instance sampling is only sensible for informal sampling contexts.
Expert sampling involves the assembling of a sample of persons with known or demonstrable experience and expertise in some area. Often, we convene such a sample under the auspices of a "panel of experts." There are actually two reasons you might do expert sampling. First, because it would be the best way to elicit the views of persons who have specific expertise; in this case, expert sampling is essentially just a specific subcase of purposive sampling. But the other reason you might use expert sampling is to provide evidence for the validity of another sampling approach you've chosen. For instance, let's say you do modal instance sampling and are concerned that the criteria you used for defining the modal instance are subject to criticism. You might convene an expert panel consisting of persons with acknowledged experience and insight into that field or topic and ask them to examine your modal definitions and comment on their appropriateness and validity. The advantage of doing this is that you aren't out on your own trying to defend your decisions -- you have some acknowledged experts to back you. The disadvantage is that even the experts can be, and often are, wrong.
Quota Sampling
In quota sampling, you select people non-randomly according to some fixed quota. There are two types of quota sampling: proportional and non-proportional. In proportional quota sampling you want to represent the major characteristics of the population by sampling a proportional amount of each. For instance, if you know the population has 40% women and 60% men, and that you want a total sample size of 100, you will continue sampling until you get those percentages and then you will stop. So, if you've already got the 40 women for your sample, but not the sixty men, you will continue to sample men, but even if legitimate women respondents come along, you will
not sample them because you have already "met your quota." The problem here (as in much purposive sampling) is that you have to decide the specific characteristics on which you will base the quota. Will it be by gender, age, education, race, religion, etc.?
Non-proportional quota sampling is a bit less restrictive. In this method, you specify the minimum number of sampled units you want in each category. Here, you're not concerned with having numbers that match the proportions in the population. Instead, you simply want to have enough to assure that you will be able to talk about even small groups in the population. This method is the non-probabilistic analogue of stratified random sampling in that it is typically used to assure that smaller groups are adequately represented in your sample.
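Proportional quota sampling as described above can be sketched as a simple filling rule; the 40/60 quotas match the example in the text, while the stream of arriving respondents is an illustrative assumption:

```python
quotas = {"woman": 40, "man": 60}   # 40% women, 60% men in a sample of 100
counts = {"woman": 0, "man": 0}
sample = []

# An illustrative stream of arriving respondents: (gender, respondent id).
stream = [("woman" if i % 2 == 0 else "man", i) for i in range(300)]

for gender, respondent in stream:
    if counts[gender] < quotas[gender]:   # still short in this category?
        counts[gender] += 1
        sample.append(respondent)
    if len(sample) == sum(quotas.values()):
        break   # every quota met; further respondents are turned away

print(counts)
```

Once a category's quota is met, later respondents from that category are skipped, which is exactly the "met your quota" behaviour described above.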
Heterogeneity Sampling
We sample for heterogeneity when we want to include all opinions or views, and we aren't concerned about representing these views proportionately. Another term for this is sampling for diversity. In many brainstorming or nominal group processes (including concept mapping), we would use some form of heterogeneity sampling because our primary interest is in getting a broad spectrum of ideas, not identifying the "average" or "modal instance" ones. In effect, what we would like to be sampling is not people, but ideas. We imagine that there is a universe of all possible ideas relevant to some topic and that we want to sample this population, not the population of people who have the ideas. Clearly, in order to get all of the ideas, and especially the "outlier" or unusual ones, we have to include a broad and diverse range of participants. Heterogeneity sampling is, in this sense, almost the opposite of modal instance sampling.
Snowball Sampling
In snowball sampling, you begin by identifying someone who meets the criteria for inclusion in your study. You then ask them to recommend others whom they may know who also meet the criteria. Although this method would hardly lead to representative samples, there are times when it may be the best method available. Snowball sampling is especially useful when you are trying to reach populations that are inaccessible or hard to find. For instance, if you are studying the homeless, you are not likely to be able to find good lists of homeless people within a specific geographical area. However, if you go to that area and identify one or two, you may find that they know very well who the other homeless people in their vicinity are and how you can find them.
Characteristics of a good sample: The decision process is a complicated one. The researcher has to first identify the limiting factor or factors and must judiciously balance the conflicting
factors. The various criteria governing the choice of the sampling technique are:
1. Purpose of the Survey: What does the researcher aim at? If he intends to
generalize the findings based on the sample survey to the population, then an appropriate probability sampling method must be selected. The choice of a particular type of probability sampling depends on the geographical area of the survey and the size and the nature of the population under study.
2. Measurability: The application of statistical inference theory requires computation of the sampling error from the sample itself. Only probability samples allow such computation. Hence, where the research objective requires statistical inference, the sample should be drawn by applying the simple random sampling method or the stratified random sampling method, depending on whether the population is homogeneous or heterogeneous.
3. Degree of Precision: Should the results of the survey be very precise, or could even rough results serve the purpose? The desired level of precision is one of the criteria for selecting the sampling method. Where a high degree of precision is desired, probability sampling should be used. Where even crude results would serve the purpose (e.g., marketing surveys, readership surveys, etc.), any convenient non-random sampling method, such as quota sampling, would be enough.
4. Information about the Population: How much information is available about the population to be studied? Where no list of the population and no information about its nature are available, it is difficult to apply a probability sampling method. Then an exploratory study with non-probability sampling may be done to gain a better idea of
the population. After gaining sufficient knowledge about the population through the exploratory study, an appropriate probability sampling design may be adopted.
5. The Nature of the Population: In terms of the variables to be studied, is the population homogeneous or heterogeneous? In the case of a homogeneous population, even simple random sampling will give a representative sample. If the population is heterogeneous, stratified random sampling is appropriate.
6. Geographical Area of the Study and the Size of the Population: If the area covered by a survey is very large and the size of the population is quite large, multi-stage cluster sampling would be appropriate. But if the area and the size of the population are small, single-stage probability sampling methods could be used.
7. Financial Resources: If the available finance is limited, it may become necessary to choose a less costly sampling plan such as multi-stage cluster sampling, or even quota sampling as a compromise. However, if the objectives of the study and the desired level of precision cannot be attained within the stipulated budget, there is no alternative but to give up the proposed survey. Where finance is not a constraint, a researcher can choose the most appropriate sampling method for the research objective and the nature of the population.
8. Time Limitation: The time limit within which the research project should be completed restricts the choice of a sampling method. Then, as a compromise, it may become necessary to choose less time-consuming methods such as simple random sampling instead of stratified sampling or sampling with probability proportional to size, or multi-stage cluster sampling instead of single-stage sampling of elements. Of course, precision has to be sacrificed to some extent.
9. Economy: Economy is another criterion for choosing the sampling method. It means achieving the desired level of precision at minimum cost. A sample is economical if the precision per unit cost is high, or the cost per unit of variance is low. The above criteria frequently conflict with each other, and the researcher must balance and blend them to obtain a good sampling plan. The chosen plan thus represents an adaptation of sampling theory to the available facilities and resources; that is, it represents a compromise between idealism and feasibility. One should use simple, workable methods instead of unduly elaborate and complicated techniques.
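The criteria above repeatedly contrast simple random sampling (for homogeneous populations) with stratified random sampling (for heterogeneous ones). A minimal sketch of the two methods follows; the population structure, the `stratum_of` key function, and the proportional-allocation-by-rounding rule are assumptions made for the illustration, not part of the text.

```python
import random

def simple_random_sample(population, n, seed=42):
    """Draw n units from the population, each with equal probability."""
    rng = random.Random(seed)
    return rng.sample(population, n)

def stratified_sample(population, stratum_of, n, seed=42):
    """Proportional stratified sampling: split the population into strata,
    allocate the sample size proportionally to stratum size, then draw a
    simple random sample within each stratum. Rounding may make the total
    differ slightly from n when strata sizes don't divide evenly."""
    rng = random.Random(seed)
    strata = {}
    for unit in population:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for members in strata.values():
        k = round(n * len(members) / len(population))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample
```

With a population that is 60% urban and 40% rural and n = 10, the stratified draw returns 6 urban and 4 rural units, guaranteeing each stratum is represented in proportion, whereas a simple random draw can over- or under-represent either stratum by chance.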
Q 5. Select any topic for research and explain how you will use both secondary and primary sources to gather the required information.
Ans.: Primary Sources of Data
Primary sources are original sources from which the researcher directly collects data that has not been previously collected, e.g., collection of data directly by the researcher on brand awareness, brand preference, brand loyalty and other aspects of consumer behavior from a sample of consumers by interviewing them. Primary data is first-hand information collected through various methods such as surveys, experiments and observation, for the purposes of the project immediately at hand.

The advantages of primary data are:
1. It is unique to a particular research study.
2. It is recent information, unlike published information that is already available.
The disadvantages are:
1. It is expensive to collect, compared to gathering information from available sources.
2. Data collection is a time-consuming process.
3. It requires trained interviewers and investigators.
Secondary Sources of Data
These are sources containing data which has been collected and compiled for another purpose. Secondary sources may be internal sources, such as annual reports, financial statements, sales reports, inventory records, minutes of meetings and other information that is available within the firm, in the form of a marketing information system. They may also be external sources, such as government agencies (e.g. census reports, reports of government departments), published sources (annual reports on currency and finance published by the Reserve Bank of India, publications of international organizations such as the UN, World Bank and International Monetary Fund, trade and financial journals, etc.), trade associations (e.g. Chambers of Commerce) and commercial services (outside suppliers of information).

Methods of Data Collection:
The researcher directly collects primary data from its original sources. In this case, the researcher can collect the required data precisely according to his research needs, and he can collect it when he wants and in the form that he needs it. But the collection of primary data is costly and time consuming. Yet, for several types of social science research, the required data is not available from secondary sources and has to be gathered directly from primary sources.

Primary data has to be gathered in cases where the available data is inappropriate, inadequate or obsolete. It includes: socio-economic surveys, social anthropological studies of rural and tribal communities, sociological studies of social problems and social institutions, marketing research, leadership studies, opinion polls, attitudinal surveys, radio listening and T.V. viewing surveys, knowledge-awareness-practice (KAP) studies, farm management studies, business management studies, etc.

There are various methods of primary data collection, including surveys, audits and panels, observation and experiments.

Survey Research
A survey is a fact-finding study. It is a method of research involving collection of data directly from a population or a sample at a particular time. A survey has certain characteristics:
1. It is always conducted in a natural setting. It is a field study.
2. It seeks responses directly from the respondents.
3. It can cover a very large population.
4. It may include an extensive study or an intensive study.
5. It covers a definite geographical area.
A survey involves the following steps:
1. Selection of a problem and its formulation
2. Preparation of the research design
3. Operationalization of concepts and construction of measuring indexes and scales
4. Sampling
5. Construction of tools for data collection
6. Field work and collection of data
7. Processing of data and tabulation
8. Analysis of data
9. Reporting
There are four basic survey methods:
1. Personal interview
2. Telephone interview
3. Mail survey
4. Fax survey

Personal Interview
Personal interviewing is one of the prominent methods of data collection. It may be defined as a two-way systematic conversation between an investigator and an informant, initiated for obtaining
information relevant to a specific study. It involves not only conversation, but also learning from the respondent's gestures, facial expressions and pauses, and his environment.

Interviewing may be used either as a main method or as a supplementary one in studies of persons. Interviewing is the only suitable method for gathering information from illiterate or less educated respondents. It is useful for collecting a wide range of data, from factual demographic data to highly personal and intimate information relating to a person's opinions, attitudes, values, beliefs, experiences and future intentions. Interviewing is appropriate when qualitative information is required, or when probing is necessary to draw out the respondent fully. Where the area covered by the survey is compact, or when a sufficient number of qualified interviewers are available, personal interviews are feasible.

The interview is often superior to other data-gathering methods. People are usually more willing to talk than to write. Once rapport is established, even confidential information may be obtained. It permits probing into the context and reasons for answers to questions.

The interview can add flesh to statistical information. It enables the investigator to grasp the behavioral context of the data furnished by the respondents. It permits the investigator to seek clarifications, and it brings to the forefront those questions which, for some reason or other, the respondents do not want to answer. Interviewing as a method of data collection has certain characteristics. They are:
1. The participants, the interviewer and the respondent, are strangers; hence, the investigator has to introduce himself/herself to the respondent in an appropriate manner.
2. The relationship between the participants is a transitory one. It has fixed beginning and termination points. The interview proper is a fleeting, momentary experience for them.
3. The interview is not a mere casual conversational exchange, but a conversation with a specific purpose, viz., obtaining information relevant to a study.
4. The interview is a mode of obtaining verbal answers to questions put verbally.
5. The interaction between the interviewer and the respondent need not necessarily be on a face-to-face basis, because the interview can also be conducted over the telephone.
6. Although the interview is usually a conversation between two persons, it need not be limited to a single respondent. It can also be conducted with a group of persons, such as family members, a group of children, or a group of customers, depending on the requirements of the study.
7. The interview is an interactive process. The interaction between the interviewer and the respondent depends upon how they perceive each other.
8. The respondent reacts to the interviewer's appearance, behavior, gestures, facial expression and intonation, his perception of the thrust of the questions, and his own personal needs. As far as possible, the interviewer should try to be close to the socio-economic level of the respondents.
9. The investigator records the information furnished by the respondent in the interview. This poses the problem of ensuring that recording does not interfere with the tempo of conversation.
10. Interviewing is not a standardized process like that of a chemical technician; it is rather a flexible, psychological process.
Telephone Interviewing
Telephone interviewing is a non-personal method of data collection. It may be used as a major method or as a supplementary method. It will be useful in the following situations:
1. When the universe is composed of those persons whose names are listed in telephone directories, e.g. business houses, business executives, doctors and other professionals.
2. When the study requires responses to five or six simple questions, e.g. a radio or television program survey.
3. When the survey must be conducted in a very short period of time, provided the units of study are listed in the telephone directory.
4. When the subject is interesting or important to respondents, e.g. a survey relating to trade conducted by a trade association or a chamber of commerce, or a survey relating to a profession conducted by the concerned professional association.
5. When the respondents are widely scattered and when there are many call-backs to make.
Group Interviews
A group interview may be defined as a method of collecting primary data in which a number of individuals with a common interest interact with each other. Unlike in a personal interview, the flow of information is multi-dimensional. The group may consist of about six to eight individuals with a common interest. The interviewer acts as the discussion leader. Free discussion is encouraged on some aspect of the subject under study. The discussion leader stimulates the group members to interact with each other. The desired information may be obtained through a self-administered questionnaire or interview, with the discussion serving as a guide to ensure consideration of the areas of concern. In particular, the interviewers look for evidence of common elements of attitudes, beliefs, intentions and opinions among individuals in the group. At the same time, the interviewer must be aware that a single comment by a member can provide important insight. Samples for group interviews can be obtained through schools, clubs and other organized groups.

Mail Survey
The mail survey is another method of collecting primary data. This method involves sending questionnaires to the respondents with a request to complete and return them by post. This can be used in the case of educated respondents only. The mail questionnaire should be simple so that the respondents can easily understand the questions and answer them. It should preferably contain mostly closed-ended and multiple-choice questions, so that it can be completed within a few minutes. The distinctive feature of the mail survey is that the questionnaire is self-administered by the respondents themselves and the responses are recorded by them, not by the investigator as in the personal interview method. It does not involve face-to-face conversation between the investigator and the respondent. Communication is carried out only in writing, and this requires more cooperation from the respondents than verbal communication. The researcher should prepare a mailing list of the
selected respondents by collecting their addresses from the telephone directory or from the association or organization to which they belong. The following procedures should be followed:
- A covering letter should accompany a copy of the questionnaire. It must explain to the respondent the purpose of the study and the importance of his cooperation to the success of the project.
- Anonymity must be assured. The sponsor's identity may be revealed. However, when such information may bias the result, it is not desirable to reveal it; in this case, a disguised organization name may be used.
- A self-addressed, stamped envelope should be enclosed with the covering letter.
After a few days from the date of mailing the questionnaires, the researcher can expect the return of completed ones. The progress of returns may be watched, and at the appropriate stage, follow-up efforts can be made.
The response rate in mail surveys is generally very low in developing countries like India. Certain techniques have to be adopted to increase the response rate. They are:
1. Quality printing: The questionnaire may be neatly printed on quality, light-colored paper, so as to attract the attention of the respondent.
2. Covering letter: The covering letter should be couched in a pleasant style, so as to attract and hold the interest of the respondent. It must anticipate objections and answer them briefly. It is desirable to address the respondent by name.
3. Advance information: Advance information can be provided to potential respondents by a telephone call, an advance notice in the newsletter of the concerned organization, or by
a letter. Such preliminary contact with potential respondents is more successful than follow-up efforts.
4. Incentives: Money, stamps for collection and other incentives are also used to induce respondents to complete and return the mail questionnaire.
5. Follow-up contacts: In the case of respondents belonging to an organization, they may be approached through someone in that organization known to the researcher.
6. Larger sample size: A larger sample may be drawn than the estimated sample size. For example, if the required sample size is 1,000, a sample of 1,500 may be drawn. This may help the researcher to secure an effective sample size closer to the required size.
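The over-sampling rule in the last point is just the required sample size divided by the expected response rate, rounded up. A minimal sketch (the function name and the example rates are assumptions for illustration; the text itself does not specify a response-rate figure):

```python
import math

def questionnaires_to_mail(required_n, expected_response_rate):
    """How many questionnaires to mail so that, at the expected
    response rate, roughly required_n completed ones come back.
    E.g. mailing 1,500 for a required 1,000 corresponds to an
    expected response rate of about two-thirds."""
    return math.ceil(required_n / expected_response_rate)
```

At an expected response rate of 50%, a required sample of 1,000 calls for mailing 2,000 questionnaires; at 25%, it calls for 4,000.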
Q 6. Case Study: You are engaged to carry out a market survey on behalf of a leading newspaper that is keen to increase its circulation in Bangalore City, in order to ascertain reader habits and interests. Develop a title for the study; define the research problem and the objectives or questions to be answered by the study.
Ans.: Title: Newspaper Reading Choices
Research problem: A research problem is the situation that causes the researcher to feel apprehensive, confused and ill at ease. It is the demarcation of a problem area within a certain context involving the WHO or WHAT, the WHERE, the WHEN and the WHY of the problem situation.
There are many problem situations that may give rise to research. Three sources usually contribute to problem identification. One's own experience or the experience of others may be one source of problems. A second source could be scientific literature: you may read about certain findings and notice that a certain field was not covered, and this could lead to a research problem. Theories could be a third source: shortcomings in theories could be researched.
Research can thus be aimed at clarifying or substantiating an existing theory, at clarifying contradictory findings, at correcting a faulty methodology, at correcting the inadequate or unsuitable use of statistical techniques, at reconciling conflicting opinions, or at solving existing practical problems.
Types of questions to be asked: For more than 35 years, the news about newspapers and young readers has been mostly bad for the newspaper industry. Long before any competition from cable television or Nintendo, American newspaper publishers were worrying about declining readership among the young.
As early as 1960, at least 20 years prior to Music Television (MTV) or the Internet, media research scholars began to focus their studies on young adult readers' decreasing interest in newspaper content. The concern over a declining youth market preceded and perhaps foreshadowed today's fretting over market penetration. Even where circulation has grown or stayed stable, there is rising concern over penetration, defined as the percentage of occupied households in a geographic market that are served by a newspaper. Simply put, population growth is occurring more rapidly than newspaper readership in most communities.
This study looks at trends in newspaper readership among the 18-to-34 age group and examines some of the choices young adults make when reading newspapers.
One of the underlying concerns behind the decline in youth newspaper reading is the question of how young people view the newspaper. A number of studies have explored how young readers evaluate and use newspaper content.
Comparing reader content preferences over a 10-year period, Gerald Stone and Timothy Boudreau found differences between readers ages 18-34 and those 35-plus. Younger readers showed increased interest in national news, weather, sports, and classified advertisements over the decade between 1984 and 1994, while older readers ranked weather, editorials, and food advertisements higher. Interest in international news and letters to the editor was lower among younger readers, while older readers showed less interest in reports of births, obituaries, and marriages.
David Atkin explored the influence of telecommunication technology on newspaper readership among students in undergraduate media courses. He reported that computer-related technologies, including electronic mail and computer networks, were unrelated to newspaper readership. The study found that newspaper subscribers preferred print formats over electronic ones. In a study of younger, school-age children, Brian Brooks and James Kropp found that electronic newspapers could persuade children to become news consumers, but that young readers would choose an electronic newspaper over a printed one.
In an exploration of leisure reading among college students, Leo Jeffres and Atkin assessed dimensions of interest in newspapers, magazines, and books, exploring the influence of media use, non-media leisure, and academic major on newspaper content preferences. The study discovered that overall newspaper readership was positively related to students' focus on entertainment, job/travel information, and public affairs. However, the students' preference for reading as a leisure-time activity was related only to a public affairs focus. Content preferences for newspapers and other print media were related. The researchers found no significant differences in readership among various academic majors, or by gender, though there was a slight correlation between age and the public affairs readership index, with older readers more interested in news about public affairs.
Participants in this study (N=267) were students enrolled in 100- and 200-level English courses at a midwestern public university. The courses that comprise the framework for this sample were selected because they could fulfill basic studies requirements for all majors. A basic studies course is one that is listed within the core curriculum required of all students. The researcher obtained permission from seven professors to distribute questionnaires in eight classes during regularly scheduled class periods. The students' participation was voluntary; two students declined. The goal of this sampling procedure was to reach a cross-section of students representing various fields of study. In all, 53 majors were represented.
Of the 267 students who participated in the study, 65 (24.3 percent) were male and 177 (66.3 percent) were female. A total of 25 participants chose not to divulge their gender. Ages ranged from 17 to 56, with a mean age of 23.6 years. This mean does not include the 32 respondents who declined to give their ages. A total of 157 participants (58.8 percent) said they were Caucasian, 59 (22.1 percent) African American, 10 (3.8 percent) Asian, five (1.9 percent) African/Native American, two (0.8 percent) Hispanic, two (0.8 percent) Native American, and one (0.4 percent) Arabic. Most (214) of the students were enrolled full time, whereas a few (28) were part-time students. The class rank breakdown was: freshmen, 45 (16.9 percent); sophomores, 15 (5.6 percent); juniors, 33 (12.4 percent); seniors, 133 (49.8 percent); and graduate students, 16 (6 percent).
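The percentages above can be checked directly from the raw counts. A quick sketch of that arithmetic (the dictionary keys are paraphrased labels, not from the questionnaire itself):

```python
# Raw gender counts reported in the study (N = 267)
counts = {"male": 65, "female": 177, "undisclosed": 25}
total = sum(counts.values())

# Percentage of all participants in each category, to one decimal place
percentages = {k: round(100 * v / total, 1) for k, v in counts.items()}
# male -> 24.3 and female -> 66.3, consistent with the figures in the text
```

The same calculation reproduces the race breakdown, e.g. 157/267 gives the reported 58.8 percent.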
After two pre-tests and revisions, questionnaires were distributed and collected by the investigator. In each of the eight classes, the researcher introduced herself to the students as a journalism professor who was conducting a study on students' use of newspapers and other media. Each questionnaire included a cover letter with the researcher's name, address, and phone number. The researcher provided pencils and was available to answer questions if anyone needed further assistance. The average time spent on the questionnaires was 20 minutes, with some individual students taking as long as an hour. Approximately six students asked to take the questionnaires home to finish; they returned them to the researcher's mailbox within a couple of days.
MB0034 - RESEARCH METHODOLOGY
Q 1. Discuss the relative advantages and disadvantages of the different methods of distributing questionnaires to the respondents of a study.
Ans.: There are some alternative methods of distributing questionnaires to the respondents. They are:
1) Personal delivery,
2) Attaching the questionnaire to a product,
3) Advertising the questionnaire in a newspaper or magazine, and
4) News-stand inserts.

Personal delivery: The researcher or his assistant may deliver the questionnaires to the potential respondents, with a request to complete them at their convenience. After a day or two, the completed questionnaires can be collected from them. Often referred to as the self-administered questionnaire method, it combines the advantages of the personal interview and the mail survey. Alternatively, the questionnaires may be delivered in person and the respondents may return the completed questionnaires through the mail.

Attaching the questionnaire to a product: A firm test-marketing a product may attach a questionnaire to the product and request the buyer to complete it and mail it back to the firm. A gift or a discount coupon usually rewards the respondent.

Advertising the questionnaire: The questionnaire, with instructions for completion, may be advertised on a page of a magazine or in a section of a newspaper. The potential respondent completes it, tears it out and mails it to the advertiser. For example, the Committee on Banks' Customer Services used this method for collecting information from the customers of commercial banks in India. This method may be useful for large-scale studies on topics of common interest.

Newsstand inserts: This method involves inserting the covering letter, questionnaire and self-addressed reply-paid envelope into a random sample of newsstand copies of a newspaper or magazine.

Advantages and Disadvantages:
The advantages of the panel method are:
- This method facilitates collection of more accurate data for longitudinal studies than any other method, because under this method the event or action is reported soon after its occurrence.
- This method makes it possible to have before-and-after designs for field-based studies. For example, the effect of public relations or advertising campaigns or welfare measures can be measured by collecting data before, during and after the campaign.
- The panel method offers a good way of studying trends in events, behavior or attitudes. For example, a panel enables a market researcher to study how brand preferences change from month to month; it enables an economics researcher to study how the employment, income and expenditure of agricultural laborers change from month to month; and a political scientist can study the shifts in the inclinations of voters, and the causative influential factors, during an election. It is also possible to find out how the constituency of the various economic and social strata of society changes through time, and so on.
- A panel study also provides evidence on the causal relationship between variables. For example, a cross-sectional study of employees may show an association between their attitude to their jobs and their positions in the organization, but it does not indicate which comes first: a favorable attitude or promotion. A panel study can provide data for finding an answer to this question.
- It facilitates depth interviewing, because panel members become well acquainted with the field workers and will be willing to allow probing interviews.
The major limitations or problems of the panel method are:
- This method is very expensive. The selection of panel members, the payment of premiums, the periodic training of investigators and supervisors, and the costs involved in replacing dropouts all add to the expenditure.
- It is often difficult to set up a representative panel and to keep it representative. Many persons may be unwilling to participate in a panel study. In the course of the study, there may be frequent dropouts. Persons with similar characteristics may replace the dropouts; however, there is no guarantee that the emerging panel would be representative.
- A real danger with the panel method is panel conditioning, i.e., the risk that repeated interviews may sensitize the panel members so that they become untypical as a result of being on the panel. For example, the members of a panel study of political opinions may try to appear consistent in the views they express on consecutive occasions. In such cases, the panel becomes untypical of the population it was selected to represent. One possible safeguard against panel conditioning is to give members of a panel only a limited panel life and then to replace them with persons taken randomly from a reserve list.
- The quality of reporting may tend to decline, due to decreasing interest, after a panel has been in operation for some time. Cheating by panel members or investigators may be a problem in some cases.
Q 2. In processing data, what is the difference between measures of central tendency and measures of dispersion? What is the most important measure of central tendency and dispersion?
Ans.: Measures of Central Tendency:
Arithmetic Mean
The arithmetic mean is the most common measure of central tendency. It is simply the sum of the numbers divided by the count of numbers. The symbol μ is used for the mean of a population; the symbol M is used for the mean of a sample. The formula for μ is:

μ = ΣX / N

where ΣX is the sum of all the numbers in the sample and N is the number of numbers in the sample. As an example, the mean of the numbers 1, 2, 3, 6, 8 is (1+2+3+6+8)/5 = 20/5 = 4, regardless of whether the numbers constitute the entire population or just a sample from the population.

The table below, Number of touchdown passes, shows the number of touchdown (TD) passes thrown by each of the 31 teams in the National Football League in the 2000 season. The mean number of touchdown passes thrown is μ = ΣX/N = 634/31 = 20.4516, as shown below:

37 33 33 32 29 28 28 23
22 22 22 21 21 21 20 20
19 19 18 18 18 18 16 15
14 14 14 12 12 9 6
Table 1: Number of touchdown passes
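The mean of Table 1 can be verified directly from the 31 listed values. A short sketch (not the textbook's own code, just a check of its arithmetic):

```python
# The 31 touchdown-pass counts from Table 1
td_passes = [37, 33, 33, 32, 29, 28, 28, 23,
             22, 22, 22, 21, 21, 21, 20, 20,
             19, 19, 18, 18, 18, 18, 16, 15,
             14, 14, 14, 12, 12, 9, 6]

mean = sum(td_passes) / len(td_passes)  # sum(X) / N = 634 / 31 ≈ 20.4516
```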
Although the arithmetic mean is not the only "mean" (there is also a geometric mean), it is by far the most commonly used. Therefore, if the term "mean" is used without specifying whether it is the arithmetic mean, the geometric mean, or some other mean, it is assumed to refer to the arithmetic mean.

Median
The median is also a frequently used measure of central tendency. The median is the midpoint of a distribution: the same number of scores is above the median as below it. For the data in the table Number of touchdown passes, there are 31 scores. The 16th highest score (which equals 20) is the median, because there are 15 scores below it and 15 scores above it. The median can also be thought of as the 50th percentile. Let's return to the made-up example of the quiz on which you scored a 3, discussed previously in the module Introduction to Central Tendency and shown in Table 2.
Student       Dataset 1   Dataset 2   Dataset 3
You           3           3           3
John's        3           4           2
Maria's       3           4           2
Shareecia's   3           4           2
Luther's      3           5           1
Table 2: Three possible datasets for the 5-point make-up quiz
For Dataset 1, the median is 3, the same as your score. For Dataset 2, the median is 4; your score is below the median, which means you are in the lower half of the class. Finally, for Dataset 3, the median is 2; your score is above the median and therefore in the upper half of the distribution.

Computation of the Median: When there is an odd number of numbers, the median is simply the middle number. For example, the median of 2, 4, and 7 is 4. When there is an even number of numbers, the median is the mean of the two middle numbers. Thus, the median of the numbers 2, 4, 7, 12 is (4+7)/2 = 5.5.

Mode
The mode is the most frequently occurring value. For the data in the table Number of touchdown passes, the mode is 18, since more teams (4) had 18 touchdown passes than any other number of touchdown passes. With continuous data, such as response time measured to many decimals, the frequency of each value is one, since no two scores will be exactly the same (see the discussion of continuous variables). Therefore the mode of continuous data is normally computed from a grouped frequency distribution. The table below shows a grouped frequency distribution for the target response time data. Since the interval with the highest frequency is 600-700, the mode is the middle of that interval (650).
Range       Frequency
500-600     3
600-700     6
700-800     5
800-900     5
900-1000    0
1000-1100   1
Table 3: Grouped frequency distribution
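The median and mode rules described above can be sketched in a few lines (a minimal illustration, not the textbook's own code):

```python
from collections import Counter

def median(values):
    """Middle value for an odd count; mean of the two
    middle values for an even count."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def mode(values):
    """The most frequently occurring value."""
    return Counter(values).most_common(1)[0][0]
```

As in the text, median([2, 4, 7]) is 4 and median([2, 4, 7, 12]) is 5.5; applied to the Table 1 touchdown data, the median is 20 (the 16th of 31 scores) and the mode is 18.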
Measures of Dispersion: A measure of statistical dispersion is a real numberthat is zero if allthe data are identical, and increases as the data becomes more diverse. It cannot be less thanzero.
Most measures of dispersion have the same scale as the quantity being measured. In other words, if the measurements have units, such as metres or seconds, the measure of dispersion has the same units. Such measures of dispersion include:
- Standard deviation
- Interquartile range
- Range
- Mean difference
- Median absolute deviation
- Average absolute deviation (also simply called average deviation)
- Distance standard deviation
These are frequently used (together with scale factors) as estimators of scale parameters, in which capacity they are called estimates of scale.
All the above measures of statistical dispersion have the useful property that they are location-invariant as well as linear in scale. So if a random variable X has a dispersion of SX, then a linear transformation Y = aX + b, for real a and b, has dispersion SY = |a| SX.
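The property SY = |a| SX can be checked numerically for one of the measures listed above, the standard deviation. This is an illustrative sketch, not from the original text:

```python
import statistics

x = [2.0, 4.0, 7.0, 12.0]
a, b = -3.0, 10.0
y = [a * xi + b for xi in x]  # linear transformation Y = aX + b

# Shifting by b leaves the dispersion unchanged (location invariance);
# scaling by a multiplies it by |a| (linearity in scale).
sx = statistics.pstdev(x)
sy = statistics.pstdev(y)
assert abs(sy - abs(a) * sx) < 1e-9
```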
Other measures of dispersion are dimensionless (scale-free). In other words, they have no units even if the variable itself has units. These include:
- Coefficient of variation
- Quartile coefficient of dispersion
- Relative mean difference, equal to twice the Gini coefficient
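For example, the coefficient of variation (standard deviation divided by the mean) is dimensionless, so a pure change of units leaves it unchanged. A minimal sketch, not from the original text:

```python
import statistics

def coefficient_of_variation(data):
    """Population standard deviation divided by the mean (dimensionless)."""
    return statistics.pstdev(data) / statistics.mean(data)

metres = [1.2, 1.5, 1.9, 2.4]
centimetres = [100 * m for m in metres]  # the same lengths in different units

# Rescaling the units leaves the coefficient of variation unchanged.
assert abs(coefficient_of_variation(metres)
           - coefficient_of_variation(centimetres)) < 1e-12
```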
There are other measures of dispersion:
Variance (the square of the standard deviation): location-invariant, but not linear in scale.
Variance-to-mean ratio: mostly used for count data, where it is called the coefficient of dispersion. It is dimensionless when the count data are themselves dimensionless; otherwise it is not scale-free.
Some measures of dispersion have specialized purposes, among them the Allan variance and the Hadamard variance.
For categorical variables, it is less common to measure dispersion by a single number; see qualitative variation. One measure that does so is the discrete entropy.
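The discrete (Shannon) entropy mentioned above can serve as a single-number dispersion measure for a categorical variable: it is 0 when every observation falls in one category and grows as the categories become more evenly used. A sketch, assuming natural-log (nats) entropy:

```python
import math
from collections import Counter

def discrete_entropy(values):
    """Shannon entropy (in nats) of the empirical category distribution."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

# No diversity at all: entropy is zero.
assert discrete_entropy(["red", "red", "red"]) == 0.0

# Two equally common categories: entropy is ln(2), the maximum for 2 categories.
assert abs(discrete_entropy(["red", "blue"]) - math.log(2)) < 1e-12
```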
Sources of statistical dispersion
In the physical sciences, such variability may result only from random measurement errors: instrument measurements are often not perfectly precise, i.e., reproducible. One may assume that the quantity being measured is unchanging and stable, and that the variation between measurements is due to observational error.
In the biological sciences, this assumption is false: the variation observed might be intrinsic to the phenomenon, since distinct members of a population differ greatly. This is also seen in the arena of manufactured products; even there, the meticulous scientist finds variation. The simple model of a stable quantity is preferred when it is tenable. Each phenomenon must be examined to see whether it warrants such a simplification.
Q 3. What are the characteristics of a good research design? Explain how the researchdesign for exploratory studies is different from the research design for descriptive anddiagnostic studies.
Ans.: Good research design: Much contemporary social research is devoted to examining whether a program, treatment, or manipulation causes some outcome or result. For example, we might wish to know whether a new educational program causes subsequent achievement score gains, whether a special work-release program for prisoners causes lower recidivism rates, whether a novel drug causes a reduction in symptoms, and so on. Cook and Campbell (1979) argue that three conditions must be met before we can infer that such a cause-effect relation exists:
1. Covariation. Changes in the presumed cause must be related to changes in the presumed effect. Thus, if we introduce, remove, or change the level of a treatment or program, we should observe some change in the outcome measures.
2. Temporal Precedence. The presumed cause must occur prior to the presumed effect.
3. No Plausible Alternative Explanations. The presumed cause must be the only reasonable explanation for changes in the outcome measures. If there are other factors which could be responsible for changes in the outcome measures, we cannot be confident that the presumed cause-effect relationship is correct.
In most social research the third condition is the most difficult to meet. Any number of factors other than the treatment or program could cause changes in outcome measures. Campbell and Stanley (1966) and, later, Cook and Campbell (1979) list a number of common plausible alternative explanations (or threats to internal validity). For example, it may be that some historical event which occurs at the same time that the program or treatment is instituted was responsible for the change in the outcome measures; or changes in record keeping or measurement systems which occur at the same time as the program might be falsely attributed to the program. The reader is referred to standard research methods texts for more detailed discussions of threats to validity.
This paper is primarily heuristic in purpose. Standard social science methodology textbooks (Cook and Campbell, 1979; Judd and Kenny, 1981) typically present an array of research designs and the alternative explanations these designs rule out or minimize. This tends to foster a "cookbook" approach to research design: an emphasis on the selection of an available design rather than on the construction of an appropriate research strategy. While standard designs may sometimes fit real-life situations, it will often be necessary to "tailor" a research design to minimize specific threats to validity. Furthermore, even if standard textbook designs are used, an understanding of the logic of design construction in general will improve the comprehension of these standard approaches. This paper takes a structural approach to research design. While this is by no means the only strategy for constructing research designs, it helps to clarify some of the basic principles of design logic.
Minimizing Threats to Validity
Good research designs minimize the plausible alternative explanations for the hypothesizedcause-effect relationship. But such explanations may be ruled out or minimized in a number of
ways other than by design. The discussion that follows outlines five ways to minimize threats to validity, one of which is by research design:
1. By Argument. The most straightforward way to rule out a potential threat to validity is to simply argue that the threat in question is not a reasonable one. Such an argument may be made either a priori or a posteriori, although the former will usually be more convincing than the latter. For example, depending on the situation, one might argue that an instrumentation threat is not likely because the same test is used for pre- and post-test measurements and did not involve observers who might improve, or other such factors. In most cases, ruling out a potential threat to validity by argument alone will be weaker than the other approaches listed below. As a result, the most plausible threats in a study should not, except in unusual cases, be ruled out by argument only.
2. By Measurement or Observation. In some cases it will be possible to rule out a threat by measuring it and demonstrating that either it does not occur at all or occurs so minimally as to not be a strong alternative explanation for the cause-effect relationship. Consider, for example, a study of the effects of an advertising campaign on subsequent sales of a particular product. In such a study, history (i.e., the occurrence of other events which might lead to an increased desire to purchase the product) would be a plausible alternative explanation. For example, a change in the local economy, the removal of a competing product from the market, or similar events could cause an increase in product sales. One might attempt to minimize such threats by measuring local economic indicators and the availability and sales of competing products. If there is no change in these measures coincident with the onset of the advertising campaign, these threats would be considerably minimized. Similarly, if one is studying the effects of special mathematics training on math achievement scores of children, it might be useful to observe everyday classroom behavior in order to verify that students were not receiving any additional math training beyond that provided in the study.
3. By Design. Here, the major emphasis is on ruling out alternative explanations by adding treatment or control groups, waves of measurement, and the like. This topic will be discussed in more detail below.
4. By Analysis. There are a number of ways to rule out alternative explanations using statistical analysis. One interesting example is provided by Jurs and Glass (1971). They suggest that one could study the plausibility of an attrition or mortality threat by conducting a two-way analysis of variance. One factor in this study would be the original treatment group designations (i.e., program vs. comparison group), while the other factor would be attrition (i.e., dropout vs. non-dropout group). The dependent measure could be the pretest or other available pre-program measures. A main effect on the attrition factor would be indicative of a threat to external validity or generalizability, while an interaction between group and attrition factors would point to a possible threat to internal validity. Where both effects occur, it is reasonable to infer that there is a threat to both internal and external validity.
The plausibility of alternative explanations might also be minimized using covariance analysis. For example, in a study of the effects of "workfare" programs on social welfare caseloads, one plausible alternative explanation might be the status of local economic conditions. Here, it might be possible to construct a measure of economic conditions and include that measure as a covariate in the statistical analysis. One must be careful when using covariance adjustments of this type; "perfect" covariates do not exist in most social research, and the use of imperfect covariates will not completely adjust for potential alternative explanations. Nevertheless, causal assertions are likely to be strengthened by demonstrating that treatment effects occur even after adjusting on a number of good covariates.
5. By Preventive Action. When potential threats are anticipated, some type of preventive action can often rule them out. For example, if the program is a desirable one, it is likely that the comparison group would feel jealous or demoralized. Several actions can be taken to minimize the effects of these attitudes, including offering the program to the comparison group upon completion of the study, or using program and comparison groups which have little opportunity for contact and communication. In addition, auditing methods and quality control can be used to track potential experimental dropouts or to ensure the standardization of measurement.
The five categories listed above should not be considered mutually exclusive. The inclusion of measurements designed to minimize threats to validity will obviously be related to the design structure and is likely to be a factor in the analysis. A good research plan should, where possible, make use of multiple methods for reducing threats. In general, reducing a particular threat by design or preventive action will probably be stronger than using one of the other three approaches. The choice of which strategy to use for any particular threat is complex and depends at least on the cost of the strategy and on the potential seriousness of the threat.
Basic Design Elements. Most research designs can be constructed from four basic elements:
1. Time. A causal relationship, by its very nature, implies that some time has elapsed between the occurrence of the cause and the consequent effect. While for some phenomena the elapsed time might be measured in microseconds and therefore might be unnoticeable to a casual observer, we normally assume that cause and effect in social science arenas do not occur simultaneously. In design notation we indicate this temporal element horizontally: whatever symbol is used to indicate the presumed cause would be placed to the left of the symbol indicating measurement of the effect. Thus, as we read from left to right in design notation, we are reading across time. Complex designs might involve a lengthy sequence of observations and programs or treatments across time.
2. Program(s) or Treatment(s). The presumed cause may be a program or treatment under the explicit control of the researcher, or the occurrence of some natural event or program not explicitly controlled. In design notation we usually depict a presumed cause with the symbol "X". When multiple programs or treatments are being studied using the same design, we can keep the programs distinct by using subscripts such as "X1" or "X2". For a comparison group (i.e., one which does not receive the program under study) no "X" is used.
3. Observation(s) or Measure(s). Measurements are typically depicted in design notation with the symbol "O". If the same measurement or observation is taken at every point in time in a design, then this "O" will be sufficient. Similarly, if the same set of measures is given at every point in time in the study, the "O" can be used to depict the entire set of measures. However, if different measures are given at different times, it is useful to subscript the "O" to indicate which measurement is being given at which point in time.
4. Groups or Individuals. The final design element consists of the intact groups or the individuals who participate in the various conditions. Typically, there will be one or more program and comparison groups. In design notation, each group is indicated on a separate line. Furthermore, the manner in which groups are assigned to the conditions can be indicated by an appropriate symbol at the beginning of each line. Here, "R" will represent a group that was randomly assigned, "N" will depict a group that was nonrandomly assigned (i.e., a nonequivalent group or cohort), and "C" will indicate that the group was assigned using a cutoff score on a measurement.
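Putting the four elements together, a classic randomized pretest-posttest design could be written in the notation described above. This rendering is an illustrative sketch, not from the original text:

```python
# Each line is one group; reading left to right reads across time.
# R = random assignment, O = observation/measure, X = program or treatment.
design = [
    "R  O  X  O",  # program group: pretest, treatment, posttest
    "R  O     O",  # comparison group: pretest, no treatment, posttest
]
for line in design:
    print(line)
```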
Q 4. How is the Case Study method useful in Business Research? Give two specific examples of how the case study method can be applied to business research.
Ans.: While case study writing may seem easy at first glance, developing an effective case study (also called a success story) is an art. Like other marketing communication skills, learning how to write a case study takes time. What's more, writing case studies without careful planning usually produces suboptimal results. Savvy case study writers increase their chances of success by following these proven techniques for writing an effective case study:
Involve the customer throughout the process. Involving the customer throughout the case study development process helps ensure customer cooperation and approval, and results in an improved case study. Obtain customer permission before writing the document, solicit input during the development, and secure approval after drafting the document.
Write all customer quotes for their review. Rather than asking the customer to draft their quotes, writing them for their review usually results in more compelling material.
Case Study Writing Ideas
Establish a document template. A template serves as a roadmap for the case study process, and ensures that the document looks, feels, and reads consistently. Visually, the template helps build the brand; procedurally, it simplifies the actual writing. Before beginning work, define 3-5 specific elements to include in every case study, formalize those elements, and stick to them.
Start with a bang. Use action verbs and emphasize benefits in the case study title and subtitle. Include a short (less than 20-word) customer quote in larger text. Then,
summarize the key points of the case study in 2-3 succinct bullet points. The goal should be to tease the reader into wanting to read more.
Organize according to problem, solution, and benefits. Regardless of length, the time-tested, most effective organization for a case study follows the problem-solution-benefits flow. First, describe the business and/or technical problem or issue; next, describe the solution to this problem or resolution of this issue; finally, describe how the
customer benefited from the particular solution (more on this