Design of medical clinical trials. The term "design" (from the English design) means a plan, project, sketch, or construction. This section covers: qualitative and quantitative research methods in evidence-based medicine; clinical trials, their definition and classification; statistical analysis in evidence-based medicine; and levels of evidence and grades of recommendation for clinical trial results.

A clinical trial is any prospective study in which patients are assigned to an intervention or comparison group in order to determine a causal relationship between a medical intervention and a clinical outcome. It is the final stage of clinical research, in which the validity of new theoretical knowledge is tested. Clinical trial design is the way a scientific study is conducted in the clinic, that is, its organization or architecture.

The design type of a clinical trial is a set of classification features that correspond to: 1) certain typical clinical tasks; 2) research methods; 3) methods of statistical processing of the results.

Classification of studies by design. Observational studies: one or more groups of patients are described and followed with respect to certain characteristics, and the researcher collects data by simply observing events in their natural course, without actively interfering with them. Experimental studies: the results of an intervention (drug, procedure, treatment, etc.) are evaluated in one, two, or more groups, and the subjects are followed over time.

1. Observational
   - Descriptive: case reports, case series
   - Analytical: case-control studies, cohort studies
2. Experimental
   - Clinical trials

The most important requirements for medical research:
- correct organization (design) of the study and a mathematically sound method of randomization;
- clearly defined and consistently applied criteria for inclusion in and exclusion from the study;
- correct choice of criteria for the outcome of the disease with and without treatment;
- an appropriate setting and duration for the study;
- correct use of statistical processing methods.

General principles of classical scientific research. Clinical trials may be: Controlled: a drug or procedure is compared with other drugs or procedures; such trials are more common and more likely to detect a difference between treatments. Uncontrolled: experience with a drug or procedure is reported without comparison to another treatment option; such trials are less common and less reliable, and are more often used to evaluate procedures than drugs.

Types of clinical questions facing the physician caring for a patient. The main categories of clinical questions are: prevalence of disease, risk factors, diagnosis, prognosis, and treatment efficacy. Abnormality: is the patient healthy or sick? Diagnosis: how accurate is the diagnosis? Frequency: how common is the disease? Risk: what factors are associated with an increased risk of disease?

Prognosis - What are the consequences of the disease? Treatment - How will the course of the disease change with treatment? Prevention - Are there methods to prevent disease in healthy people? Is the course of the disease improved with early recognition and treatment? Cause - What factors lead to the disease? Cost - How much does it cost to treat this condition?

Types of medical research (in descending order of strength of evidence): systematic reviews and meta-analyses; randomized clinical trials (RCTs); cohort studies; case-control studies; case series and single case reports; in vitro and animal studies.

Systematic reviews (SRs) are scientific works whose object of study is the results of a number of original studies on a single problem: those results are analyzed using approaches that reduce the possibility of systematic and random error. SRs summarize the results of various studies on a given topic and are among the most widely read types of scientific publication, because they allow the reader to become acquainted with a problem of interest quickly and thoroughly. The goal of an SR is a balanced and impartial examination of the results of previous studies.

A qualitative systematic review examines the results of original research on a single question but does not include a statistical analysis.

Meta-analysis is the pinnacle of evidence-based research: it quantifies the cumulative effect on the basis of the results of all relevant scientific studies (Davies H., Crombie I., 1999). It is a quantitative systematic review of the literature, or a quantitative synthesis of primary data, yielding summary statistics.
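As a minimal, purely illustrative sketch of how a fixed-effect meta-analysis pools study results, the following Python snippet combines hypothetical log odds ratios with inverse-variance weights. The effect sizes and standard errors are invented for the example, not taken from any real review:

```python
import math

# Hypothetical per-study effect estimates (log odds ratios) and their
# standard errors; three studies, numbers invented for illustration.
effects = [-0.40, -0.25, -0.55]
ses = [0.20, 0.15, 0.30]

# Inverse-variance (fixed-effect) weights: w_i = 1 / SE_i^2,
# so more precise studies contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for se in ses]

# Pooled effect is the weighted mean of the individual effects.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"pooled log OR = {pooled:.3f} (SE {pooled_se:.3f})")
```

Real meta-analyses add heterogeneity checks and, when heterogeneity is present, random-effects models; this sketch shows only the core weighting idea.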

Randomized controlled trials (RCTs). In modern medical science, RCTs are the generally accepted standard of scientific research for evaluating clinical effectiveness. Randomization is a technique for generating a random sequence of assignments of trial subjects to groups (random: chance). RCTs provide the criteria for treatment evaluation.
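The random allocation sequence can be generated in many ways; one common approach is permuted-block randomization, which keeps the arms balanced throughout recruitment. The sketch below is a simplified Python illustration (the group labels, block size, and seed are arbitrary choices for the example, not part of any specific trial protocol):

```python
import random

def block_randomize(n_patients, seed=42):
    """Permuted-block randomization with blocks of 4: each block
    contains exactly two 'active' and two 'control' assignments,
    so the groups never differ by more than two patients."""
    rng = random.Random(seed)  # fixed seed => reproducible allocation list
    allocation = []
    while len(allocation) < n_patients:
        block = ["active", "active", "control", "control"]
        rng.shuffle(block)  # random order within the block
        allocation.extend(block)
    return allocation[:n_patients]

allocation = block_randomize(20)
```

In a real trial the sequence would be prepared by someone independent of recruitment and concealed from the enrolling clinicians (allocation concealment).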

Research structure in an RCT:
1. Presence of a control group
2. Clear criteria for patient selection (inclusion and exclusion)
3. Enrollment of patients in the study before randomization into groups
4. Random allocation of patients to groups (randomization)
5. "Blind" treatment
6. "Blind" evaluation of treatment results

Study design: presentation of results.
7. Information on complications and side effects of treatment
8. Information on the number of patients who dropped out during the trial
9. Adequate statistical analysis, with references to the statistical methods and software used
10. Information on the size of the effect found and the statistical power of the study

In an RCT, the final results should be compared between two groups of patients: the control group, which receives no treatment, standard (conventional) treatment, or placebo; and the active treatment group, which receives the treatment whose effectiveness is under investigation.

A placebo is an inert substance (or procedure) whose effect is compared with that of a real drug or other intervention. In clinical trials, placebo is used together with blinding, so that participants do not know which treatment they have been assigned (Maltsev V. et al., 2001). Placebo control is ethical in cases where the subject is not significantly harmed by going without medication.

Active control: a drug known to be effective with respect to the indicator under study is used as the comparator (most often a "gold standard" drug that is well studied and has long been in wide practical use).

Homogeneity of the compared groups: patient groups should be comparable and homogeneous with respect to clinical features of the disease and comorbidities, as well as age, sex, and race.

Representativeness of the groups. The number of patients in each group should be sufficient to obtain statistically significant results. Patients should be allocated to groups by randomization, that is, by a random sampling method that eliminates any differences between the compared groups that could potentially affect the study result.
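The required group size can be estimated before the trial from the expected event rates. Below is a rough sketch using the standard normal-approximation formula for comparing two proportions; the 30% vs. 20% event rates are hypothetical, and real trials often use more refined formulas with continuity corrections:

```python
import math
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per arm for detecting a difference
    between two proportions (simple normal-approximation formula)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test, ~1.96
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

n = n_per_group(0.30, 0.20)  # hypothetical 30% vs 20% event rates
print(f"about {n} patients per group")
```

The result (roughly 290 patients per arm for this scenario) illustrates why underpowered trials with a few dozen patients often cannot detect clinically meaningful differences.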

Blinding. To minimize the possibility of conscious or unconscious influence by participants on the research results, that is, to exclude the subjective factor, evidence-based medicine uses the method of "blinding".

Types of blinding:
- Single-blind: the patient does not know which group they belong to, but the doctor does
- Double-blind: neither the patient nor the doctor knows the group assignment
- Triple-blind: the patient, the doctor, and the organizers (those performing the statistical processing) do not know the group assignment
- Open-label: all participants in the study know the assignment

RCT results should be practically meaningful and informative. This is possible only with sufficiently long follow-up of patients and a low proportion of patients who withdraw from the study (<10%).

Criteria of treatment effectiveness:
- Primary: the main indicators related to the patient's life (death from any cause or from the disease under study; recovery from the disease under study)
- Secondary: improved quality of life, reduced frequency of complications, relief of disease symptoms
- Surrogate (indirect, tertiary): results of laboratory and instrumental tests that are assumed to be associated with the true endpoints, i.e., with the primary and secondary ones

Randomized clinical trials should use objective outcome criteria: mortality from the disease under study; overall mortality; incidence of "major" complications; readmission rate; assessment of quality of life.

Cohort study. A group of patients is selected on the basis of a common feature and followed over time. The study begins with an assumption about a risk factor. Patient groups: those exposed to the risk factor and those not exposed to it. The factors of interest are determined prospectively in the exposed group. The study answers the question: "Will people get sick in the future if they are exposed to the risk factor?" Cohort studies are mostly prospective, but retrospective ones also exist. Both groups are followed in the same way, and outcomes are assessed identically. Historical cohorts: the cohort is selected from past records and followed up to the present.
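The typical summary measure of a cohort study is the relative risk, computed from a 2x2 table of exposure versus outcome. A minimal sketch with invented counts:

```python
# Hypothetical cohort: 200 exposed and 200 unexposed participants,
# followed identically; all counts are illustrative only.
exposed_cases, exposed_total = 30, 200
unexposed_cases, unexposed_total = 15, 200

risk_exposed = exposed_cases / exposed_total        # 0.15
risk_unexposed = unexposed_cases / unexposed_total  # 0.075

# Relative risk: how many times more likely the outcome is
# among the exposed than among the unexposed.
relative_risk = risk_exposed / risk_unexposed
```

A relative risk of 2.0 here means the exposed group developed the outcome twice as often, which is exactly the kind of prospective answer a cohort design provides.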

Case-control study. A study designed to determine the relationship between a risk factor and a clinical outcome. It compares the proportion of participants with the exposure of interest in two groups, one of which developed the clinical outcome under study and the other did not. The case and control groups should belong to the same at-risk population, should have had equal opportunity for exposure, should have disease status classified at t = 0, and should have exposure measured in the same way in both groups. Such studies may form the basis for new research and theories.

A case-control study is retrospective: the outcome is already known at the start of the study. Cases: the disease or outcome is present. Controls: the disease or outcome is absent. It answers the question: "What happened?" It is a longitudinal study.
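Because a case-control study samples by outcome rather than by exposure, risks cannot be computed directly; the standard summary measure is the odds ratio. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts: exposure history among 100 cases
# (outcome present) and 100 controls (outcome absent).
cases_exposed, cases_unexposed = 40, 60
controls_exposed, controls_unexposed = 20, 80

# Odds ratio = (odds of exposure among cases) / (odds among controls),
# equivalently the cross-product ratio of the 2x2 table.
odds_ratio = (cases_exposed * controls_unexposed) / (
    cases_unexposed * controls_exposed
)
```

An odds ratio above 1 (here about 2.7) suggests the exposure is associated with the outcome; when the outcome is rare, the odds ratio approximates the relative risk.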

Case series (descriptive study). A case series describes the same intervention in consecutively selected patients, without a control group. For example, a vascular surgeon may describe the results of carotid revascularization in 100 patients with cerebral ischemia. Such a study describes a number of characteristics of interest in small observed groups of patients, covers a relatively short period, does not test a research hypothesis, has no control group, and usually precedes other studies. This type of study is limited to data on individual patients.

Theoretical Validation in Sociological Research: Methodology and Methods

In the social sciences there are various types of research and, accordingly, various opportunities for the researcher. Knowing them will help you solve the most difficult problems.


Research Strategies
In the social sciences, it is customary to distinguish two of the most general research strategies - quantitative and qualitative.
A quantitative strategy involves using a deductive approach to test hypotheses or theories, draws on the positivist approach of the natural sciences, and is objectivist in nature. A qualitative strategy focuses on an inductive approach to the development of theories, rejects positivism, concentrates on individual interpretation of social reality, and is constructivist in nature.
Each strategy involves specific methods for collecting and analyzing data. A quantitative strategy is based on the collection of numerical data (coded survey data, aggregated test data, etc.) and the use of mathematical statistics for their analysis. A qualitative strategy, in turn, is based on the collection of textual data (transcripts of individual interviews, participant-observation notes, etc.) and their subsequent structuring using special analytical techniques.
Since the early 1990s, a mixed strategy has begun to evolve, which is to integrate the principles, methods of collecting and analyzing data from qualitative and quantitative strategies in order to obtain more valid and reliable results.

Research designs
Once the research objective has been determined, the appropriate design type must be determined. Research design is a combination of data collection and analysis requirements necessary to achieve the research objectives.
The main types of design:
Cross-sectional design involves collecting data on a relatively large number of observation units. As a rule, it uses a sampling method in order to represent the general population. Data are collected once and are quantitative; descriptive and correlation statistics are then calculated and statistical conclusions drawn.
Longitudinal design consists of repeated cross-sectional interviews to identify changes over time. It is divided into panel studies (the same people take part in repeated surveys) and cohort studies (different groups of people who represent the same general population take part in repeated surveys).
The experimental design provides for the identification of the influence of the independent variable on the dependent variable by leveling threats that can affect the nature of the change in the dependent variable.
The case study design is intended for a detailed study of one or a small number of cases. At the same time, the emphasis is not on the distribution of the results to the entire general population, but on the quality of theoretical analysis and explanation of the mechanism of functioning of a particular phenomenon.
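As a small numeric illustration of the descriptive statistics a cross-sectional design yields, the sketch below computes a Wald confidence interval for a sample proportion; the survey counts are invented for the example:

```python
import math
from statistics import NormalDist

def proportion_ci(successes, n, conf=0.95):
    """Wald (normal-approximation) confidence interval for a
    sample proportion from a single cross-sectional survey."""
    p = successes / n
    z = NormalDist().inv_cdf(0.5 + conf / 2)  # ~1.96 for 95%
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# Hypothetical survey: 120 of 400 respondents agree with a statement.
low, high = proportion_ci(120, 400)
```

The interval (roughly 25.5% to 34.5%) is the kind of statement a cross-sectional study can make about the general population from a single wave of data; sharper methods such as the Wilson interval exist for small samples.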

Research Objectives
Among the goals of social research are description, explanation, assessment, comparison, analysis of relationships, the study of cause-and-effect relationships.
Descriptive tasks are solved by collecting data with whichever method suits the situation: questionnaires, observation, document analysis, and so on. One of the main tasks here is to record the data in a form that will later allow their aggregation.
To solve explanatory problems, research approaches suited to the analysis of complex data are used (for example, historical research, case studies, experiments). Their goal is not merely to collect facts but to identify the meanings of the large set of social, political, and cultural elements associated with the problem.
The general purpose of evaluation studies is to test programs or projects for awareness, effectiveness, achievement of goals, etc. The results obtained are usually used to improve them, and sometimes simply to better understand the functioning of the respective programs and projects.

Comparative studies are used to gain a deeper understanding of the phenomenon under study by identifying its common and distinctive features in various social groups. The most ambitious of them are held in cross-cultural and cross-national contexts.
Studies that establish relationships between variables are also called correlation studies. The result of such studies is specific descriptive information (for example, an analysis of pairwise associations). These are fundamentally quantitative studies.
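The basic quantity in a correlation study is Pearson's correlation coefficient. A self-contained sketch with invented paired data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for two paired numeric samples:
    covariance of the pairs divided by the product of their spreads."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

# Illustrative paired observations (e.g. years of education
# vs. an income score); the numbers are invented.
r = pearson_r([8, 10, 12, 14, 16], [20, 24, 27, 32, 35])
```

A value of r near +1 indicates a strong positive linear association; note that, as the text stresses, correlation alone does not establish a causal relationship.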
Establishing causal relationships requires experimental research. In the social and behavioral sciences there are several types of such research: randomized experiments, true experiments (which presuppose the creation of special experimental conditions simulating the necessary conditions), sociometry (as J. Moreno understood it, of course), and garfinkeling.

SCIENTIFIC RESEARCH IN SOCIOLOGY, ITS ORGANIZATION

Every research project begins with a basic question: why are things as we see them? We are looking for an explanation of the phenomena we observe. Where should we begin?

First of all, with a search of the relevant literature. If we are lucky, this search leads to a ready-made explanation in the form of a theory, formulated by someone who observed similar phenomena before us. More often, we have to use the literature more creatively, trying to construct the most appropriate explanation we can. The rest of the research process is then devoted to testing this explanation: to finding out how much it contributes to our understanding of the essence of the phenomenon under investigation.

The first step in this process of verifying our theory consists in formulating certain hypotheses that, from a logical point of view, must correspond to reality if our initial assumptions about the essence of the observed phenomenon hold. These working hypotheses serve the following purposes:

- they determine those variables that will appear in our study;

- they dictate the ways and means of organizing the research in the way that is optimal for obtaining convincing evidence of the correctness of our understanding.

If our theory is the prototype of a building, then a single working hypothesis is an element of that building, a necessary brick of the particular building that is the theory we use. A working hypothesis explains one of the possible connections that together make up the process we are investigating.

When formulating a hypothesis, one must consider whether it is practically possible to observe the connection between phenomena that it explains. Will we be able to find the data we need, and do we have the capacity to do so? The researcher must choose hypotheses that can be adequately tested given the available time, resources, and the researcher's own abilities. Otherwise the project will fail.

Next, the variables used in the study should be operationalized so that they can be worked with and conclusions significant for the study can be drawn. Here again the question of resources arises: if we lack the time, the money needed to carry out measurements, or assistance (from, say, the people conducting the opinion poll), there is no point in starting. In addition, we must ask whether concepts are being substituted in the course of the research because an unsuitable method has been chosen. The scientific value of the method must be analyzed dispassionately before we start collecting data, because no matter how carefully the data are collected, an unsuitable research method can devalue the results.


In developing the research method, we must also think ahead to the analysis of the data to be collected. Based on the accepted working hypothesis, the researcher must determine which specific mathematical and statistical comparisons will be needed to test it. The main problem here is to find the correct match between the level of measurement resulting from the accepted operationalization of the variables and the level of measurement assumed by the standard statistical procedures to be used; that is, the data obtained during collection should be suitable for the intended statistical processing. It is necessary to make sure not only that the data are of the kind these procedures normally use, but also that they are accurate enough to be processed. The distribution of the data should also match the assumptions of the standard statistical procedures; otherwise processing will be difficult.
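As an example of matching the statistical procedure to the measurement level: for two nominal (categorical) variables, a chi-square test of a contingency table is appropriate, whereas correlation or regression would presuppose interval-level data. A minimal sketch with hypothetical counts:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 contingency table with cells
    (a, b) in the first row and (c, d) in the second: the shortcut
    formula n*(ad - bc)^2 / (row and column totals multiplied)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

# Hypothetical survey counts: approval (yes/no) cross-tabulated
# by respondent group; numbers invented for illustration.
stat = chi_square_2x2(30, 20, 15, 35)
# With 1 degree of freedom, values above 3.84 are significant
# at the 0.05 level.
```

Here the statistic (about 9.1) exceeds the 3.84 critical value, so the two nominal variables would be judged associated at the 5% level; with interval-level data a different procedure would be chosen.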

The next step is design: structuring the research so that the procedures for measurement and data collection are applied with the greatest efficiency. The main task of design is to make sure that the connection between the phenomena we observe is explained by our working hypothesis and is not a random occurrence or the product of an entirely different system of relationships. Alternative working hypotheses should be rejected, not out of hand, but on the basis of serious analysis. Good design therefore begins with a review of the literature in our field of research. This literature review, together with a logical analysis of the situation, should aim to rule out other possible working hypotheses even before we make the case for our own explanation of the observed phenomena.

Research design should be developed with:

1) identification of comparisons used in testing a working hypothesis;

2) determining what kind of observations should be carried out (who or what, in what order, by what means, under what conditions);

3) determining how the data collected in the comparison are to be interpreted (no association, positive association, negative association, etc.);

4) identifying major competing hypotheses that also claim to explain a possible research outcome, and

5) organizing a set of observations in such a way that additional comparisons (testing the applicability of the main competing hypotheses) are made (regardless of the actual results of the study).

When choosing the design of the study, it is necessary to know which statistical methods of analysis are desirable, since the design determines the nature of the data collected. In designing the research, as in choosing a hypothesis and a method, it is essential to ask whether the task we have set exceeds our resources, time, and abilities. The best design is worth nothing if we lack the means to implement it. One must therefore weigh the cost and logistics of data collection throughout the design process.

COLLECTION AND ANALYSIS OF DATA

As mentioned above, data collection and analysis are aimed at testing the validity of the working hypothesis. The following should be noted here.

Various methods of data collection can be applied individually or in combination, and different methods serve different purposes. A researcher may, for example, directly observe a certain political group in order to gather general information and develop a working hypothesis, reach some preliminary conclusions, and then, to obtain precise data for testing the hypothesis, turn to a survey. Moreover, using several methods in one study increases the scientific value of the result. For example, in studying variation in the quality of public services across a city, it may be desirable to corroborate the results of public opinion polls with statistics, official documents, interviews with officials, and the judgments of professionally trained observers. If all these methods of data collection yield the same results regarding the relative position of each area on the service-quality scale, the researcher can be confident in their suitability for the task at hand.

Empirical research can take on the character of discovery. Instead of testing hypotheses derived from accepted explanations, the researcher may collect data that provide the basis for fundamentally new interpretations; indeed, each study usually raises new questions, suggests new explanations, and leads to further research.

DEFINITION OF THE SCIENTIFIC VALUE OF RESEARCH

In designing your own research or evaluating someone else's, it is important to be able to assess whether it meets general but clearly defined criteria of scientific value. The list below is rather broad, and a given study may contain some minor technical errors. But if the researcher can answer these questions positively (at least in the main), he or she can be sure that the project is free from fundamental mistakes that would invalidate the significance of the work done.

1. Is the question to be answered correctly formulated? Do we know the objectives of the research in their entirety? Is the research related to a more fundamental question or problem? Is the object of research important?

2. Have the main objects of analysis been correctly selected, clearly identified and consistently applied?

3. Are the concepts on which the research is based clearly formulated and adequately used? Where are they taken from?

4. Is it clear which explanations need to be verified? If a theory is used, is it logically correct? Where is the source of the theory and its constituent explanations?

5. Is the theory or explanation consistent with existing literature on the subject? Has the literature been studied in detail? Is the project related to previous research or more fundamental research questions?

6. Are working hypotheses clearly identified and formulated? Do they follow logically from the explanation or theory being tested? Are they subject to empirical verification?

7. If more than one hypothesis is tested, what is the relationship between them? Are all hypotheses related to theory, is their role in testing the theory obvious?

8. Are all variables clearly defined and their status (dependent or independent) formulated in the working hypothesis?

9. Did the study include variables that could modify the hypothesized relationship?

10. Are the concepts clearly operationalized? Are measurement procedures detailed so that others can use them? Have they been used by other researchers?

11. Can these procedures be relied upon as being fully consistent with the object of analysis? Have they been verified in this regard?

12. Is the study design clearly defined and appropriate for the task of testing a working hypothesis? Is attention paid to alternative competing hypotheses, and is the design process created to test them in light of possible alternative explanations? Is there a coherent basis for identifiable relationships?

13. Is the “population” of interest to the researcher defined correctly? Is the sample representative? If not, is the researcher aware of the limitations this places on his results? Is the sampling procedure adequately explained?

14. Is the data collection technique (survey, content analysis, etc.) appropriate for the purpose of the study with its objects of study and the type of information collected? Have you followed all the rules for this method of collecting information?

15. Is the data collection process clearly presented? Are their sources fully identified and can others identify them?

16. Is the chosen coding system fully defined and justified (for example, grouping certain income levels into more general categories, or interpreting "in support" versus "no" answers)?

17. Is the construction of the scales or indices used in the study explained? Are they one-dimensional? Does their use retain the original meaning of the concepts?

18. Have the tools been checked?

19. Have there been any attempts to verify the results against other sources?

20. Are the graphics appropriate to the nature of the data collected? Is this noted in the text? Do the tables and graphs distort the results obtained?

21. Are these graphs and tables easy to interpret?

22. Is their proposed interpretation correct?

23. Is the statistical method of data processing chosen correctly? Is it suitable for summarizing the data in tables and graphs?

24. When examining the relationship between variables, does the researcher provide data on its strength, direction, shape, and significance?

25. Is the level of the statistics used consistent with the level of the selected variables and with the purpose of the study?

26. Do the data obtained correspond to the capabilities of the method, and how does the researcher show this?

27. Does the researcher confuse the statistical and the substantive significance of the results, using one in place of the other?

28. Have alternative statistical hypotheses been investigated, and have the results of this investigation been correctly presented and interpreted?

29. Is each stage of data analysis related to the main conclusion of the study? Are the proposed interpretations consistent with the original theory or explanation?

30. Does the research report contain:

a) a clear formulation of the objectives of the study;

b) a literature review sufficient to show the place of the research in the general context of its field;

c) an adequate explanation of the design, data, and research methods;

d) a clear formulation of the conclusions?

31. Are the conclusions supported by the data presented and by the choice of study design? Is the study a significant contribution to the literature on the problem, or is it too general?

It should be emphasized that the above criteria of scientific value have a very wide field of application: they are by no means tied to sociology; they are universal.

Topics for essays

1. The program of political and sociological research is an increment of new knowledge to the already existing one.

2. The hypothesis is the locomotive of political and sociological research.

3. Types of sociological research - how many can there be?

4. Interpretation of basic concepts - what method of philosophical knowledge is analogous to this interpretation?

5. Problematic situation, its significance in the program of political and sociological research.

Review questions and tasks

1. What does any serious research project start with? Why?

2. What role does theory play in research? What is the relationship between theory and a working hypothesis?

3. What dictates the choice of research methodology? Is that choice accidental? Justify your answer.

4. Why does the use of several methods in one study increase its value? Give examples.

5. What is study design? What should be guided by when choosing a design?

6. What does the term "research correctness" mean? How is it determined?

7. What numerical methods are used in applied sociology? What is the criterion for their selection?

8. What is the difference between the statistical and the substantive significance of a result?

9. What ethical problems can arise in the course of sociological research, and how should they be resolved?


The very essence of mixed-methods research lies in its research designs.


Research design is the combination of data collection and analysis requirements necessary to achieve the research objectives. In mixed-methods research, the corresponding designs concern, first of all, how elements of the qualitative and quantitative approaches are combined within a single study.
The main principles of organizing mixed-methods designs are: 1) awareness of the theoretical drive of the research project; 2) awareness of the role of borrowed components in the project; 3) adherence to the methodological assumptions of the base method; 4) work with the maximum available number of data sets. The first principle concerns the purpose of the research (exploration vs. confirmation), the corresponding kinds of scientific reasoning (induction vs. deduction), and the corresponding methods. According to the second principle, the researcher should attend not only to the basic strategies for collecting and analyzing data but also to additional ones that could enrich the main part of the project with important data that cannot be obtained by the basic methods. The third principle concerns the need to adhere to the fundamental requirements of working with data of a given type. The essence of the last principle is obvious: draw data from all available relevant sources.
Mixed-methods studies are often placed on a continuum between qualitative and quantitative research (see the figure below). Zone "A" denotes the use of exclusively qualitative methods; zone "B", mainly qualitative methods with some quantitative components; zone "C", equal use of qualitative and quantitative methods (fully integrated studies); zone "D", mainly quantitative methods with some qualitative components; zone "E", exclusively quantitative methods.


Fig. Qualitative-mixed-quantitative continuum

As for specific mixed-methods designs, there are two main typologies. One suits the case where qualitative and quantitative methods are used at different stages of a single study; the other, the case where alternating or parallel qualitative and quantitative studies are used within one research project.
The first typology includes six mixed designs (see Table 1). An example of a study that uses qualitative and quantitative methods at different stages is concept mapping. In this research strategy, data collection is carried out with qualitative methods (such as brainstorming or focus groups), while the analysis is quantitative (cluster analysis and multidimensional scaling). Depending on the task at hand (exploratory or descriptive), it can be attributed to either the second or the sixth design.
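As a sketch of the quantitative step in such a design, the following stdlib-only example (with hypothetical pile-sort data) builds the co-occurrence similarity matrix on which cluster analysis and multidimensional scaling would then operate:

```python
from itertools import combinations

# Hypothetical pile-sort data: each participant groups brainstormed
# statements (identified by index 0..4) into piles they judge similar.
participants = [
    [{0, 1}, {2, 3, 4}],      # participant 1's piles
    [{0, 1, 2}, {3, 4}],      # participant 2's piles
    [{0, 1}, {2}, {3, 4}],    # participant 3's piles
]

n_statements = 5
# similarity[i][j] = number of participants who put i and j in one pile
similarity = [[0] * n_statements for _ in range(n_statements)]
for piles in participants:
    for pile in piles:
        for i, j in combinations(sorted(pile), 2):
            similarity[i][j] += 1
            similarity[j][i] += 1

# Statements 3 and 4 were grouped together by all three participants:
print(similarity[3][4])  # -> 3
```

Feeding a matrix like this into hierarchical clustering yields the thematic clusters that the concept-mapping strategy then interprets qualitatively.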
According to the second typology, nine mixed-type designs can be distinguished (see Table 2). This typology rests on two main principles. First, in a mixed-type study it is important to determine the status of each paradigm: whether the qualitative and quantitative components have equal status, or whether one is the main one and the other subordinate. Second, it is important to determine how the research will be conducted: in parallel or sequentially. In the sequential case, it is also necessary to decide which component comes first and which second. An example of a research project fitting this typology is one in which the first phase is a qualitative study aimed at building a theory (for example, using Anselm Strauss's "grounded theory"), and the second is a quantitative survey of the specific group of people to whom the developed theory applies and for whom a forecast of the corresponding social phenomenon or problem must be formulated.
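The nine combinations can be enumerated programmatically. A minimal sketch, using a common shorthand (capitals for the dominant paradigm, lowercase for the subordinate one, "+" for parallel, "=>" for sequential):

```python
designs = []

# Equal status: one parallel design, two sequential orderings.
designs.append("QUAL + QUAN")
designs.append("QUAL => QUAN")
designs.append("QUAN => QUAL")

# Dominant/subordinate status (capitals mark the dominant paradigm):
# two parallel designs and four sequential orderings.
for dominant, subordinate in (("QUAL", "quan"), ("QUAN", "qual")):
    designs.append(f"{dominant} + {subordinate}")
    designs.append(f"{dominant} => {subordinate}")
    designs.append(f"{subordinate} => {dominant}")

print(len(designs))  # -> 9
```

Counting confirms the typology: 1 equal-status parallel design, 2 equal-status sequential designs, 2 dominant-paradigm parallel designs, and 4 dominant-paradigm sequential designs.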

Table 1. Designs of mixed studies using qualitative and quantitative methods within the same study *

| Design | Research objectives | Data collection | Data analysis |
|--------|--------------------|-----------------|---------------|
| 1 | Qualitative | Qualitative | Qualitative |
| 2 | Qualitative | Qualitative | Quantitative |
| 3 | Qualitative | Quantitative | Qualitative |
| 4 | Qualitative | Quantitative | Quantitative |
| 5 | Quantitative | Qualitative | Qualitative |
| 6 | Quantitative | Qualitative | Quantitative |
| 7 | Quantitative | Quantitative | Qualitative |
| 8 | Quantitative | Quantitative | Quantitative |

* In this table, designs 2-7 are mixed, design 1 is entirely qualitative, and design 8 is fully quantitative.
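Table 1 is simply the full factorial of the two paradigms across the three stages. A quick sketch verifies that exactly six of the eight resulting designs are mixed:

```python
from itertools import product

stages = ("objectives", "collection", "analysis")
paradigms = ("qualitative", "quantitative")

# All 2 x 2 x 2 combinations of paradigm choices across the three stages,
# in the same order as Table 1 (design 1 first, design 8 last).
designs = list(product(paradigms, repeat=len(stages)))

# A design is "mixed" unless the same paradigm is used at every stage.
mixed = [d for d in designs if len(set(d)) > 1]

print(len(designs), len(mixed))  # -> 8 6
```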

Table 2. Designs of mixed studies using qualitative and quantitative research as different phases of one research project *

| Status of paradigms | Parallel | Sequential |
|---------------------|----------|------------|
| Equal | QUAL + QUAN | QUAL => QUAN; QUAN => QUAL |
| Qualitative dominant | QUAL + quan | QUAL => quan; quan => QUAL |
| Quantitative dominant | QUAN + qual | QUAN => qual; qual => QUAN |

* "QUAL"/"qual" denotes qualitative research, "QUAN"/"quan" quantitative; "+" means simultaneous research, "=>" sequential; capital letters denote the dominant status of a paradigm, lowercase letters a subordinate status.

Of course, these typologies do not exhaust the whole variety of research designs; they should be treated as possible guidelines when planning mixed-methods studies.
Mixed-methods designs in evaluative research.
According to the typology of mixed-methods designs used in evaluation, two main types can be distinguished: component and integrative. In a component design, although qualitative and quantitative methods are used within the same study, they are kept separate from each other. By contrast, in an integrative design, methods belonging to different paradigms are used together.
The component type includes three designs: triangulation, complementary, and expansion. In a triangulation design, the results obtained with one method are used to validate the results obtained with other methods. In a complementary design, the results obtained with the main method are specified and refined on the basis of results obtained with methods of secondary importance. In an expansion design, different methods are used to obtain information on different aspects of the evaluation; that is, each method is responsible for a specific piece of information.
The integrative type includes four designs: iterative, embedded (nested), holistic, and transformative. In an iterative design, the results obtained with one method suggest or direct the use of other methods relevant in the given situation. An embedded design arises when one method is nested within another. A holistic design involves the combined, integrated use of qualitative and quantitative methods, with equal status, to evaluate a program comprehensively. A transformative design occurs when different methods are used together to capture value positions, which are then used to reframe a dialogue among participants who hold different ideological positions.

In UX design, research is a fundamental part of solving relevant problems and of narrowing down to the "right" problems that users actually face. A designer's job is to understand their users. That means going beyond initial assumptions and putting yourself in other people's shoes in order to create products that meet people's needs.

Good research doesn't just end with good data; it ends with good design and functionality that users love, want, and need.

Design research is often overlooked because designers focus on how a design looks. This leads to a superficial understanding of the people it is intended for. That mindset runs contrary to what UX is: it is user-centered.

UX design centers on research to understand people's needs and how the products or services we create will help them.

Here are some research techniques every designer should know when starting a project. Even designers who do not conduct research themselves can use them to communicate better with UX researchers.

Primary research

Primary research essentially boils down to collecting new data in order to understand who you are designing for and what you plan to design. It lets you test your ideas with your users and develop more meaningful solutions for them. Designers usually collect such data through interviews with individuals or small groups, or through surveys and questionnaires.

It's important to understand what you want to research before you start looking for participants, as well as the type and quality of the data you want to collect. In an article from the University of Surrey, the author highlights two important considerations when conducting primary research: validity and practicality.

Validity refers to whether the data truly reflect the subject or phenomenon being studied. Data can be reliable without being valid.

The practical aspects of the research should also be carefully considered when designing the research project, for example:

- cost and budget
- time and scale
- sample size
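One of those practical questions, sample size, can be estimated up front. A minimal sketch, assuming a simple random sample used to estimate a proportion at roughly 95% confidence (the z-value and default proportion are standard textbook assumptions, not from the article):

```python
import math

def survey_sample_size(margin_of_error: float,
                       proportion: float = 0.5,
                       z: float = 1.96) -> int:
    """Required sample size for estimating a proportion with a given
    margin of error at ~95% confidence (z = 1.96), assuming simple
    random sampling from a large population:

        n = z^2 * p * (1 - p) / e^2

    p = 0.5 is the conservative default (maximizes the required n).
    """
    n = z ** 2 * proportion * (1 - proportion) / margin_of_error ** 2
    return math.ceil(n)

print(survey_sample_size(0.05))  # -> 385
```

Widening the acceptable margin of error shrinks the required sample quickly, which is where the cost and time trade-offs above come in.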

In his book Social Research Methods (2001), Bryman identifies four types of validity that can influence the results obtained:

  1. Measurement validity (construct validity): whether a measure actually measures what it claims to measure.

That is, do church attendance statistics really measure the strength of religious belief?

  2. Internal validity: concerns causality, and whether a study's conclusion that one thing causes another is a true reflection of the causes.

That is, is unemployment really the cause of crime, or are there other explanations?

  3. External validity: considers whether the results of a particular study can be generalized to other groups.

That is, if one kind of community development approach is used in this region, will it have the same impact elsewhere?

  4. Ecological validity: considers whether social-scientific findings are applicable to people's everyday, natural settings (Bryman, 2001).

That is, if behavior is observed in an artificial setting, how might that affect how people act?

Secondary research

Secondary research uses existing data from sources such as the internet, books, or articles to support your design choices and provide context for your design. Secondary research is also used to further validate findings from primary research and build a stronger case for the overall design. Typically, secondary sources have already summarized the analytical picture of existing research.

It's fine to use only secondary research to evaluate your design, but if you have the time I would definitely recommend doing primary research alongside it, to really understand who you are designing for and to gather insights that are more relevant and compelling than existing data. When you collect user data specific to your design, it leads to better ideas and a better product.

Evaluation studies

Evaluation research describes a specific problem to ensure usability and grounds it in the needs and desires of real people. One way to conduct evaluation research is to watch people use your product, giving them questions or tasks and asking them to reason out loud as they try to complete each task. There are two types of evaluation research: summative and formative.
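To illustrate the summative side of such a test, here is a minimal sketch (with hypothetical session data) computing two common usability metrics, task completion rate and average time on task:

```python
# Hypothetical think-aloud session records: (participant, completed, seconds)
sessions = [
    ("p1", True, 42.0),
    ("p2", True, 55.0),
    ("p3", False, 90.0),
    ("p4", True, 38.0),
]

completed = [s for s in sessions if s[1]]

# Share of participants who finished the task at all.
completion_rate = len(completed) / len(sessions)

# Average time on task, counting only successful completions.
avg_time_on_task = sum(s[2] for s in completed) / len(completed)

print(f"{completion_rate:.0%}")      # -> 75%
print(f"{avg_time_on_task:.1f} s")   # -> 45.0 s
```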

Summative evaluation research. Summative evaluation aims to understand the results or effects of something. It emphasizes the outcome more than the process.

Summative research can measure things like:

  • Finance: impact in terms of costs, savings, profit, etc.
  • Impact: the broad effect, both positive and negative, including its depth, spread, and time factor.
  • Outcomes: whether desired or undesired effects were achieved.
  • Secondary analysis: analyzing existing data for additional insight.
  • Meta-analysis: integrating the results of multiple studies.
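The last item can be made concrete. A minimal sketch of a fixed-effect, inverse-variance meta-analysis (with hypothetical effect sizes and standard errors), pooling several studies into one weighted estimate:

```python
# Hypothetical per-study effect sizes and their standard errors.
studies = [
    (0.30, 0.10),
    (0.50, 0.20),
    (0.40, 0.25),
]

# Fixed-effect model: weight each study by the inverse of its variance,
# then take the weighted mean of the effect sizes.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 3))  # -> 0.347
```

Note how the most precise study (smallest standard error) dominates the pooled estimate, which is exactly the point of inverse-variance weighting.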

Formative evaluation research. Formative evaluation is used to help strengthen or improve the person or thing being tested.

Formative research can measure things like:

  • Implementation: monitoring the success of a process or project.
  • Needs: a look at the type and level of need.
  • Potential: the ability to use information to form a goal.

Exploratory research


Combining pieces of data and making sense of them is part of the exploratory research process.

Exploratory research is conducted on a topic about which little or nothing is known. Its purpose is to gain a deep understanding of and familiarity with the topic by immersing yourself in it as much as possible, in order to set a direction for how that data might be used in the future.

With exploratory research, you have the opportunity to get new ideas and create worthy solutions to your most significant problems.

Exploratory research lets us test our assumptions about topics that are often overlooked (e.g., prisoners, the homeless), providing an opportunity to generate new ideas and approaches for existing problems or opportunities.

Based on an article from Lynn University, exploratory research tells us that:

  1. It is a convenient way to get background information on a specific topic.
  2. Exploratory research is flexible and can address research questions of all types (what, why, how).
  3. It provides the opportunity to define new terms and clarify existing concepts.
  4. Exploratory research is often used to generate formal hypotheses and to develop more precise research problems.
  5. Exploratory research helps to prioritize further research.