Types of Studies
Published studies come in many forms, each of which may be
interpreted from different perspectives regarding its usefulness in a
particular context. Some of the forms include:
Clinical studies: Scientific experiments
producing new data, which is generally collated and/or analyzed to
some extent depending on the purpose of the study. Example:
Laboratory testing of samples of beef liver to
estimate an average iron concentration.
Analytical studies: These draw on the data from
multiple clinical studies in order to widen the scope of the data. Examples:
Calibration of a testing method to determine the average BUN in
a population of cats when tested using that method.
Comparison of studies on beef, pork, turkey and
chicken liver to derive an average iron concentration for liver
Derivative studies are based on the reported results
of diverse previous studies in an attempt to synthesize "new data"
by extrapolation or other forms of interpretation. Examples:
Comparison of results from different testing methods to
determine the average BUN for cats in general; or to compare the
consistency of test results using different methods.
Assigning values for iron concentration in lamb liver
based on data from similar species such as beef and pork or for
duck liver from turkey and chicken.
Cohort studies involve comparison of data from similar
subject populations to determine the existence of a common
characteristic or to measure such a characteristic quantitatively. Example:
Estimating the average percentage of cats with renal
failure in a particular age group.
Compilations of anecdotal evidence may be considered as
reviews. Reviews aren't actually research but rather interpretative
opinions of the meaning of one or more studies. Much information
on websites is actually in the form of reviews and should be
distinguished from research even though references might be cited.
This also often applies to publications by committees.
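The analytical-study example above (deriving an average iron concentration for liver from studies on beef, pork, turkey, and chicken) can be sketched as a sample-size-weighted mean. All figures below are hypothetical, purely for illustration, not real measurements:

```python
# Hypothetical per-study results: (source, mean iron in mg/100 g, sample size).
# The values are invented for illustration only.
studies = [
    ("beef liver", 4.9, 30),
    ("pork liver", 18.0, 25),
    ("turkey liver", 11.0, 12),
    ("chicken liver", 9.0, 20),
]

def pooled_mean(results):
    """Pool study means into one estimate, weighting each by its sample size."""
    total_n = sum(n for _, _, n in results)
    return sum(mean * n for _, mean, n in results) / total_n

print(f"Pooled average iron concentration: {pooled_mean(studies):.2f} mg/100g")
```

Weighting by sample size is only one possible choice; a careful analytical study would also account for differences in methodology among the source studies.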
Components of Studies
Some or all of the following components are inherent or implied in all studies. The degree to which they are included in the report is a governing
factor in the ability to evaluate the report as a basis for drawing independent conclusions.
Purpose of the study -- Examples include:
Collection of data for a statistical database or as
the basis for research (such data may be the result of any of the above types of study);
Testing or proving a new hypothesis;
Testing or elaborating on the hypotheses of previous research;
Sources of data -- Significant factors include:
Methodology (especially in clinical studies):
Criteria for choosing the test population including
a control population;
Analytical and comparative studies (including cohort):
Selection and degree of ranking of sources by rules
of "Best Evidence" and relevance to the goals of the current study.
Size of data set: a study based on 60 samples is preferable to
one on 12, all other factors being equivalent.
Consistency of data: a study of cats in similar life stages with
similar body conditions and stages of health may produce more
meaningful results than a completely random sampling.
Selection of test and control groups;
Establishment of baseline values prior to initiation of the study;
Methods to narrow the range of variables, such as cross-over designs;
Laboratory techniques used to obtain and refine data;
Selection of appropriate analytical models;
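The sample-size point above (60 samples preferable to 12) has a simple statistical basis: the standard error of a mean shrinks in proportion to 1/sqrt(n). A minimal sketch, assuming both studies measure the same quantity with a similar (hypothetical) spread:

```python
import math

def standard_error(std_dev, n):
    """Standard error of the mean: uncertainty shrinks as 1/sqrt(n)."""
    return std_dev / math.sqrt(n)

# Assume a common, hypothetical standard deviation for both studies.
spread = 2.0
se_small = standard_error(spread, 12)   # the 12-sample study
se_large = standard_error(spread, 60)   # the 60-sample study

print(f"n=12: SE = {se_small:.3f}")
print(f"n=60: SE = {se_large:.3f}")  # smaller by a factor of sqrt(60/12) = sqrt(5)
```

So, other factors being equivalent, the larger study's estimate of the mean is roughly 2.2 times more precise, not 5 times — precision grows with the square root of the sample size.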
Discussion: May touch on methods and analyses (reasons for
choices, estimations of degree of confidence); may suggest
conclusions to be drawn or directions for further research.
Cross-checking by the application of different models;
Determination of precision of data and results, handling of
variances; (This may include
information on margin of error, std. deviation, etc. and
filtering out anomalies and artifacts.)
May include tables and charts of both raw and derived data.
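The precision items above (margin of error, standard deviation, filtering out anomalies) can be illustrated with a toy calculation. The measurements and the two-standard-deviation cutoff below are hypothetical choices for illustration, not a prescribed method:

```python
import statistics

# Hypothetical raw measurements, including one obvious anomaly (9.9).
raw = [4.8, 5.1, 4.9, 5.0, 5.2, 9.9, 4.7, 5.3]

mean = statistics.mean(raw)
sd = statistics.stdev(raw)

# A simple (and crude) filter: drop points more than 2 standard
# deviations from the mean. Real studies should justify any such cutoff.
filtered = [x for x in raw if abs(x - mean) <= 2 * sd]

clean_mean = statistics.mean(filtered)
clean_sd = statistics.stdev(filtered)
# Rough 95% margin of error for the mean (normal approximation).
margin = 1.96 * clean_sd / len(filtered) ** 0.5
print(f"filtered mean = {clean_mean:.2f} +/- {margin:.2f}")
```

Note how a single anomalous point inflates both the mean and the standard deviation of the raw data; a report should state whether, and why, such points were excluded.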
Results: Generally an extension or synopsis of the
discussion (if included). May propose positive conclusions.
References Cited: useful for corroboration or as a source of further background.
Considerations in the evaluation of individual studies
Source: Governmental agency, university
research department, scientific journal, research organization attached to
a corporate entity, professional/trade organization or guild,
symposium or other committee. Qualifications of the authors.
Type: See the types above and rank according to the appropriate
Rules of Evidence.
Derivative studies should be examined for
the methods by which data was assigned: whether it was adopted
directly or as an average, or whether computational methods
were used taking into account differences in the populations
from which the comparative data was derived.
Reviews should be evaluated on the basis of the
authors/source and of any bona fide studies cited.
Possible bias: Clues may be found by looking at the
source, authors and purpose of the study.
Purpose: simple data collection may be less
likely to introduce bias if no appropriate data is excluded.
Relevance: The usefulness of a study in any case may be
related to the degree of similarity between the study's
purpose and data population and the question at hand. For example,
in trying to determine the effect of an excess of a dietary
component on cats, would studies on rats or sheep be applicable
and to what degree? The answer to this question may require
decisions based on comparisons of the physiology of the various
species. Use of diverse data such as this without reasonable
confirmation of its appropriateness may leave an area of doubt.
Interest basis of author(s):
Need to publish:
Committees (e.g. symposia) may have a mandate to
publish results in some form. This can result in
compromise or hypothetical conclusions when no foundation
for absolute findings can be discovered.
Commercial interest: an obvious possible source of
bias needing no explanation. Studies by commercial entities
may provide useful information but it should be evaluated
objectively and in context.
Research scientists may be motivated to "prove" a
hypothesis because inconclusive results are less likely to
be published or result in future funding grants.
Promotion of an ideology or concept may be driven by a
motivation to "prove" a hypothesis rather than discover
objective information; often conflicting evidence is
discounted, discredited or ignored.
Detail: The degree to which all of the above components
were included in the report.
In abstracts that do not include detail such as data sources, this
information may sometimes be found by locating prior studies
cited in the "References" section of the report, each of which
should be evaluated on its own merits.
Design: Should anticipate and account
for possible variations. If intended to
determine causality, the study should provide
methods to distinguish causation from mere statistical correlation.
Conclusions: should be supported by data
derived from the analysis. If this data isn't provided (as in
an abstract) the conclusions are subject to question and
should be evaluated by other available means such as
comparison to existing studies.
Language used: if there is a single Truth in
research it's that a single study does not constitute
proof. Statements that "proof" has been
established are questionable. More factual expressions use
wording such as "demonstrates a correlation", or "would seem
to indicate". Reporting inconclusive results may be
interpreted as lack of support for a premise but does not
disprove it. An objective study will often include a statement
indicating the need for further research to provide
corroboration or resolve inconsistencies or contradictions.
References: Valuable in assessing the quality and
relevance of the evidence on which a study is based and the
plausibility of conclusions drawn.
Inclusion or omission of relevant contradictory evidence:
A logical explanation of variances adds to the credibility of
conclusions; discounting or discarding them without stating
well-founded reasons points to a possible intent to manipulate
data, or at best dilutes the credibility of any conclusions.
Comparing similar studies in context:
Consistency among similar studies:
If a group of studies has variously contradictory
results, priority should be attached to those with the greatest
relevance to the question at hand. Others may be discarded
based on lack of relevance.
Degree of confidence relative to contradictory studies.
If a group of studies with equal relevance shows
contradictory results, they should be ranked based on individual
evaluation of the credibility of each study. If the differences
are resolved, conclusions may be drawn.
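The two-step ranking just described (discard studies lacking relevance, then rank equally relevant studies by credibility) might be sketched as follows; the study names, scores, and cutoff are invented for illustration:

```python
# Hypothetical studies scored 0-10 for relevance to the question at hand
# and for credibility from individual evaluation of each study.
studies = [
    {"name": "Study A", "relevance": 9, "credibility": 6},
    {"name": "Study B", "relevance": 9, "credibility": 8},
    {"name": "Study C", "relevance": 4, "credibility": 9},
]

# Step 1: discard low-relevance studies (cutoff is an arbitrary example).
# Step 2: rank by relevance first; credibility breaks ties.
RELEVANCE_CUTOFF = 5
ranked = sorted(
    (s for s in studies if s["relevance"] >= RELEVANCE_CUTOFF),
    key=lambda s: (s["relevance"], s["credibility"]),
    reverse=True,
)
for s in ranked:
    print(s["name"])
```

Here Study C is discarded for lack of relevance despite its high credibility, and the tie between A and B is broken in favor of the more credible B — mirroring the evaluation order described above.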