Types of Studies
Published studies come in many forms, each of which may be viewed from different perspectives as to its usefulness in a particular context. Some of these forms include:

Clinical studies: Scientific experiments producing new data, which is generally collated and/or analyzed to some extent depending on the purpose of the study. Examples:

  • Laboratory testing of samples of beef liver to estimate an average iron concentration;
  • Calibration of a testing method to determine the average BUN in a population of cats when tested using that method.

Analytical studies: These draw on the data from multiple clinical studies in order to widen the scope of the data. Examples:

  • Comparison of studies on beef, pork, turkey and chicken liver to derive an average iron concentration for liver in general;
  • Comparison of results from different testing methods to determine the average BUN for cats in general, or to compare the consistency of test results across methods.

Derivative studies: These are based on the reported results of diverse previous studies in an attempt to synthesize "new data" by extrapolation or other forms of interpretation. Example:

  • Assigning values for iron concentration in lamb liver based on data from similar species such as beef and pork, or for duck liver based on data from turkey and chicken.

Cohort studies: These involve comparison of data from similar subject populations to determine the existence of a common characteristic or to measure such a characteristic quantitatively. Example:

  • Estimating the average percentage of cats with renal failure in a particular age group.

Compilations of anecdotal evidence may also be considered cohort studies.

Reviews: These aren't actually research but rather interpretative opinions of the meaning of one or more studies. Much of the information on websites is actually in the form of reviews and should be distinguished from research, even though references might be cited. This also often applies to publications by committees.

    Components of Studies
    Some or all of these are inherent or implied in all studies. The degree to which they are included in the report is a governing factor in the ability to evaluate the report as a basis for drawing independent conclusions.
    Purpose of the study -- Examples include:
  • Collection of data for a statistical database or as the basis for research (this data may be the result of any of the above types of studies);
  • Testing or proving a new hypothesis;
  • Testing or elaborating on the hypotheses of previous research;
    Sources of data -- Significant factors include:
  • Clinical studies: Criteria for choosing the sample population including a control group;
  • Analytical and comparative studies (including cohort): Selection and ranking of sources by rules of "Best Evidence" and by their relevance to the goals of the current study;
  • Size of the data set: a study based on 60 samples is preferable to one based on 12, all other factors being equivalent (see the sketch following this list);
  • Consistency of data: a study of cats in similar life stages, with similar body conditions and states of health, may produce more meaningful results than a completely random sampling.
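
    As a rough illustration of the sample-size point above (a sketch only; the spread used is a made-up figure, not taken from any study), the uncertainty of an estimated average shrinks roughly with the square root of the number of samples:

      # Illustration: the standard error of a mean shrinks as 1/sqrt(n),
      # so 60 samples give a noticeably tighter estimate than 12.
      # The assumed spread (2.0 mg/100 g) is hypothetical.
      import math

      population_sd = 2.0  # assumed spread of iron concentration, mg/100 g

      for n in (12, 60):
          standard_error = population_sd / math.sqrt(n)
          print(f"n = {n:2d}: standard error of the mean is about {standard_error:.2f} mg/100 g")

    Here the 60-sample study pins down the average more than twice as tightly as the 12-sample study, which is the sense in which it is "preferable, all other factors being equivalent."
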
    Methodology (especially in clinical studies):
  • Selection of test and control groups;
  • Establishment of baseline values prior to initiation of the trial;
  • Methods to narrow the range of variables such as cross-over testing;
  • Laboratory techniques used to obtain and refine data.
    Analysis:
  • Selection of appropriate analytical models;
  • Cross-checking by the application of different models;
  • Determination of the precision of data and results and handling of variances; this may include information on margin of error, standard deviation, etc., and the filtering out of anomalies and artifacts (a brief sketch follows this list);
  • May include tables and charts of both raw and derived data.
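
    The precision item above can be made concrete with a minimal sketch (the measurements, the two-standard-deviation cutoff and the 95% figure are illustrative assumptions, not a prescription from any particular study):

      # Sketch: summarizing measurements with a mean, standard deviation and an
      # approximate 95% margin of error, after discarding an obvious outlier.
      # The values and the 2-standard-deviation cutoff are invented for illustration.
      import statistics

      bun_results = [22, 25, 24, 27, 23, 26, 25, 61, 24, 26]  # hypothetical BUN values, mg/dL

      mean = statistics.mean(bun_results)
      sd = statistics.stdev(bun_results)

      # Filter out anomalies: keep values within 2 standard deviations of the mean.
      kept = [x for x in bun_results if abs(x - mean) <= 2 * sd]

      clean_mean = statistics.mean(kept)
      clean_sd = statistics.stdev(kept)
      margin = 1.96 * clean_sd / len(kept) ** 0.5  # rough 95% margin of error

      print(f"raw mean {mean:.1f}; filtered mean {clean_mean:.1f} +/- {margin:.1f} (n={len(kept)})")

    A report that shows this kind of information for both the raw and the filtered data gives the reader a chance to judge how anomalies were handled.
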
    Discussion: May touch on methods and analyses (reasons for choices, estimations of degree of confidence); may suggest conclusions to be drawn or directions for further research.

    Results: Generally an extension or synopsis of the discussion (if included). May propose positive conclusions.

    References Cited: Useful for corroboration or as additional resources.


    Considerations in the evaluation of individual studies include their source and type, the existence of possible bias, the degree of relevance, the logical design, and the amount of detail provided in the report.

    Source: Categories could include governmental agencies, university research, scientific journals, organizations attached to a corporate entity, professional/trade organizations or guilds, symposia or other committees, or individuals. The qualifications of the authors should be investigated if possible.

    Type: See the types above; rank according to the appropriate Rules of Evidence.

    Derivative studies should be examined for the methods by which data was assigned: whether it was adopted directly or as an average, or whether computational methods were used that take into account differences between the populations from which the comparative data was derived.
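
    For instance (the figures below are invented purely to illustrate the kinds of assignment described above; they are not real nutrient data), a derivative value might be adopted as a simple average of related species or adjusted by some assumed factor:

      # Two ways a derivative study might assign a value for lamb liver iron.
      # All figures are invented for the example; they are not real nutrient data.
      beef_liver_iron = 6.5   # hypothetical mg iron per 100 g
      pork_liver_iron = 18.0  # hypothetical mg iron per 100 g

      # 1. Adopt the simple average of similar species.
      lamb_estimate = (beef_liver_iron + pork_liver_iron) / 2

      # 2. Apply a computed adjustment intended to reflect differences between
      #    the source populations (the 0.9 factor is an arbitrary stand-in).
      adjusted_estimate = 0.9 * lamb_estimate

      print(f"averaged estimate: {lamb_estimate:.1f} mg/100 g")
      print(f"adjusted estimate: {adjusted_estimate:.1f} mg/100 g")

    Which of these (or what other) methods was used is exactly what the report should state.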

    Reviews should be evaluated on the basis of the authors/source and of any bona fide studies cited.

    Possible bias: Clues may be found by looking at the source, authors and purpose of the study.

    Purpose: Simple data collection is less likely to introduce bias if no appropriate data is excluded.

    Interest basis of author(s):

    Need to publish:

  • Committees (e.g. symposia) may have a mandate to publish results in some form. This can result in compromise or hypothetical conclusions when no foundation for conclusive findings can be discovered.
  • Research scientists may be motivated to "prove" a hypothesis because inconclusive results are less likely to be published or result in future funding grants.
  • Commercial interest: an obvious possible source of bias needing no explanation. Studies by commercial entities may contain useful information but it should be evaluated objectively and in context.

    Promotion of an ideology or concept may be driven by a motivation to "prove" a hypothesis rather than to present objective information; conflicting evidence is often discounted, discredited or ignored.

    Relevance: The usefulness of a study in any given case is related to how closely the study's purpose and data population match the question at hand. For example, in trying to determine the effect of an excess of a dietary component on cats, would studies on rats or sheep be applicable, and to what degree? The answer may require decisions based on comparisons of the physiology of the various species. Use of such diverse data without reasonable confirmation of its appropriateness may leave an area of doubt.

    Detail: The degree to which all of the above components were included in the report.
  • Note: In abstracts that do not include detail such as data sources, this information may sometimes be found by locating the prior studies cited in the "References" section of the report, each of which should be evaluated on its own merits.
  • Logical basis:
  • Design: Should anticipate and account for possible variations. If the study is intended to determine causality, it should provide methods to distinguish causation from mere statistical correlation (see the sketch after this list).
  • Conclusions: Should be supported by data derived from the analysis. If this data isn't provided (as in an abstract), the conclusions are subject to question and should be evaluated by other available means, such as comparison with existing studies.
  • Discussion/Conclusions:
  • Language used: If there is a single Truth in research, it's that a single study does not constitute proof. Statements that "proof" has been established are questionable; more factual expressions use wording such as "demonstrates a correlation" or "would seem to indicate". Inconclusive results may be interpreted as a lack of support for a premise, but they do not disprove it. A truly objective study will often include a statement indicating the need for further research to provide corroboration or to resolve inconsistencies or contradictions.
  • Inclusion or omission of relevant contradictory evidence:
    Logical explanation of variances adds to the credibility of conclusions; discounting or discarding them without stating well-founded reasons points to a possible intent to manipulate data or at best dilutes the credibility of any conclusions drawn.
  • References: Valuable in assessing the quality and relevance of evidence on which a study is based and the plausibility of conclusions drawn.
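
    The causality point under "Design" above can be illustrated with a toy simulation (all names and numbers are invented; this is only a sketch of the statistical idea): two measurements can track each other closely simply because both depend on a third factor, such as age.

      # Toy simulation: two quantities that both rise with age come out strongly
      # correlated even though neither causes the other. All numbers are invented.
      import random

      random.seed(1)

      ages = [random.uniform(1, 18) for _ in range(200)]              # hypothetical cat ages, years
      bun = [20 + 1.5 * a + random.gauss(0, 3) for a in ages]         # rises with age
      dental_wear = [1 + 0.4 * a + random.gauss(0, 1) for a in ages]  # also rises with age

      def correlation(xs, ys):
          n = len(xs)
          mx, my = sum(xs) / n, sum(ys) / n
          cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
          sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
          sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
          return cov / (sx * sy)

      # A strong correlation appears, yet dental wear does not cause a high BUN;
      # age drives both. A design that controls for age would reveal this.
      print(f"correlation(BUN, dental wear) = {correlation(bun, dental_wear):.2f}")

    A well-designed study provides some way (a control group, matching of subjects, cross-over testing, etc.) to separate this kind of coincidence from a genuine causal effect.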

    Comparing similar studies in context:

    Consistency among similar studies:
    If a group of studies has variously contradictory results, priority should be attached to those with the greatest relevance to the question at hand; others may be discounted if they lack sufficient relevance.
    Degree of confidence relative to contradictory studies:
    If a group of studies with equal relevance shows contradictory results, they should be ranked based on an individual evaluation of the credibility of each study. If the differences can be resolved, conclusions may be drawn.
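
    One simple way to make such a ranking concrete (a sketch only; the scores and findings are invented, and nothing above prescribes this particular scheme) is to score each study for relevance and credibility and sort on those two values:

      # Sketch: rank contradictory studies by relevance first, credibility second.
      # The studies, scores and findings are all hypothetical.
      studies = [
          {"name": "Study A", "relevance": 0.9, "credibility": 0.6, "finding": "effect seen"},
          {"name": "Study B", "relevance": 0.9, "credibility": 0.8, "finding": "no effect"},
          {"name": "Study C", "relevance": 0.3, "credibility": 0.9, "finding": "effect seen"},
      ]

      # Most relevant studies come first; ties are broken by credibility.
      ranked = sorted(studies, key=lambda s: (s["relevance"], s["credibility"]), reverse=True)

      for s in ranked:
          print(f"{s['name']}: relevance {s['relevance']}, credibility {s['credibility']}, finding: {s['finding']}")

    However the scoring is done, the point is the order of operations: relevance to the question at hand is weighed first, and credibility is used to resolve contradictions among equally relevant studies.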