Patent Litigation: Putting Assumptions to the Empirical Test

July 28, 2016

Canada ranks a disappointing 16th in WIPO’s latest Global Innovation Index. If Canada wants to rise in those rankings, it must stop basing its intellectual property policies on general presumptions about patent law. Instead, it ought to use empirical evidence and tools to tailor-make policy decisions. For that reason, the Centre for Intellectual Property Policy (CIPP), in a project led by Prof. Gold, has created a comprehensive and detailed database of Canadian patent law cases decided between 2000 and 2015. This database allows us to test some core assumptions regarding the Canadian patent system.

The CIPP database, which records biographical information along with the substantive legal issues raised, includes all infringement, impeachment and PM(NOC) patent cases from every level of federal court in Canada that dealt with a substantive infringement or validity issue. Because each level of decision on each patent at issue has its own entry, indexed by the patent number in question, a single case contesting multiple patents, or multiple cases concerning the same patent, will have several database entries. The database contains all 464 substantive patent or PM(NOC) decisions issued between 2000 and 2015 (along with the lower court decisions leading to a final decision issued in that period), of which 298 were brought under the PM(NOC) Regulations. In constructing the database, the CIPP was careful to avoid both random and systematic errors through various coding reliability practices, described in the Methodology section below.

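To make the indexing concrete, the sketch below shows how one entry per patent per court level might be represented. The field names and values are hypothetical illustrations, not the actual CIPP schema.

```python
# A minimal, hypothetical sketch of one database entry per patent per court
# level; these field names and values are illustrative, not the CIPP schema.
from dataclasses import dataclass

@dataclass
class PatentDecisionEntry:
    patent_number: str    # the indexing key: the patent at issue
    case_citation: str    # the decision in which the patent was contested
    court_level: str      # e.g., "FC", "FCA", "SCC"
    proceeding_type: str  # "infringement", "impeachment", or "PM(NOC)"
    held_invalid: bool    # whether the court held the claims at issue invalid

# A single case contesting two patents yields two entries, and each further
# level of decision on the same patent adds another.
entries = [
    PatentDecisionEntry("2000001", "2009 FC 123", "FC", "PM(NOC)", True),
    PatentDecisionEntry("2000002", "2009 FC 123", "FC", "PM(NOC)", False),
    PatentDecisionEntry("2000001", "2010 FCA 45", "FCA", "PM(NOC)", True),
]
```
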
One of the great advantages of the CIPP database is the ability to fact-check claims about patent law and litigation in Canada. A quick analysis of the database (we will publish some of these findings in academic papers) demonstrates that some commonly held assumptions about Canadian patent law are wrong.

For example, we conducted an analysis of variance to determine whether the judge writing a decision or the lead patentee litigator on a case affected outcomes. We found no evidence (at a 95% confidence level) that either significantly influenced ultimate decisions regarding validity.[1] Our model controlled for factors such as the identity of the patentee, type of patent in question, level of court, lead litigator representing the party challenging the patent, and the patent itself (e.g., drafting, quality, etc.). These results suggest that parties without high-powered and expensive legal teams may not, after all, be at a disadvantage in the outcome of patent litigation.

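As a rough illustration of the kind of analysis of variance described above, the sketch below fits a model of validity outcomes against the listed factors and asks how much variation each one explains. The file and column names are assumptions for illustration; the post does not give the CIPP team's exact model specification.

```python
# A sketch of an ANOVA over validity outcomes; file and column names are
# hypothetical, and the actual CIPP model may have been specified differently.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("cipp_patent_cases.csv")  # hypothetical export of the database

# Model the validity outcome (0/1) against the judge and lead litigators,
# controlling for the patentee, the type of patent, and the level of court.
model = ols(
    "held_valid ~ C(judge) + C(patentee_litigator) + C(challenger_litigator)"
    " + C(patentee) + C(patent_type) + C(court_level)",
    data=df,
).fit()

# For each factor, compare the model with and without it; an F ratio with
# p > 0.05 means the factor explains no more variation than chance would.
print(sm.stats.anova_lm(model, typ=2))
```
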
On the other hand, some presumptions do hold true: of the 20 litigators listed first in 10 or more relevant cases decided on points of substantive law, there are fewer women (two) than men named Andrew (three). This gender imbalance appears throughout the database. One explanation may be that female litigators are more effective negotiators who tend to succeed in settling out of court. Another may be that this is further evidence of systemic bias in the profession: female litigators may themselves avoid intellectual property litigation; firms or clients may preferentially assign patent cases to male litigators; or the numbers may reflect a drop-off in the number of senior female litigators more generally. While the database cannot answer the question on its own, it offers an empirical window through which to analyse the role of gender in the patent field.

The CIPP database also allowed us to test Eli Lilly and Company’s assertion, in its NAFTA investor claim, that Canadian patent law underwent a significant change in the rules of utility beginning in 2005. According to Eli Lilly, this change led to increasing numbers of patents being invalidated, particularly for lack of utility. We examined this claim carefully. We first noticed that the number of final patent law decisions per year varied conspicuously, with the greatest number in 2006-2010. Second, the data show that a simple count of the cases in which a court held a claim invalid also changes noticeably from year to year, with the highest numbers in 2009 and 2010. Because both counts track the overall volume of litigation, simply counting cases in which courts held the claims in dispute valid or invalid is meaningless.

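The year-to-year variation can be seen with a simple tabulation like the one sketched below, again using hypothetical file and column names.

```python
# Counting decisions and invalidity holdings per year; `year` and
# `held_invalid` are assumed column names, not the actual CIPP schema.
import pandas as pd

df = pd.read_csv("cipp_patent_cases.csv")  # hypothetical export of the database

decisions_per_year = df.groupby("year").size()
invalid_per_year = df[df["held_invalid"] == 1].groupby("year").size()

# Both raw counts rise and fall with the overall volume of litigation, which
# is why the rate, not the count, is the quantity worth comparing over time.
print((invalid_per_year / decisions_per_year).fillna(0))
```
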
To examine Eli Lilly’s claim, we deployed some basic statistical methods to explore the relationship between time period (in our case, 2000-2005 versus 2006-2015), whether the final court held the claim invalid, and whether the courts specifically addressed the issue of utility. We focused only on final decisions with respect to a particular patent rather than intermediate court decisions. If Eli Lilly were correct, we would expect a significant interaction between the time period and whether the court held the claim invalid, all other things being equal. We would also anticipate a significant interaction between whether the court considered utility and whether it held the claim invalid.

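One way to test for such interactions is a logistic regression with an interaction term, sketched below; the post does not specify the exact test the CIPP team used, and the column names are assumptions.

```python
# A sketch of one possible interaction test; the CIPP team's actual method is
# not specified in the post, and all column names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cipp_patent_cases.csv")        # hypothetical export
final = df[df["is_final_decision"] == 1].copy()  # final decisions only
final["period"] = (final["year"] >= 2006).astype(int)  # 0: 2000-05, 1: 2006-15

# `period * utility_addressed` expands to both main effects plus their
# interaction. If Eli Lilly were right, `period` and the interaction term
# would be significant predictors of a claim being held invalid (0/1).
model = smf.logit("held_invalid ~ period * utility_addressed", data=final).fit()
print(model.summary())
```
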
The results are clear: we found no significant interaction between time period and invalidity and none between a court considering utility and a holding of invalidity. More specifically, we can draw the following conclusions based on our analysis:

  • There is no reason to think that a patent claim was less likely to be valid in 2006-2015 than in 2000-2005.[2]
  • Although courts engaged in a utility analysis more frequently after 2005,[3] there is no reason to believe that this affected rates of invalidity.[4]
  • The perception behind Eli Lilly’s assertions is more likely due to an increase in the absolute number of patent cases being litigated, and in the number of cases addressing utility, than to any underlying change in patent law.

The CIPP continues to pursue new research avenues arising from analysis of this database and intends to publish relevant findings in the future. The CIPP will also use this database as an empirically-based resource for other CIPP research projects in the patent law and policy field.

Raw data from the database can be found here.


Methodology

Trained research assistants noted biographical information along with the issues of substantive law discussed in each court’s judgment. We continually assessed and compared their coding to ensure inter-coder reliability and to avoid both random and systematic errors. In particular, the team leader coded cases as examples for coders to follow and updated the codebook to reflect the decisions made. Coders then coded a small number of pilot cases, which the team leader reviewed and compared. Coders discussed questions and concerns among themselves and with the team leader, and the codebook was updated accordingly. Coders then proceeded to code their assigned sections, and the team leader continued to update the codebook as new questions arose. Finally, as a further check on validity, we compared our coding with that of other, non-affiliated organizations, to the extent that they examined the same issues.

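The post does not name the reliability statistic used; a common choice for this kind of inter-coder check is Cohen's kappa, sketched below on toy data.

```python
# A toy illustration of inter-coder agreement using Cohen's kappa; the post
# does not say which formal reliability statistic, if any, CIPP used.
from sklearn.metrics import cohen_kappa_score

coder_a = ["invalid", "valid", "valid", "invalid", "valid"]
coder_b = ["invalid", "valid", "invalid", "invalid", "valid"]

# Kappa corrects raw agreement for the agreement expected by chance:
# 1.0 is perfect agreement, 0 is no better than chance.
print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")
```
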
We recorded detailed information about each decision in the database to allow for in-depth analysis of particular issues as well as to identify broad trends in recent Canadian patent law jurisprudence. Issues coded include the remedy requested and awarded, the general criteria for patentability addressed in the judgment (utility, non-obviousness, patentable subject matter, and novelty), and specific patent law tests (sound prediction, obvious to try, etc.). The database also records whether the court decided in favour of the patentee or the other party on each individual issue, and on the whole.


Funding

This project was funded by the PACEOMICS project (itself funded by Genome Canada, Genome Alberta, Genome Quebec, the Canadian Institutes for Health Research, and Alberta Innovates – Health Solutions), by the Social Sciences and Humanities Research Council, and by the Stem Cell Network Strategic Core Grant: Public Policy & ELSI Research in the Stem Cell Field: Enhancing Translational Stem Cell Research: Innovative Models for Multi-Sectoral Collaboration.


[1] Analysis of variance determines to what degree a given factor explains variation in the model. To do so, the test compares the model including the factor with the model excluding it. Though each factor will always be found to explain some degree of variation, the question is whether the variation a factor explains is so small that it might as well be due to chance alone. In evaluating the CIPP database, the main effect for the judge writing the majority decision yielded an F ratio of F(83, 43) = 1.2956, p > 0.05, indicating that the effect of the factor was not significant, i.e., the factor did not explain more variation in the model than would be expected by chance. The same analysis of the patentee’s lawyer yielded an F ratio of F(80, 11) = 1.1569, p > 0.05, again indicating that the effect was not significant.

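As a sanity check, the reported F ratios can be converted back to p-values with the F distribution's survival function:

```python
# Recovering p-values from the F ratios reported in note [1].
from scipy.stats import f

p_judge = f.sf(1.2956, 83, 43)   # judge writing the majority decision
p_lawyer = f.sf(1.1569, 80, 11)  # patentee's lead lawyer
print(p_judge, p_lawyer)         # both exceed 0.05, consistent with the text
```
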
[2] Period x Validity, p=0.8415 (not significant).

[3] Period x Whether Utility Addressed, p=0.0004 (significant).

[4] Period x Validity (controlling for utility being addressed), p=0.5379 (not significant).

Updated August 5, 2016 at 10:19.
