Patent Data Quality | @CloudExpo #BigData #Analytics #AI #MachineLearning

Is clean data a pipe dream?

The United States Patent and Trademark Office (USPTO) recently announced an expansion of PatentsView, its visualization tool for US patents. First launched a few years ago, the tool was intended to make 40 years of patent filing data freely available to those interested in examining "the dynamics of inventor patenting activity over time." Although it is limited to granted patents (not applications) and covers only the US, it offers some interesting visualizations around locations and citations.

In a blog post last month, USPTO director Michelle Lee said the PatentsView tool is based on "the highest-quality patent data available," connecting 40 years' worth of information about inventors, their organizations, and their locations in unprecedented ways. The newly revamped interface presents three user-friendly starting points - relationship, location, and comparison visualizations - which allow for deeper exploration and detailed views. However, through no fault of the agency's staff, the USPTO dataset is rife with spelling errors, doesn't reflect patent reassignments, and doesn't resolve company subsidiaries or acquisitions.

This issue is not unique to the USPTO. Other patent offices around the world face similar barriers to presenting "clean" data. The first issue, spelling errors, merely reflects the fact that assignee information (among other fields, like inventor names) is manually entered and hence prone to error and inconsistency. For example, "International Business Machines" has been spelled 1,200 different ways as a patent assignee over the last two decades in the USPTO data set.
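To make the spelling problem concrete, here is a minimal sketch of how assignee name variants can be clustered: normalize case, punctuation, and corporate suffixes, then fall back to a character-level similarity check for typos. The suffix list, threshold, and function names are illustrative assumptions, not Innography's or the USPTO's actual method.

```python
import re
from difflib import SequenceMatcher

# Illustrative (not exhaustive) list of corporate suffixes to strip
SUFFIXES = {"inc", "incorporated", "corp", "corporation", "co", "company", "ltd", "llc"}

def normalize(name: str) -> str:
    """Lowercase, drop punctuation, and remove common corporate suffixes."""
    key = re.sub(r"[^a-z0-9\s]", " ", name.lower())
    return " ".join(t for t in key.split() if t not in SUFFIXES)

def same_assignee(a: str, b: str, threshold: float = 0.9) -> bool:
    """Match on normalized equality, or high character similarity (catches typos)."""
    na, nb = normalize(a), normalize(b)
    return na == nb or SequenceMatcher(None, na, nb).ratio() >= threshold

# Variants of the same assignee cluster together:
assert same_assignee("International Business Machines Corp.",
                     "INTERNATIONAL BUSINESS MACHINES CORPORATION")
# Misspellings are caught by the similarity fallback:
assert same_assignee("International Business Machines",
                     "Internatonal Buisness Machines")
```

Real-world entity resolution at the scale of 1,200 variants requires more than this (abbreviations like "IBM", token reordering, blocking for performance), but the two-stage normalize-then-compare pattern is the common starting point.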

In addition, PTO data doesn't get corrected or updated based on later corrections or patent reassignments. For example, patent US8176440 was originally - and incorrectly - assigned to Silicon Labs. My company, Innography, filed a certificate of correction to update the assignment, yet the USPTO data and PatentsView still don't reflect this. In fact, Innography research shows that nearly 20 percent of US patents are reassigned in their lifetimes, translating into a significant number of company portfolio errors based on this factor alone.
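The reassignment problem can be handled by layering a correction table over the raw records, so a known correction always wins over the stale PTO value. This is a hedged sketch of that override pattern; the corrected assignee string below is a placeholder, since the article does not name the company US8176440 was actually reassigned to.

```python
# Hypothetical correction table: patent number -> corrected current assignee.
# US8176440's reassignment away from Silicon Labs is the example from the text;
# "Corrected Assignee Co." is a placeholder, not the real new owner.
CORRECTIONS = {"US8176440": "Corrected Assignee Co."}

def current_assignee(patent_id: str, raw_assignee: str) -> str:
    """Prefer a known correction over the raw PTO record."""
    return CORRECTIONS.get(patent_id, raw_assignee)

assert current_assignee("US8176440", "Silicon Labs") == "Corrected Assignee Co."
# Patents with no known correction fall back to the raw data:
assert current_assignee("US9999999", "Acme Corp") == "Acme Corp"
```

With roughly 20 percent of US patents reassigned during their lifetimes, a table like this grows large quickly, which is why maintaining it becomes a data-curation effort in its own right.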

Finally, PTO data also doesn't reflect when companies purchase each other, when there's a spinoff, or when a subsidiary files patents. Microsoft, for example, now owns all of LinkedIn's patents, even though the reassignments haven't been processed.
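Resolving subsidiaries and acquisitions amounts to walking an ownership graph up to each company's ultimate parent. The sketch below assumes a simple subsidiary-to-parent map; the LinkedIn/Microsoft entry is the acquisition named above, while the extra subsidiary is hypothetical.

```python
# Hypothetical ownership map: subsidiary -> parent.
# LinkedIn -> Microsoft is from the text; "Hypothetical Sub" is illustrative.
PARENT_OF = {
    "LinkedIn": "Microsoft",
    "Hypothetical Sub": "LinkedIn",
}

def ultimate_owner(company: str) -> str:
    """Follow parent links upward, guarding against cycles in the map."""
    seen = set()
    while company in PARENT_OF and company not in seen:
        seen.add(company)
        company = PARENT_OF[company]
    return company

assert ultimate_owner("LinkedIn") == "Microsoft"
# Ownership chains resolve transitively through intermediate subsidiaries:
assert ultimate_owner("Hypothetical Sub") == "Microsoft"
# Top-level companies resolve to themselves:
assert ultimate_owner("Microsoft") == "Microsoft"
```

Rolling every patent up to its ultimate owner this way is what turns a pile of assignee strings into an actual company portfolio.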

As a result, PTO data falls far short of reflecting reality, where patents and companies are bought and sold every day, and where data-entry errors exist and are corrected. The accuracy of the data is very low when it comes to representing company patent portfolios in the real world.

The Cost of Free Data
The USPTO aims to increase the transparency of patenting and invention processes. But if the quality of data and search results is questionable, what good is it to IP practitioners?

The patenting process generates rich information that supports economic research, prior-art searching, and the discovery of broader trends around filing patterns. However, that data was never intended to be used as-is to inform strategic business decisions such as in- and out-licensing, merger and acquisition activity, or portfolio pruning and maintenance.

It makes sense for PTOs to offer their data for free as a way to engage the community's interest in patenting processes. However, too many lightweight patent analytics tools use this flawed data verbatim to tout their "data quality" to IP professionals.

Many patent analyses, such as competitive benchmarking, acquisition analysis, and negotiation preparation, start with a company's patent portfolio. In addition, just about every board-level question about patents requires accurate patent ownership information: "Are we ahead of or behind this competitor?" "What companies should we be worried about in this technology area?"

Poor data quality makes it difficult, if not impossible, to answer those questions accurately. To create the most accurate data set possible, companies must use other sources of information to crosscheck and improve patent data accuracy.

Innography data scientists process more than 2,000 company acquisitions annually, and our users suggest another 5,000 updates each year. As a result, Innography has created more than 10 million data-correction rules over the last decade, which are continuously updated via machine learning and crowdsourcing.

Company leaders must be able to use patent reports to assess market opportunities and make strategic business decisions. This requires an IP analytics solution that reflects real-world changes, rather than relying on poor-quality, outdated PTO assignee information.

More Stories By Tyron Stading

Tyron Stading is president and founder of Innography, and chief data officer for CPA Global. He has been named one of the "World's Leading IP Strategists" by IAM, and one of National Law Journal's "50 Intellectual Property Trailblazers & Pioneers". Before Innography, Tyron was an IBM worldwide industry solutions manager in the telecommunications and utilities sector, and worked at several start-ups focused on mobile communications and network security. He has published multiple research papers and filed more than three dozen patents. Tyron has a BS in Computer Science from Stanford University and an MS in Technology Commercialization from The University of Texas.