4th Circuit Sides with the Schoolmasters in On-line Plagiarism Detection Service Case

The complaint in Vanderhye v. IParadigms, LLC represented an interesting attempt to attack an on-line, computerized plagiarism detection service by accusing the service of copyright infringement. See Vanderhye v. IParadigms, LLC, 562 F.3d 630 (4th Cir. 2009).

IParadigms operates an on-line plagiarism detection service called Turnitin. Schools require students to submit writing assignments to Turnitin, which compares them to other writings in Turnitin’s database. The database contains other student papers, as well as commercial and academic journal articles. Turnitin supposedly creates a “fingerprint” of each student’s paper by applying various mathematical algorithms. Turnitin then compares this digital fingerprint to the fingerprints of the other works in its database and generates an “Originality Report” indicating the percentage of the student’s work that appears not to be original.
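The opinion does not describe Turnitin’s algorithms, but document-comparison services of this general kind are often built on “shingling”: hashing overlapping runs of words and measuring how many of a submission’s hashes already appear in an archive. The sketch below illustrates only that general technique; the shingle size, hashing scheme, and scoring are illustrative assumptions, not a description of Turnitin’s actual system.

```python
# Minimal sketch of n-gram "fingerprint" matching, loosely analogous to the
# process described above. This is NOT Turnitin's algorithm; the shingle size,
# hashing, and scoring are illustrative assumptions.
import hashlib

def fingerprint(text, shingle_size=5):
    """Hash every overlapping run of `shingle_size` words into a set of ints."""
    words = text.lower().split()
    shingles = (" ".join(words[i:i + shingle_size])
                for i in range(max(len(words) - shingle_size + 1, 1)))
    return {int(hashlib.sha1(s.encode()).hexdigest(), 16) for s in shingles}

def unoriginality(submission, database):
    """Percentage of the submission's fingerprints already found in archived papers."""
    sub_prints = fingerprint(submission)
    archived = set().union(*(fingerprint(doc) for doc in database)) if database else set()
    if not sub_prints:
        return 0.0
    return 100.0 * len(sub_prints & archived) / len(sub_prints)

# Example: a paper that copies a sentence from an archived paper scores well above zero.
print(unoriginality("The quick brown fox jumps over the lazy dog today",
                    ["Reports say the quick brown fox jumps over the lazy dog"]))
```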

With permission from participating schools, Turnitin will place the submitted writing assignments into its database, so that they become part of the database used to evaluate the originality of subsequent student papers. The plaintiffs included three students who had submitted their papers to Turnitin for testing and whose papers were then archived in the database. The plaintiffs alleged that Turnitin’s inclusion of their papers in the database constituted copyright infringement.

The Fourth Circuit’s analysis focused on the “fair use” doctrine, which it characterized as “a privilege in others than the owner of the copyright to use the copyrighted material in a reasonable manner without the [copyright holder’s] consent.” 17 U.S.C. § 107 provides that fair use includes “criticism, comment, news reporting, teaching . . . scholarship or research.” The statute provides a four-factor test to determine whether the use is fair.

The Court rejected the argument that the commercial nature of IParadigms’ use of the student papers compelled a finding that the use was unfair. Looking at the four-factor test in the statute, the Court instead found that Turnitin’s use of the student papers was transformative, since the papers were being used in the Turnitin database for a different purpose than the one for which they were originally written. It further found that Turnitin’s use of the papers did not discourage, but rather encouraged, creative expression. While Turnitin used the whole of the plaintiffs’ works, its use was so transformative that this factor was not decisive. Finally, the Court found no substantial evidence that Turnitin’s use of the papers in its database would affect the market for them.

It is not surprising that the 4th Circuit took the side of the schoolmasters in this case. Of course, any time a Court rules that it is OK for a third party to copy and then use an entire copyrighted work, it will raise eyebrows in some quarters.

On-line Privacy Update: FTC Uses Its Mandate to Expand Reach of Consumer Data Security Laws to Non-Financial Businesses

The Federal Trade Commission (FTC) is increasingly using its broad powers to require businesses to enact privacy measures to protect their customers’ personal data. According to the FTC, all companies must “maintain reasonable and appropriate measures to protect sensitive consumer information.” And the FTC is ready and willing to step in and make them implement such measures — regardless of whether Congress has enacted a specific statute requiring the business to do so.

When most people think about the Federal Trade Commission (FTC), they think about a federal agency that fights monopolies or big consumer frauds. However, the FTC Act, the statute that created the FTC, gave it a very broad mandate: “to prevent persons, partnerships or corporations . . . from using unfair methods of competition in or affecting commerce and unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a)(2). In the digital media world, throughout the past decade, the FTC has used this vague “unfairness” mandate to require consumer-based businesses to enact data security measures.

There are federal laws that impose data security requirements, such as the Fair Credit Reporting Act (15 U.S.C. § 1681e) and the Gramm-Leach-Bliley Act (15 U.S.C. § 6801 et seq.). These laws apply to financial institutions and credit reporting agencies. However, in its recent enforcement actions, the FTC has begun to apply these data security rules to consumer businesses as a whole. (Fn1) According to a June 17, 2009 statement by the FTC to the U.S. House (Fn2), since 2001 the FTC has brought 26 cases against businesses that allegedly failed to protect consumers’ personal information. These include cases against Microsoft, TJX, LexisNexis, Tower Records, Petco, Reed Elsevier, CVS and Compgeeks.com. None of these companies would commonly be considered financial or credit reporting companies.

The legal authority for the FTC’s actions in each case differed, but in some cases, such as the TJX and Compgeeks.com cases, it rested solely on the FTC’s broad mandate to fight “unfairness.” (Fn3) Nevertheless, the terms of the consent orders reached in both cases imposed on TJX and Compgeeks.com the same obligations required of financial companies under the Gramm-Leach-Bliley Act. Both consent orders required the implementation of “a comprehensive information security program that is reasonably designed to protect the security, confidentiality, and integrity of personal information collected from or about consumers.” This is language taken directly from 16 C.F.R. § 314.3, the FTC’s rules implementing Gramm-Leach-Bliley.

The FTC complaints in its cases against non-financial businesses “have alleged such practices as the failure to (1) comply with posted privacy policies; (2) take even the most basic steps to protect against common technology threats, (3) dispose of data properly, and (4) take reasonable steps to ensure that they do not share customer data with unauthorized third parties.” According to the FTC, “all of the cases stand for the principle that companies must maintain reasonable and appropriate measures to protect sensitive consumer information.”

Some may wonder about the breadth of the FTC’s powers. However, prior case law has held that the FTC is not limited to enforcing the specific statutes that Congress has enacted elsewhere. To the contrary, the FTC has the power to declare a previously lawful practice unfair or deceptive, thereby making it illegal.

Update on Proposed California Efficiency Standards for TVs: Given the Efficiency of our Market System, Does Consumer Demand for Green Technology Make this Regulation Unnecessary?

Several months ago, the California Energy Commission made big news by announcing that it was considering new energy efficiency standards for televisions. California’s current regulations apply only when a television is in “stand-by” mode and limit such stand-by power usage to 3.0 watts. The current rules also apply only to stand-alone TVs designed to receive broadcast signals; they do not apply to combination TV/DVD or VCR units or to computer monitors.

The proposed rules were based on recommendations from Pacific Gas & Electric, a large California utility. They would regulate TVs in both their stand-by and “on” modes and would apply to combination as well as stand-alone TVs. They would not cover computer monitors, a significant exception given the increasing encroachment of computer monitors into the entertainment space. The new rules would require significantly reduced power usage: in stand-by mode, power usage would be limited to 1.0 watt. In “on” mode, power usage limits would be based on screen size, ultimately according to the following formula: 0.12 watts × screen area (in square inches) + 25 watts.
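As a rough, hypothetical illustration of what that formula implies (my arithmetic and example screen sizes, not the Commission’s): a 42-inch 16:9 screen has an area of roughly 750 square inches, so its on-mode cap would be about 0.12 × 750 + 25 ≈ 115 watts.

```python
# Hypothetical illustration of the proposed on-mode cap: 0.12 W per square inch
# of screen area, plus 25 W. The example diagonals below are my own choices.
import math

def on_mode_limit_watts(diagonal_inches, aspect=16 / 9):
    height = diagonal_inches / math.sqrt(1 + aspect ** 2)
    width = height * aspect
    return 0.12 * (width * height) + 25.0

for diag in (32, 42, 52):
    print(f'{diag}-inch set: limit is about {on_mode_limit_watts(diag):.0f} W')
# Roughly: 32-inch ~78 W, 42-inch ~115 W, 52-inch ~164 W
```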

Immediately after the new proposed rules were announced, the major consumer electronics players, such as the Consumer Electronics Association (“CEA”), cried foul. The typical objection was that the new rules would primarily impact larger-sized and more richly-featured LCD and plasma TVs. Because these sets carry higher profit margins, the new rules could have a devastating impact on TV manufacturers and installers.

The California Energy Commission, which planned to move slowly on these regulations, has continued to seek and accept public comment. One such submission, from the CEA, released by the Commission on June 12, 2009, suggests that the CEA intends to mount a court challenge if the Commission moves forward with the proposed standards.

Inside Rescuecom Corp. v. Google, Inc.: Does Google’s Use of Trademarks to Trigger Advertisements from Competitors Violate Federal Trademark Laws?

The point of the spear in digital media law may be turning from copyright to trademark. As website operators take advantage of recent court decisions, such as Perfect 10, to provide access to third party content with less fear of a copyright suit, content providers are looking to other intellectual property laws to protect their work. The Rescuecom v. Google (Fn1) case is an example of such an attack, here concerning Google’s use of third party trademarks as keywords in internet searches.

Rescuecom filed suit over Google’s use of Rescuecom’s trademark in Google’s search engine. At the time the suit was filed (Fn2), when a Google user entered an entity’s name or trademark, Google provided two types of results. First, it provided a list of links to websites, listed in the order that Google’s algorithms deemed to be of descending relevance to the user’s search term. (The search results were generally found in a column on the left side of a user’s screen.) Search results would typically begin with a link to a site owned by the trademark holder, followed by a list of other links that Google’s algorithms also deemed relevant to the search term. Second, Google would also provide context-based advertising. These are the “Sponsored Links”, which in my experience show up in a narrower column on the right side of a user’s screen.

Google used a couple of programs to offer these “context-based” links to advertisers: AdWords and the Keyword Suggestion Tool. AdWords permitted an advertiser to purchase keywords. The advertiser’s ad would appear in the “Sponsored Links” section of a user’s screen whenever the purchased keyword was entered as a search term. The advertiser would then pay Google based on the number of times users clicked its ad.

Google’s Keyword Suggestion Tool would provide hints to advertisers wishing to purchase keywords as to other useful words that they could purchase. If advertiser X, a furnace repair company, purchased the keyword “furnace repair”, the tool might also suggest that it purchase the term “Y”, the brand name and trademark of a competing furnace repair company. This would permit advertiser X’s ad to appear on Google’s website whenever a user searched for company Y’s brand name and trademark.
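Mechanically, the arrangement described in the complaint amounts to a lookup table from purchased keywords to advertisers’ ads. The toy sketch below is meant only to make that relationship concrete; the advertiser names, keywords, and ad text are hypothetical, and this is of course not Google’s actual code.

```python
# Toy model of keyword-triggered "Sponsored Links" as described above.
# Everything here is hypothetical; it is not Google's implementation.
from collections import defaultdict

purchased_keywords = defaultdict(list)  # search term -> list of (advertiser, ad text)

def buy_keyword(advertiser, keyword, ad_text):
    """Record an AdWords-style keyword purchase."""
    purchased_keywords[keyword.lower()].append((advertiser, ad_text))

def sponsored_links(search_term):
    """Ads triggered when a user enters this search term."""
    return purchased_keywords.get(search_term.lower(), [])

# Advertiser X buys a generic term and, as the suggestion tool might prompt,
# also buys rival Y's brand name.
buy_keyword("X Furnace Repair", "furnace repair", "Fast furnace repair by X")
buy_keyword("X Furnace Repair", "Y Heating", "Fast furnace repair by X")

# A user searching for the brand "Y Heating" now sees X's sponsored link.
print(sponsored_links("Y Heating"))  # [('X Furnace Repair', 'Fast furnace repair by X')]
```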

Rescuecom claimed that through the use of these tools, its competitors’ ads would appear when users were searching for “Rescuecom” on Google. It alleged that as a result, users were deceived and diverted from Rescuecom to these other competing firms. Rescuecom sued, claiming that this practice violated the Lanham Act (federal trademark law).

Based on older 2nd Circuit precedent, the District Court dismissed the suit on Google’s 12(b)(6) motion. (Fn3) However, on April 3, 2009, over a year after it heard the case, the 2nd Circuit reversed.

Six Years After CAN-SPAM: Effective Spam Control Can Require Both Technical and Litigation Solutions

CAN-SPAM (15 U.S.C. §§ 7701-7713) was enacted in 2003 in response to a national hue and cry over spam. At the time, unsolicited commercial email was estimated to account for half of all electronic mail traffic. According to the Congressional “findings” in the preamble to the Act, the sheer quantity of spam was doing real damage to the internet, creating costs for storing, accessing, reviewing and discarding unwanted emails, and reducing the reliability and usefulness of electronic mail to the recipient. The findings further stated that “The growth in unsolicited commercial mail imposes significant monetary costs on providers of Internet access services, businesses and educational and nonprofit institutions that carry and receive such mail, as there is a finite volume of mail that such providers, businesses, and institutions can handle without further investment in infrastructure.” 15 U.S.C. § 7701(a).
Given these findings, one would think that CAN-SPAM would impose onerous penalties on spammers. Au contraire, mon frere! Instead of “canning” spam, the act became known as the “Yes, You CAN SPAM Act.” In fact, the Act does nothing to outlaw the sending of unsolicited emails per se.
Rather, the sending of unsolicited emails is permitted as long as a few basic rules are followed. In general: (i) the “from” and “subject matter” lines in the header must be accurate, relevant to the subject matter of the email and not misleading; a commercial advertiser must also provide its physical address, and a label must be present if the email contains adult content; (ii) the email must contain an “opt-out” mechanism, which must be honored within 10 days; and (iii) the email must not be sent to an email address obtained through “address harvesting” or a “dictionary attack” and must not be sent via automatically created email accounts or a computer network to which the sender has gained access without authorization.
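For a sender, these requirements reduce to a short pre-send checklist. The sketch below is a simplified illustration of that checklist under the assumption that each condition has already been reduced to a flag on the outgoing message; the field names are my own shorthand, not statutory terms, and none of this is legal advice.

```python
# Simplified pre-send checklist loosely tracking the CAN-SPAM rules summarized
# above. Field names are illustrative shorthand, not statutory language.
from dataclasses import dataclass

@dataclass
class OutgoingEmail:
    header_accurate: bool      # "from"/"subject" lines truthful and not misleading
    physical_address: str      # sender's physical postal address (empty if missing)
    is_adult_content: bool     # message contains adult content
    adult_label_present: bool  # required label, if the content is adult
    has_opt_out: bool          # working opt-out mechanism included
    recipient_opted_out: bool  # recipient previously opted out
    address_harvested: bool    # address obtained by harvesting or a dictionary attack

def can_spam_problems(msg):
    """Return a list of apparent rule violations for an outgoing message."""
    problems = []
    if not msg.header_accurate:
        problems.append("header lines must be accurate and not misleading")
    if not msg.physical_address:
        problems.append("a commercial advertiser must include its physical address")
    if msg.is_adult_content and not msg.adult_label_present:
        problems.append("adult content must carry the required label")
    if not msg.has_opt_out:
        problems.append("an opt-out mechanism is required")
    if msg.recipient_opted_out:
        problems.append("opt-out requests must be honored within 10 days")
    if msg.address_harvested:
        problems.append("harvested or dictionary-attack addresses may not be used")
    return problems
```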
Another important element of CAN-SPAM is that it provides that “any statute, regulation, or rule of a State . . . that expressly regulates the use of electronic mail to send commercial messages” is “superseded” — i.e., preempted. This means that states cannot enact laws that are expressly directed at preventing the sending of unsolicited email messages or at reducing the quantity of email messages that can be sent by a single person. In other words, CAN-SPAM means that the federal government has refused to prevent spamming per se and has declared that the states can’t do it either (unless the spam is accompanied by “falsity or deception”). The effect is that much of the job of preventing spam per se is in private hands.

U.S. SAFE WEB Act Used by FTC to Prevent U.S. Exporter from Pretending to Be U.K.-Based Site

Internet fraud update: Under the FTC Act, the Federal Trade Commission is empowered to prevent businesses from using unfair methods of competition or engaging in unfair or deceptive practices. 15 U.S.C. § 45(a)(2). However, under the version of the FTC Act that existed prior to 2006, the FTC did not have the authority to regulate such practices unless the business involved “commerce” (i.e., sales, shipments) within the United States. (Fn1) This meant that a business that was solely engaged in the export of goods to countries outside the U.S. was not subject to the FTC’s jurisdiction.

With the rise of the Internet, it became easy for businesses to set up shop in the U.S., but limit their business solely to export to other countries, and thus avoid FTC prosecution for unfair and deceptive trade practices. Because the FTC’s ability to share information about U.S. residents with foreign prosecutors was also limited, this meant that a lot of bad behavior by exporters went unchecked. According to the FTC, this could have made the United States a “haven for fraud.”

In December 2006, Congress passed the U.S. SAFE WEB Act, which amended the FTC Act to fill these loopholes. The U.S. SAFE WEB Act permits the FTC to provide investigative assistance to foreign law enforcement agencies, including conducting investigations to collect information and evidence for these foreign agencies. 15 U.S.C. § 46(j). It also permits the FTC to share investigative materials, such as documents, written reports or answers to questions and transcripts of oral testimony with foreign law enforcement agencies. 15 U.S.C. § 57b-2(6).

In addition, the Act expanded the FTC’s jurisdictional reach to permit it to directly regulate acts involving foreign commerce that: (i) cause or are likely to cause reasonably foreseeable injury within the United States; or (ii) involve material conduct within the United States.

Since the law was signed, the FTC has reported using it in only one prior investigation, which was concluded earlier this year. (For a discussion of this case, see our blog post of July 17, 2009.) The FTC has recently announced the second use of the U.S. SAFE WEB Act in its regulatory action against Los Angeles-based Jaivin Karnani and his company Balls of Kryptonite, LLC (“Karnani”).

According to the FTC’s complaint, Karnani operates two websites, www.bestpricedbrands.co.uk and www.bitesizedeals.co.uk, which sell consumer electronics, such as cameras, video game systems, and computer software, exclusively to customers in the United Kingdom. (Fn2) By using the “.co.uk” suffix, stating prices in pounds sterling, referring to the “Royal Mail” and using U.K. addresses, the websites gave U.K. customers the impression that they were located in the U.K. and subject to U.K. consumer protection laws.

The complaint also alleged that Karnani’s websites didn’t deliver what they promised. Customers were shipped goods with power chargers that were not compatible with U.K. power systems. Because the goods shipped were not manufactured for the U.K. or E.U. markets, customers did not receive manufacturer warranties. Goods were shipped slowly and customer complaints about this slowness were ignored. Customers were also charged high restocking fees.

Security Experts: Health Data Increasingly Being Sold on Black Market

Consumer health data are increasingly being sold on the black market as health care organizations become popular targets for hackers, NPR’s “all tech considered” reports.

Background

According to Symantec, a security firm, health care companies experienced a 72% increase in cyberattacks between 2013 and 2014. There have been more than 270 public disclosures of large health data breaches — which firms are required to disclose — over the past two years, according to “all tech considered.”

Black Market for Health Data

Meanwhile, health data have increasingly been appearing on the black market, with such information often being more costly to purchase than certain financial data. While stolen credit card numbers tend to be sold for a few dollars or even quarters, a set of Medicare ID numbers for 10 beneficiaries found online by Greg Virgin, CEO of the security company RedJack, was being sold for 22 bitcoins, or about $4,700.

Stolen health information available for purchase cannot be found through simple Google searches, and websites offering such data tend to have names that end with .su and .so, as opposed to .com or .org. Some sites for criminal sales feature online rating systems, similar to Yelp, that let the buyer know if the seller is “legit.”

Insufficient Cybersecurity Measures

Meanwhile, security experts say that the cybersecurity measures put in place by health care organizations are not sufficient to adequately combat cyberattacks.

According to “all tech considered,” companies that are subject to HIPAA tend to interpret HIPAA rules loosely.

Jeanie Larson, an expert on health care security, noted that many health care organizations “do not encrypt data within … their own networks.”

In addition, Orion Hindawi — co-founder and chief technical officer at Tanium, a computer network monitoring company — said that some health care organizations are not aware of how large their networks are, including how many computers they have.

The National Healthcare and Public Health Information Sharing and Analysis Center, an industry group Larson is a part of, is pushing for hospitals to invest in cybersecurity to a similar degree as banks. She said, “The financial sector has done a lot with automating and creating fraud detection type technologies, and the health care industry’s just not there” (Shahani, “all tech considered,” NPR, 2/13).

Share With Litigants: Court Orders Social Network Posts Disclosed

A personal injury case in Suffolk County recently became New York’s testing ground for the disclosure of information posted on Facebook and MySpace.  In Romano v. Steelcase Inc., the defendant demanded access to the private portions of the plaintiff’s social networking sites, including deleted information.  The defendant contended the information would refute the plaintiff’s claims about the extent of her injuries.  The plaintiff opposed the defendant’s request on the ground that the disclosure would violate her right to privacy.

Justice Jeffrey Arlen Spinner agreed with the defendant and granted the discovery motion.  Finding no New York precedent on this issue, the court cited case law from Colorado and Canada to support its decision.  In rejecting the plaintiff’s privacy claims, Justice Spinner observed that the very purpose of social networking sites is to share “personal information” with others.  Therefore, since the plaintiff “knew that her information may become publicly available, she cannot now claim that she had a reasonable expectation of privacy.”

The court based its decision largely on the fact that the plaintiff voluntarily posted the information she was seeking to protect.  As most social networkers know, however, any of your “Friends” can post information about you (or photos of you) on their pages and there’s not much you can do to stop them.  Even if you convince them to remove the information, the history and deleted files are likely to be available.  It will be interesting to see how courts will treat the disclosure of information posted by third parties and how privacy arguments will fare in those cases.

Romano v. Steelcase serves as yet another cautionary tale about posting information on the Internet.  Even if you delete a compromising photograph or status update, it could be disclosed to your adversary in litigation and used as evidence against you in a lawsuit. While Facebook members and Internet commenters have spent countless hours and immeasurable bandwidth debating Facebook’s privacy settings, in many ways that entire controversy is a red herring.  Nothing you post on a social networking site is truly private.

– Nicole Hyland

Right Wing Cyber Attacks on Healthcare.gov Website Confirmed

The House Homeland Security Committee recently posted a video on its YouTube account highlighting part of the committee’s questioning of Roberta Stempfley, acting assistant secretary of DHS’s Office of Cyber-Security, who confirmed 16 attacks on the Affordable Care Act’s (ACA) website in 2013.

One successful attack Stempfley pointed to was designed to deny access to the site. Called a Distributed Denial of Service, or DDoS, this form of attack is intended to make a network unavailable by repeatedly accessing its servers and saturating them with more traffic than they were designed to handle.
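To make the mechanism concrete: a server can handle only so many requests per second, and once incoming traffic exceeds that capacity, a backlog builds and legitimate users are delayed or turned away. The toy simulation below only illustrates that saturation effect; the traffic and capacity figures are invented for illustration.

```python
# Toy illustration of why flooding a server denies service: once arrivals
# exceed capacity, the unserved backlog grows every second. Figures are invented.
def backlog_over_time(requests_per_sec, capacity_per_sec, seconds):
    backlog, history = 0, []
    for _ in range(seconds):
        backlog = max(backlog + requests_per_sec - capacity_per_sec, 0)
        history.append(backlog)
    return history

print(backlog_over_time(500, 600, 5))   # normal load: [0, 0, 0, 0, 0]
print(backlog_over_time(5000, 600, 5))  # flooded: [4400, 8800, 13200, 17600, 22000]
```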

Right-wingers have distributed links to the tools needed to perform the attacks. InformationWeek and other sites reported that the tools had been circulated via social media.

“Destroy Obama Care” was the name given to the attack by individuals calling themselves “right wing patriots.”

The message distributed said: “This program displays an alternative page of the ObamaCare website and has no virus, Trojans or cookies. The purpose is to overload the site so as to deny service and possibly crash the system.”

Some news sites have covered the attack, and Congress held hearings to discuss it. But even though the mainstream media are aware of the problem, they have largely ignored it while continuing to report that the site does not work.

Proposed HIPAA privacy rule on gun background checks open for comments

An advance notice of proposed rulemaking by the Office for Civil Rights of the Department of Health and Human Services, titled “HIPAA Privacy Rule and the National Instant Criminal Background Check System,” was published yesterday in the Federal Register.

Drafted following Executive Actions signed by President Barack Obama in January, the notice claims “Concerns have been raised that, in certain states, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy Rule may be a barrier to States’ reporting the identities of individuals subject to the mental health prohibitor to the NICS.”

Absent from that summary explanation is any identification of who raised those concerns, how widespread they are, and whether they reflect a political agenda driven by government officials and special interest groups.

“The Department … is issuing this Advance Notice … to solicit public comments on such barriers to reporting and ways in which these barriers can be addressed,” the notice states. “In particular, we are considering creating an express permission in the HIPAA rules for reporting the relevant information to the NICS by those HIPAA covered entities responsible for involuntary commitments or the formal adjudications that would subject individuals to the mental health prohibitor, or that are otherwise designated by the States to report to the NICS.

“In addition, we are soliciting comments on the best methods to disseminate information on relevant HIPAA policies to State level entities that originate or maintain information that may be reported to NICS,” the summary continues. “Finally, we are soliciting public input on whether there are ways to mitigate any unintended adverse consequences for individuals seeking needed mental health services that may be caused by creating express regulatory permission to report relevant information to NICS.

“The Department will use the information it receives to determine how best to address these issues,” it declares.

Gun Rights Examiner addressed this development on Monday, along with a “clarification” of the Attorney General’s powers “for purposes of permanent import controls” of defense articles and services. That report reminded readers of an ongoing action in New York, where it has been alleged the State Police are cross-referencing medical records with handgun owner permit lists in apparent partnership with the Department of Homeland Security.

The HHS Advance Notice invites public commentary, giving alternative ways for citizens to communicate their concerns, but perhaps the best way is to simply fill out their online form (via “Comment Now” button at Regulations.gov). Note that comments must be submitted on or before June 7. But that is only the first step concerned gun rights advocates must take.

As the “Authorized Journalists” of the “legitimate media,” who time and again demonstrate they are hardly disinterested players, will hardly be inclined to play government watchdog on this, it’s up to the same gun groups and online activists who mobilized in the face of the Senate gun threat to once more pick up the burden. That means spreading this news and getting others to follow suit; it means keeping up with developments as those with legal knowledge assess likely outcomes; and it means pressuring representatives in the legislature to provide oversight in the interests of rights, of the separation of powers, and, as a telling curiosity, of determining exactly where in the Constitution any of this has been delegated to the Executive, that is, where any of this would be even remotely lawful under the federal system established by the Framers.

Originally posted on Examiner