Spammers ordered to pay $1 billion – best lesson for all spammers

Best lesson for all spammers: a recent news article reports that roughly 300 spammers were sued and fines totaling about $1 billion were handed down. An extract: Robert Kramer, whose company provides e-mail service for about 5,000 subscribers in eastern Iowa, filed suit against 300 spammers after his inbound mail servers received up to 10 million spam e-mails a day in 2000, according to court documents. U.S. District Judge Charles R. Wolle filed default judgments Friday against three of the defendants under the Federal Racketeer Influenced and Corrupt Organizations Act and the Iowa Ongoing Criminal Conduct Act.
AMP Dollar Savings Inc. of Mesa, Arizona, was ordered to pay $720 million and Cash Link Systems Inc. of Miami, Florida, was ordered to pay $360 million. The third company, Florida-based TEI Marketing Group, was ordered to pay $140,000.

Detecting duplicate and near-duplicate files – Google patent page

Check this page: it is the main page for Google’s duplicate-content patent. From the page, “Detecting duplicate and near-duplicate files”: This web page describes research I did for Google from 2000 through 2003, although mostly in 2000.

This work resulted in US Patent 6,658,423, by William Pugh and Monika Henzinger, assigned to Google. The information here does not reflect any information about Google’s business practices or technology, other than what is described in the patent. I have no knowledge as to whether or how Google is currently applying the techniques described in the patent. This information is not approved or sanctioned by Google, other than by giving me permission to discuss the research I did for them that is described in the patent.

The patent describes techniques to find near-duplicate documents in a collection. Google is obviously considering applying these techniques to web pages, but they could be applied to other documents as well. It might even be possible to apply them to sequences that are not documents (such as DNA sequences), although that raises some questions that aren’t covered here.
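The patent itself describes fingerprint-based methods; purely as a rough illustration of the general idea of near-duplicate detection (not the patented algorithm), here is a minimal word-shingling and Jaccard-similarity sketch with made-up example documents:

```python
def shingles(text, k=4):
    """Split text into overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Fraction of shingles the two documents share (1.0 = identical sets)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Two near-duplicate documents differing in one word.
doc1 = "the quick brown fox jumps over the lazy dog"
doc2 = "the quick brown fox leaps over the lazy dog"
sim = jaccard(shingles(doc1), shingles(doc2))
```

Documents whose similarity exceeds some threshold can then be flagged as near-duplicates; real systems fingerprint the shingles so they never compare full sets.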

I’ll get more information up shortly; for now, more information on Google’s patent for detecting duplicate files on the web is here.

Is Continuous PageRank Updating a Farce? Nice forum posting

A forum poster asks whether Google’s continuous PageRank updating is genuine or just an act. Here is his posting:

I would appreciate it if someone could shed some light on this “continuous updating” that Google is supposed to be doing. First, let me explain a few concepts the way I understand them:

1) Google has 8 billion pages in their index.

2) To calculate PageRank, Google must do several “iterations” through the data. On the first iteration (of 8 billion pages), Google has to see all the outbound links from ALL of the pages. On the second iteration, some pages gain rank because they have incoming links. But this is not enough; several more iterations must be completed in order to get a good reading and establish a stable PageRank.
3) The computing power required to do numerous iterations across 8 billion pages must be enormous.

4) Google uses “supplemental results” in addition to the main index, alluding to the idea that PageRank may only be established for the first 4 billion or so pages, and the rest is just used to “fill in”.

5) Before Google moved to only doing (allegedly visible) updates once per quarter, there were numerous problems with Google keeping to their monthly schedule. People would become alarmed.

6) Even before the quarterly updates, Google was using “Freshbot” to help bring in new data between monthly updates. Please check me on this: Freshbot results did not have PageRank.

7) We have been told that even though there is no update to what we see in the little green bar, there is actually a “Continuous PageRank Update”.
I find continuous updating of PageRank implausible. To get a true calculation requires calculations across the entire dataset of 8 billion pages multiple times. We have already seen signs that there were issues in the past (missed updates), attempts to remedy the problem (Freshbot), and use of additional measures to hide what is really going on (quarterly updates). Most likely, we are now in an age of “PageRank Lite”.

And here is the “kicker”: we have this mysterious “sandbox effect” (aka “March Filter”) that seems to place a penalty on new links and/or new sites. Could it be a result of Google’s failure to calculate PageRank across the entire dataset? IMHO, yes!
Quietly, Google has been building more datacenters.
Recently, they opened a new one in Atlanta, GA, but there was no public
announcement. There is not even a sign on the building. If you go up to the
building to the door, there is a voice on the intercom that also doesn’t tell
you that you are at a Google facility (source: Atlanta Journal Constitution).
With the number of datacenters Google has already, the main reason for
adding more is probably not uptime and dependability. Though these things are
important, they certainly have plenty of datacenters, and you rarely see
problems with downtime or server speed. The reason for adding these datacenters (quietly) must be that they need more computing power to calculate PageRank.

I believe I have provided many examples to support the idea that continuous updating of PageRank is indeed a farce. I also feel that this “sandbox effect” is a result of the inability to do complete calculations across the entire dataset (especially new additions to the data).
I look forward to hearing
what others think.
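The iterative calculation the poster describes can be sketched as a toy power-iteration PageRank. The link graph, damping factor, and iteration count below are illustrative only, not Google’s actual parameters:

```python
# Toy power-iteration PageRank over a tiny, made-up link graph.
links = {
    "a": ["b", "c"],   # page "a" links out to "b" and "c"
    "b": ["c"],
    "c": ["a"],
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}               # start uniform
    for _ in range(iterations):                      # each pass = one "iteration"
        new = {p: (1 - damping) / n for p in pages}  # random-jump baseline
        for page, outlinks in links.items():
            share = rank[page] / len(outlinks)       # split rank over outbound links
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
```

Even on three pages the values only settle after repeated passes, which is the poster’s point: at 8 billion pages, each pass touches the whole link graph.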

more here

Google introduces Google Deskbar Plug-in Development Kit

This is what Google says about this new tool:
What’s a Deskbar plug-in? The Google Deskbar plug-in is a simple extension mechanism for customizing your Google Deskbar. When you enter a search term and choose your plug-in from the menu, Deskbar passes your search term to your plug-in code, which can then return a specific URL to be displayed in a browser or mini-viewer window, or return text to be displayed directly in the Deskbar’s text box.
Users need to install the latest version of the Google Deskbar and version 1.0 or higher of the Microsoft® .NET Framework to use plug-ins. The easiest way for users to get the .NET Framework is to visit the Windows Update website. It’s a good idea to remind your users to install the latest versions of the Deskbar and the .NET Framework before they install your plug-in.
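Real Deskbar plug-ins are .NET classes; purely as a hypothetical illustration of the contract described above (term in, URL or text out — none of these names come from the actual SDK), the idea looks roughly like this:

```python
# Hypothetical sketch of the Deskbar plug-in contract: accept a search
# term, return either a URL to open or text to show in the Deskbar box.
# This is NOT the real .NET API, just an illustration of the idea.
def my_plugin(search_term):
    term = search_term.strip()
    if term.lower().startswith("define "):
        # Return text directly for the Deskbar's text box.
        word = term.split(" ", 1)[1]
        return {"text": f"(definition lookup for {word!r} would go here)"}
    # Otherwise return a URL for the browser or mini-viewer window.
    return {"url": "http://www.google.com/search?q=" + term.replace(" ", "+")}

result = my_plugin("google deskbar")
```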
more here,

Google to Put Millions of Books on the Internet news story

The Internet search company Google plans to put millions of library books online and make them searchable. This week, Google announced a project with the New York Public Library and the libraries of four universities: Stanford, Harvard, and the University of Michigan in the United States, and Oxford in England.

Stanford University and the University of Michigan have agreed to let Google copy their full collections. Michigan put some of its seven million books on the Web this week. Its full collection is about six years away. The New York Public Library says it will only provide Google with materials no longer under copyright restrictions. Oxford will offer only books published before the twentieth century. And Harvard University will provide just forty
thousand books at first.

Google’s rapid rise rewarded the open-minded – news story

As Google shares flirt with the $200 mark, investors have to wonder whether many of those buying the stock near that lofty level are money managers who once spurned it at half that price. Google (GOOG) shares have more than doubled since the Web search firm’s Aug. 18 initial public offering, when a modified Dutch auction priced the shares at $85. Google closed up $7.33, or nearly 4 percent, at $193.30 Thursday, after earlier rising to $194.39.
More news here.

Google Suggest – new feature from Google

Google has introduced a new feature named Google Suggest, a unique feature that shows a drop-down of suggested related queries as you type. This is what Google says about Google Suggest:

Today we launched Google Suggest, a new Labs project that provides
you with search suggestions, in real time, while you type. We’ve found that
Google Suggest not only makes it easier to type in your favorite searches (let’s
face it — we’re all a little lazy), but also gives you a playground to explore
what others are searching about, and learn about things you haven’t dreamt of.
Go ahead, give it a spin. (Note: the service was briefly taken down for maintenance.)
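Features like this typically rank completions that share the typed prefix against a log of popular queries. A minimal sketch — the query log and counts below are invented for illustration, and real systems rank over enormous logs:

```python
# Minimal prefix-based query-suggestion sketch. The log is made up.
query_log = {
    "google suggest": 120,
    "google search": 900,
    "google scholar": 300,
    "gmail": 700,
}

def suggest(prefix, log, limit=3):
    """Return up to `limit` logged queries starting with `prefix`, most popular first."""
    matches = [q for q in log if q.startswith(prefix.lower())]
    return sorted(matches, key=lambda q: -log[q])[:limit]

suggestions = suggest("google s", query_log)
```

Each keystroke re-runs the lookup, which is why suggestions update in real time as you type.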

Picasa is a photo-editing tool recently acquired by Google, and its forum helped people with support queries about the software. Recently the Picasa site came under hacker attack. News here:
Google’s Picasa Hacked Via Security Hole. Google’s Picasa was allegedly hacked into over the weekend. Zone-H reports that Picasa was victim to a security hole in its forum. Picasa is software for managing photos; the company was acquired by Google earlier this year.

The Picasa forum now redirects to the Google Groups beta version; the hack was most likely possible because of the security hole in the forum. A user named Prieni asks about this in the Picasa group: Hey there, what happened to the Picasa forum? I tried to access it as per normal and got the Picasa homepage. I thought “hmm, the server must be down or so” and tried again later. Now I tried it again, clicked on the Forum link, and ended up here. Very confusing. So my question is: can we still get into the forum? Or will this Google Group be a replacement? If so, what happens to the knowledge base reflected in the ‘old’ forum? A very confused (where are the emoticons?) Prieni

Answer from Google:
Hi Prieni,
We had to take the forums down temporarily, and we wanted to give the new Groups beta a whirl so people would have a place to go. =) We may return to the old forum style, or we might keep up the Group, but either way the old data and knowledge base should be back up online soon.
I guess we’re back to retro emoticons now… how 1993!
Lorna from Google