Al Gilman | 2 Nov 15:41 1998

Re: Extension path and fuzzy sets

Have we thought about how resource properties play in a web
search scenario?  Here is my Internet printing scenario: I am in
a hotel in a foreign city, and I have discovered I need to
produce a fresh version of my poster for the convention.  I want
to know all my options for printers that can handle the job, and
to compare price and delivery for the available choices.  The
search engine does not implement GIS, so I have to do the last
part of the selection process by hand: checking whether each
printer will deliver to my hotel.

In the search scenario, the ultimate server does not know what
resources its resources are going to be compared against.  And
the ultimate client does not know what servers are going to be
surveyed for resources fitting the demand articulated by the
client.

The search engines make the servers compete to characterize their
resources in whatever way will move the server's resource toward
the head of the list in the response to more queries.  And the
competition between search engines means that they are constantly
looking for better ways to boil 20K choices down to the top 20.

If the MEDFREE architecture doesn't leave room for the search
engines to innovate in protocol (hit-list-reduction intelligence),
then it will be passed over for something that does.  I thought
that is why there is a registry: because the winning resource
discovery protocol (including its underlying information model)
can't be predicted at this time.

How to extract preferences from users is an art that is still
growing.  Cardinal q-factors are not something we can, at this
time, eliminate from the competitive range in this race.  They
don't have to be right all the time to win this competition; they
only have to be right more often than the cruder assumption.
The critical performance factor for a search engine is: If what
the user wants is available at all, how often do they find it in
the first page of hits that the search engine returns?
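
To make the cardinal-versus-Boolean point concrete, here is a toy
sketch in Python.  The feature names, weights, and numbers are all
mine, invented for illustration; nothing here is proposed syntax.

    printers = [
        {"name": "QuickPrint", "dpi": 300, "price": 40, "days": 2},
        {"name": "PosterPro",  "dpi": 600, "price": 65, "days": 1},
        {"name": "CopyShack",  "dpi": 150, "price": 20, "days": 3},
    ]

    # Boolean form: a hard cut.  A candidate failing one clause
    # vanishes from the hit list entirely.
    boolean_hits = [p for p in printers
                    if p["dpi"] >= 300 and p["days"] <= 2]

    # Cardinal form: every candidate gets a quality score, and the
    # hit list is sorted rather than truncated.
    def q(p):
        q_dpi   = min(p["dpi"] / 600.0, 1.0)   # more dots, better
        q_days  = 1.0 / p["days"]              # sooner, better
        q_price = min(20.0 / p["price"], 1.0)  # cheaper, better
        return 0.5 * q_dpi + 0.3 * q_days + 0.2 * q_price

    ranked = sorted(printers, key=q, reverse=True)

The sorted list degrades gracefully where the Boolean cut throws
information away; being right more often than the hard cut is all
the cardinal form has to achieve.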

In fact, this competitive model almost guarantees that the
specific facts that the search filters care about will change
rapidly -- tracking the shifts in the services markets.  Once all
competitive servers offer something, it no longer is a factor in
the competition.  So the competition among servers is focused on
which servers can implement new stuff faster.  As features become
commoditized, they get rolled into named profiles of de rigueur
capability minimums.  And the information that distinguishes
competitors is articulated factor by factor.  This is the
information that will flow in negotiation dialogs: not a static
canon of information categories, but the information that is
useful in distinguishing among competing offers _this week_.
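
As a sketch of what I mean by a named profile plus per-factor
distinctions (the profile name and feature names are invented
here, not drawn from any registry):

    # A commoditized baseline rolled up into a named profile; a
    # competitor is then described only by the factors that still
    # distinguish it.
    PROFILES = {
        "poster-print-baseline": {"dpi": 300, "max_width_cm": 90,
                                  "color": True},
    }

    def full_description(profile_name, deltas):
        # Only the deltas need to travel in the negotiation dialog.
        return {**PROFILES[profile_name], **deltas}

    server = full_description("poster-print-baseline",
                              {"dpi": 600, "delivery_days": 1})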

Al

A few details are inlined below:

to follow up on what Ted Hardie said:

> On Oct 31,  9:00am, Al Gilman wrote:
> > What is growing is the richness of vocabulary used in the middle
> > of this process.  The start point: a resource is an entity, and
> > the end point: a resolution is a choice, are not changing.  But
> > as we add primitives and structures to how we characterize the
> > entities, we should also allow for the addition of functions that
> > can be used as we extract choices from complex characterizations.
> > It _can_ all be done in Boolean forms, but we are at risk of
> > requiring very _compound_ assertions in the meta-dialog if we
> > don't allow the use of _complex_ operators in the statement of
> > policies.
> 
> I like your description of the problem as one in which we describe
> resources and resolutions as choices, but I would like us to think
> a bit more deeply about the meta-dialog as a process for a moment.

Key disconnect in assumptions:
> In any process of this type, the choice is made with only the information
> which has been explicitly provided,

There are two scenarios I am aware of, both already big business,
that do not fit this pattern: Windows Explorer and Internet
search engines.  In each of these cases, the information
that is returned is not explicitly provided, but computed on the
fly by executing a query with the information obtained from the
ultimate client.  And servers may make choices involving
information not shared with the client, and clients may make
choices based on information not shared with the server.
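
A toy sketch of that asymmetry (all names and numbers are mine,
invented for illustration):

    # Public data the server advertises, and private data each
    # side keeps to itself.
    CATALOG = {"QuickPrint": {"dpi": 300, "price": 40},
               "PosterPro":  {"dpi": 600, "price": 65}}
    MARGIN = {"QuickPrint": 0.3, "PosterPro": 0.5}  # server-private

    def server_answer(min_dpi):
        # Computed on the fly from the client's query, and ordered
        # partly by information the client never sees.
        offers = [n for n, f in CATALOG.items() if f["dpi"] >= min_dpi]
        return sorted(offers, key=lambda n: MARGIN[n], reverse=True)

    budget = 50                                     # client-private
    hits = server_answer(300)
    choice = next((n for n in hits
                   if CATALOG[n]["price"] <= budget), None)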

>                                ... but how that information is
> determined can vary quite a bit.  If the agent chooses based solely
> on the information provided with/about the resource, you have
> one set of constraints on the meta-dialog.  If the agent requests the
> entity based on its constraints (without knowing anything about the
> media features of the entity), then you have a different set of constraints.
> 
> Is the effectiveness of compound assertions versus complex operators
> relative to the type of meta-dialog?  If so, which favors which?  If not,
> what else about the dialog (latency, device processing and memory
> characteristics, etc) favors one over the other?

Growth in the power of preference expression is necessary so that
the technology supports _a range of options_ as to where decisions
are made, and on what information bases.  Only then can the
market find its own level.  If the technology stands in the way
of the market finding equilibrium, another technology will do
better and win.
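
One toy example of the compound-versus-complex tradeoff (the
operator and the numbers are mine, not proposed protocol
elements):

    candidates = [150, 300, 600, 1200]  # dpi offered by competitors

    # Compound Boolean: "prefer about 300 dpi" forced into a
    # cascade of interval clauses, one per tolerance band.
    preferred = ([d for d in candidates if 250 <= d <= 350]
                 or [d for d in candidates if 200 <= d <= 400]
                 or candidates)

    # Complex operator: one expression states the same policy.
    best = min(candidates, key=lambda d: abs(d - 300))

The Boolean cascade grows a clause for every tolerance band the
policy admits; the complex operator states the intent once.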

Al

