Keywords
Search Engine, Web Intelligence, Multi-Agent System, Artificial Intelligence, Knowledge-Based System.
Categories and Subject Descriptors
D.3.3 [Programming Languages/IDE]: Java and NetBeans – Agent Classes, Databases, UI Classes.
ABSTRACT
Searching for relevant information is becoming more difficult for the end user day by day, as the amount of information available on the Internet and in computer systems keeps increasing. Advanced searching techniques are therefore evolving, bringing different ideas to solve these problems.

With the emergence of new trends in the field of search engines and knowledge-base systems, many efforts have been made and technologies have evolved. In this research paper we discuss the architecture of a knowledge-base system which allows information to be extracted from data collected from different sources. A combination of different techniques forms the structure, including the maintenance of profiles, activity history and search hints. We also cover some intelligent techniques to perform these tasks.

In this paper we consider a few items specific to the implementation of the structure. The approaches that can be implemented are discussed in our Experiment (Section 5). The proposed approach and technique can suit many situations, but here we target people whose work depends on up-to-date business knowledge and information, anytime and anywhere, such as people from Marketing, Sales and Higher Management.
GENERAL TERMS
Artificial Intelligence, Knowledge Base, Multi-Agent System (MAS), Web Intelligence, Searching Techniques, Search Engines.
1. INTRODUCTION
Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and philosophy. AI is a broad topic consisting of different fields, from machine vision to expert systems. The element the fields of AI have in common is the creation of machines that can "think".

In order to classify machines as "thinking", it is necessary to define intelligence. To what degree does intelligence consist of, for example, solving complex problems, or making generalizations and relationships? And what about perception and comprehension? Research into the areas of learning, of language, and of sensory perception has aided scientists in building intelligent machines. One of the most challenging tasks facing experts is building systems that mimic the behavior of the human brain, made up of billions of neurons and arguably the most complex matter in the universe. Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's test. He stated that a computer would deserve to be called intelligent if it could deceive a human into believing that it was human.
1.1 Web Intelligence and Agent Systems (WIAS)
Web Intelligence and Agent Systems: An International Journal is the official journal of the Web Intelligence Consortium (WIC), an international organization dedicated to promoting collaborative scientific research and industrial development in the era of Web and agent intelligence. WIAS seeks to collaborate with major societies and international conferences in these fields. Presently, it has established ties with the International Conference on Web Intelligence and the International Conference on Intelligent Agent Technology. WIAS is a peer-reviewed journal which publishes 4 issues a year, in both electronic and hard copies. WIAS aims to achieve a disciplinary balance between Web technology and intelligent agent technology.
1.1.1 Web Intelligence
Web intelligence is a combination of web analytics, which examines how website visitors view and interact with a site's pages and features, and business intelligence, which allows a corporation's management to use data on customer purchasing patterns, demographics, and demand trends to make effective strategic decisions. As companies expand their reach into the global marketplace, the need to analyze how customers use company websites to learn about products and make buying decisions is becoming increasingly critical to survival and ultimate success.

With Web Intelligence, we can make better decisions in less time by turning information into actionable insight at the speed of thought. Web Intelligence is built on a proven, mature BI platform, Business Objects. This ensures that the deployment meets performance demands and supports standardization efforts.
From improving corporate decision-making to sharing information with customers, suppliers, and partners, Web Intelligence delivers self-service insight to everyone who needs it. Business Objects Web Intelligence empowers users with self-service information access and interactivity, while delivering:
· Powerful, online and offline ad hoc query and reporting
· Integrated and trusted analysis for all users
· A tool built upon the most complete, trusted, and agile BI platform
1.1.2 Agent Systems
Agents are entities designed to run routine (user-driven) tasks and to achieve a proposed setting (or goal) within the context of a specific environment. The difference between an agent and a traditional software entity is that the latter just follows its designed functions, procedures or macros to run deterministic code, while the former incorporates the ability to practice intelligence by making (autonomous or semi-autonomous) decisions based on dynamic runtime situations.

Systems using software agents (or Multi-Agent Systems, MAS) are becoming more popular within the development mainstream because, as the name suggests, an agent aims to handle tasks autonomously and with intelligence.
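Since the Categories section above names Java and NetBeans, the agent classes in this paper can be pictured in Java. Below is a minimal sketch of an agent, assuming the JADE framework (a common Java implementation of FIPA agents; the class name is illustrative, not part of our implementation):

import jade.core.Agent;

public class SketchAgent extends Agent {
    @Override
    protected void setup() {
        // initialization: register behaviours, read start-up arguments, etc.
        System.out.println(getLocalName() + " is ready.");
    }

    @Override
    protected void takeDown() {
        System.out.println(getLocalName() + " is shutting down.");
    }
}

Unlike a traditional procedure call, such an agent decides at runtime, inside its behaviours, how to react to the messages and situations it encounters.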
1.1.3 Communication Languages for Agents
Agent Communication Language (ACL), proposed by the Foundation for Intelligent Physical Agents (FIPA), is a proposed standard language for agent communication. Knowledge Query and Manipulation Language (KQML) is another proposed standard.

The most popular ACLs are:
· FIPA-ACL (by the Foundation for Intelligent Physical Agents, a standardization consortium)
· KQML (Knowledge Query and Manipulation Language)

Both rely on speech act theory, developed by Searle in the 1960s and enhanced by Winograd and Flores in the 1970s, and define a set of performatives and their meaning (e.g. ask-one).

To make agents understand each other, they not only have to speak the same language but also have to share a common ontology. An ontology is a part of the agent's knowledge base that describes what kind of things an agent can deal with and how they are related to each other. [11]
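As an illustration, the following sketch composes and sends a FIPA-ACL message using JADE, which implements the FIPA standards in Java. The receiver name, ontology string and query content are illustrative assumptions, not fixed parts of our design:

import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

public class QuerySender extends Agent {
    @Override
    protected void setup() {
        ACLMessage msg = new ACLMessage(ACLMessage.QUERY_REF); // performative
        msg.addReceiver(new AID("kb-agent", AID.ISLOCALNAME)); // receiver agent
        msg.setLanguage("fipa-sl");                            // content language
        msg.setOntology("business-knowledge");                 // shared ontology
        msg.setContent("(search :keyword \"marketing\")");     // illustrative content
        send(msg);
    }
}

Note that the ontology parameter only names the shared vocabulary; both agents must already agree on what its terms mean.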
1.1.4 Knowledge Query and Manipulation Language (KQML)
The Knowledge Query and Manipulation Language, or KQML, is a language and protocol for communication among software agents and knowledge-based systems. It was developed in the early 1990s as part of the DARPA Knowledge Sharing Effort, which was aimed at developing techniques for building large-scale knowledge bases which are shareable and reusable. While originally conceived as an interface to knowledge-based systems, it was soon repurposed as an agent communication language.

The KQML message format and protocol can be used to interact with an intelligent system, either by an application program or by another intelligent system. Experimental prototype systems support concurrent engineering, intelligent design, intelligent planning, and scheduling.

KQML has been superseded by FIPA-ACL. [11] Since KQML is outdated, our approach uses FIPA-ACL as the communication language between the agents.
1.1.5 Autonomy
Autonomy provides a whole suite of different intelligent agents to suit a variety of searching needs. Autonomy is not a web-based service: it is a program which needs to be downloaded and installed on your own PC. It then works with the web browser to provide searching facilities.

Autonomy agents are trained by typing a few words of interest into a search box; the agents are then let loose on the web and go off to look for relevant documents. These documents are graded according to their perceived relevance to the topics you have chosen.

In our approach, the agent instead goes to a knowledge-base database, where all the information is already present. Once the agent has finished searching, it displays a list of the results found; the user can then review these results and accept those that appear relevant to his or her information needs.

Autonomy creates a library of the sites which have been accepted and uses this information to refine its searching the next time it is asked to perform a search on that particular topic. Following the same approach, our agent maintains the history of users to keep track of search results so that they can be used in the future.
1.1.6 Knowledge-Base Systems
"A knowledge-based system is a program for extending and/or querying a knowledge base." [14]

"A knowledge-based system is a computer system that is programmed to imitate human problem-solving by means of artificial intelligence and reference to a database of knowledge on a particular subject." [15]

Knowledge-based systems are a kind of intelligent system based on the methods and techniques of Artificial Intelligence. Their core components are the knowledge base and the inference mechanisms. [11]

A knowledge base used within a company could support decision-making and increase the intelligence of the business...
1.1.7 Semantic Web
The Semantic Web is an extension of the current Web that will allow you to find, share, and combine information more easily. It relies on machine-readable information and metadata expressed in RDF.

Organization schemes like ontologies are conceptual; they reflect the ways we think. To convert these conceptual schemes into a format that a software application can process, we need more concrete representations... [12]
1.1.8 Resource Description Framework (RDF)
RDF is a W3C-standard XML framework for describing and interchanging metadata. The simple format of resources, properties, and statements allows RDF to describe robust metadata, such as ontological structures. As opposed to Topic Maps, RDF is more decentralized because the XML is usually stored along with the resources. [13]
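As a small illustration, the following sketch builds and serializes a few RDF statements. It assumes the Apache Jena library, one common Java RDF API (the resource and property URIs are hypothetical):

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Resource;

public class RdfSketch {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        String ns = "http://example.org/kb#";               // hypothetical namespace
        Resource topic = model.createResource(ns + "Topic42");
        topic.addProperty(model.createProperty(ns, "keyword"), "marketing");
        topic.addProperty(model.createProperty(ns, "profile"), "sales");
        model.write(System.out, "RDF/XML");                 // serialize the metadata
    }
}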
2. RELATED WORK
A web site is trustworthy if it provides many pieces of true information, and a piece of information is likely to be true if it is provided by many trustworthy web sites.

Every day, people retrieve all kinds of information from the web. For example, when shopping online, people find product specifications on web sites like Amazon.com or ShopZilla.com. When looking for interesting DVDs, they get information and read movie reviews on web sites such as NetFlix.com or IMDB.com. There is no guarantee of the correctness of information on the web. Moreover, different web sites often provide conflicting information on a subject, such as different specifications for the same product.

The authors of [1] propose a new problem called Veracity, i.e., conformity to truth: finding true facts from a large amount of conflicting information on many subjects provided by various web sites. They design a general framework for the Veracity problem and invent an algorithm called Truth Finder, which utilizes the relationships between web sites and their information. Their experiments show that Truth Finder successfully finds true facts among conflicting information, and identifies trustworthy web sites better than the popular search engines. [1]
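The mutual reinforcement between site trustworthiness and fact confidence can be pictured as an iterative computation. The sketch below illustrates the principle only, not the actual Truth Finder update rules of [1]; the sites, facts and their provides-relation are invented data:

import java.util.Arrays;

public class MutualTrustSketch {
    public static void main(String[] args) {
        int[][] facts = { {0, 1}, {1, 2}, {2} };   // facts[f] = sites providing fact f
        double[] trust = {0.5, 0.5, 0.5};          // one trust score per site
        double[] conf = new double[facts.length];  // one confidence score per fact

        for (int iter = 0; iter < 20; iter++) {
            // a fact is likely true if trustworthy sites provide it
            for (int f = 0; f < facts.length; f++) {
                double sum = 0;
                for (int s : facts[f]) sum += trust[s];
                conf[f] = sum / facts[f].length;
            }
            // a site is trustworthy if it provides many likely-true facts
            double[] next = new double[trust.length];
            int[] count = new int[trust.length];
            for (int f = 0; f < facts.length; f++)
                for (int s : facts[f]) { next[s] += conf[f]; count[s]++; }
            for (int s = 0; s < trust.length; s++)
                if (count[s] > 0) trust[s] = next[s] / count[s];
        }
        System.out.println("trust = " + Arrays.toString(trust));
        System.out.println("confidence = " + Arrays.toString(conf));
    }
}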
The performance of a World Wide Web (WWW) server has become a central issue in providing a ubiquitous, reliable, and efficient information network for real-time ubiquitous-unified Web information services. For the wired Internet using HTML with agents in PCs, and for the mobile Internet using WML or mHTML with mobile agents in mobile devices, the management of Web server agents and mobile agents becomes more difficult for real-time Web services.

When browsing information on large Web sites, users often receive too much irrelevant information. The amount of knowledge and information on the Web has been growing tremendously, pushing, in a sense, an already flooded society of knowledge and information; however, searching in real time for the right information in Web portals has become more difficult, due not only to the amount of answers but also to the inconsistency of the answers provided by various multi-agent portals.

The work in [2] studies the performance of Web information access for real-time (precisely, soft real-time rather than hard real-time) ubiquitous-unified Web information services, using a unified Web information portal with a cost-effective Web server and intelligent mobile agents. Web 2.0+ and its applications have been revolutionarily changing and affecting the world in various ways, especially toward the Knowledge and Information Society. The Web server plays a central role for unified information services, and intelligent mobile agents for Web information access have become very important for user groups in ubiquitous computing environments.

Via both the wired and the mobile Internet, real-time ubiquitous-unified Web information services for information access should be considered for their convenience, as well as for the integrity of consistent information under a real-time requirement in this Information Society; several aspects of the Web server for a user group using real-time Web services are considered. [2]
Different techniques have been exploited to mine web search query logs for query recommendation, query expansion and query completion. The authors of [3] stress the importance of real-time and interactive phrase suggestions while the query is being formulated, and propose an interactive query completion algorithm that suggests frequent completions of the last incomplete word in the query.

Their observation from the AOL query log shows that the average number of items in a query is 2.14, while it is smaller in the University of New Brunswick (UNB) search engine query log, at 1.94. This is often an insufficient number of items for finding the most relevant web pages.

Your Eye, a real-time phrase recommender, is introduced; it suggests related frequent phrases for the incomplete user query. The frequent phrases are extracted from previous queries based on a new frequency-rate metric suitable for query stream mining. An advantage of Your Eye compared to Google Suggest, a service powered by Google for phrase suggestion, is described. The experimental results also confirm the significant benefit of monitoring phrases instead of queries: the number of monitored elements is significantly reduced, which results in smaller memory consumption as well as better performance. [3]
Web usage mining can play an important role in supporting navigation on the future Web. In fact, the detection of common or professional profiles allows browsers and web sites to personalize the user session and to recommend specific resources to interested people.

The semantic web approach seems interesting for this task. The paper [4] presents a generic approach for profile detection relying on semantic web technologies. It takes advantage of ontologies, semantic annotations on web resources and inference engines. Keywords: profile learning, ontologies, annotations, semantic web browsing.

This work is carried out in the framework of the European project Sealife. The objective of Sealife is the design and development of a semantic Grid browser for the Life Sciences, which will link the existing Web to the currently emerging eScience infrastructure.

One of the use cases in the Sealife project consists of linking information on biomedical websites to appropriate secondary knowledge (existing ontologies/terminologies, RSS feeds…). This case study will demonstrate how to provide the user with additional information on the resources he/she is viewing on biomedical websites, using a semantic mapping to appropriate online portals and databases (called targets). For this purpose, the Sealife browser must recognize the user profile in order to select the appropriate ontology and targets. The scenario of this use case will be tested on the NELI (National Electronic Library of Infection) web site, which is a digital library dedicated to the investigation, treatment, prevention and control of infectious diseases. [4]
The document [5] deals with mining web information and making interconnections between words to find information more effectively.

In analyzing text, there are many situations in which we wish to determine how similar two short text snippets are. For example, there may be different ways to describe some concept or individual, such as "United Nations Secretary-General" and "Kofi Annan", and we would like to determine that there is a high degree of semantic similarity between these two text snippets.

To address this problem, we would like to have a method for measuring the similarity between such short text snippets that captures more of the semantic context of the snippets rather than simply measuring their term-wise similarity. To help achieve this goal, we can leverage the large volume of documents on the web to determine greater context for a short text snippet. By examining documents that contain the text snippet terms, we can discover other contextual terms that help to provide a greater context for the original snippet and potentially resolve ambiguity in the use of terms with multiple meanings.
We now formalize the kernel function for semantic similarity. Let x represent a short text snippet. We compute the query expansion of x, denoted QE(x), as follows:
1. Issue x as a query to a search engine S.
2. Let R(x) be the set of (at most) n retrieved documents d1, d2, …, dn.
3. Compute the TF-IDF term vector vi for each document di ∈ R(x).
4. Truncate each vector vi to its m highest-weighted terms.
The truncated vectors are then combined into a single query expansion vector, which is normalized to unit norm (as assumed by the kernel property below).
Given a means for computing the query expansion of a short text, it is a simple matter to define the semantic kernel function K as the inner product of the query expansions of two text snippets. More formally, given two short text snippets x and y, we define the semantic similarity kernel between them as:

K(x, y) = QE(x) · QE(y)

We note that K(x, y) is a valid kernel function, since it is defined as an inner product with a bounded norm (given that each query expansion vector has norm 1.0), thus making this similarity function applicable in any kernel-based machine learning algorithm (Cristianini & Shawe-Taylor 2000) where (short) text data is being processed.
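Once the two query-expansion vectors are available, the kernel reduces to an inner product over their shared terms. The following sketch assumes the (already unit-normalized) vectors are given as term-to-weight maps; the retrieval step that builds QE(x) from a search engine is omitted:

import java.util.HashMap;
import java.util.Map;

public class SemanticKernel {
    // K(x, y) = QE(x) · QE(y): only terms present in both vectors contribute
    static double kernel(Map<String, Double> qeX, Map<String, Double> qeY) {
        double dot = 0.0;
        for (Map.Entry<String, Double> e : qeX.entrySet()) {
            Double w = qeY.get(e.getKey());
            if (w != null) dot += e.getValue() * w;
        }
        return dot;
    }

    public static void main(String[] args) {
        Map<String, Double> x = new HashMap<>();
        x.put("united", 0.6);  x.put("nations", 0.8);
        Map<String, Double> y = new HashMap<>();
        y.put("nations", 0.5); y.put("annan", 0.866);
        System.out.println(kernel(x, y)); // 0.8 * 0.5 = 0.4
    }
}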
Learning Similarity Functions for Record Linkage
Turning our attention to another important problem in measuring similarities, we consider the record linkage task. Record linkage is the problem of identifying when two (or more) references to an object are describing the same true entity.

Comparing Similarity Functions for Making Recommendations in On-line Communities
In addition to web search and comparison shopping, the use of similarity measures in online social networks has also been examined. Social networking sites such as Orkut (www.orkut.com), Friendster (www.friendster.com), and others have quickly gained popularity as a means for letting users with common interests find and communicate with each other. [5]
In [6], an intelligent web information system is suggested for minimizing the information gap in government agencies and public institutions, delivering personalized web contents which disadvantaged people can understand and from which they can profit more in their economic behavior.

For developing the system, disadvantaged people having large total losses and a high probability of loss per transaction are identified by analyzing transaction data of all markets. The difference in information gap between disadvantaged people and the other, advantaged people is then identified, and the contents of web pages are redesigned for disadvantaged people to close the information gap and make it easy to understand.

The authors thus suggest an intelligent web information system in government to help disadvantaged users profit more in their economic behavior. They define the important issues for developing such a system effectively: design of web contents, personalization, and responding to changes in the market environment. [6]
The explicit customization of software applications is considered a cumbersome task that most non-computer-skilled end-users cannot afford. Thus, the few existing approaches in this respect have mainly focused on domain-dependent support. Further, the traditional desktop customization process cannot be applied straightforwardly to Web environments.

The complexity of programming and specification languages discourages users even from attempting software customization. Although most applications do not provide much support for customization, some of them allow users to adapt partial aspects of the application to their own needs by selecting predefined options. Desktop applications are usually complex and implemented in structured programming languages. This has traditionally made it difficult to provide easy-to-customize end-user approaches for them.

To face this challenge, the authors of [7] leverage Model-Based User Interface Design (MBUID) approaches (Paternò, 2001) combined with customization techniques (Macías and Castells, 2004). The overall goal is natural development (Berti et al., 2006), which implies that people should be able to create or modify applications by working through familiar and immediately understandable representations to express relevant concepts. In this respect, their main contribution exploits Model-Based User Interface Design (Szekely, 1996) and End-User Development research, combining them by means of an intelligent environment that can infer meaningful information from the user's modifications.

The approach is based on an expert system where the knowledge is built up progressively, increasing in every user session (i.e. an evolutionary approach). [7]
AI planning is the mainstream method for automatic semantic web service composition (SWSC) research. However, planning-based SWSC methods can only return a service composition for a given user requirement description and lack the flexibility to deal with environmental change. Deliberative agent architectures, such as the BDI agent, promise to make SWSC more intelligent.

Semantic web service composition (SWSC) is currently one of the most hyped and addressed issues in Service-Oriented Computing. Nowadays, most research conducted falls in the realm of workflow composition or AI planning to build composite web services.

The proposal in [8] is an automatic SWSC-enabling method based on the AgentSpeak language. The SWSC method alone can only return a service composition for a user requirement description and lacks flexibility to deal with environmental change. To enable an agent written in the AgentSpeak language to perform SWSC according to a composite service description, OWL-S services are converted to agent plans.

The core SWSC process works through the agent's intention formation mechanism. To the agent's world, the service set and the target service description correspond to the plan set and the goal event. To the services' world, the agent's intention corresponds to the service execution sequence. The mapping between the two worlds is the OWLS2APS algorithm. [8]
Brand images and reputation are paramount to corporations, especially consumer-facing companies. It is extremely easy for a brand to become tarnished or negatively associated with a social, environmental, or industry issue. This is especially true with the emergence of new forms of media, such as blogs, weblogs, message boards, and web sites.

The new media allow consumers to spread information freely and at the speed of thought. By the time publicity has reached the press, it can be too late to protect the brand; only damage control is possible. The recent pet food recall and the firing of Imus both started with blog and message board postings.

COBRA [9] embeds a suite of analytics capabilities to allow effective brand and reputation monitoring and alerting, specifically designed for blog and web data mining. Both web and blog data contain many duplicates.

COBRA also includes techniques for fast and continuous ETL processing of large amounts of semi-structured and unstructured data. This is important since blog and web content tend to be particularly dirty, noisy, and fragmented. Without special ETL processing, analytics may be meaningless: web pages may contain banners and advertisements that need to be stripped out, and blogs may contain fragmented sentences and misspellings. [9]
Email has been an efficient and popular communication mechanism as the number of Internet users increases. Consequently, email management has become an important and growing problem for individuals and organizations, because email is prone to misuse.

The blind posting of unsolicited email messages, known as spam, is an example of this misuse. Spam is commonly defined as the sending of unsolicited bulk email, that is, email sent to multiple recipients who did not ask for it.

Currently, much work on spam email filtering has been done using techniques such as decision trees, Naïve Bayesian classifiers, neural networks, etc. To address the problem of growing volumes of unsolicited email, many different methods for email filtering are being deployed in commercial products. The authors of [10] constructed a framework for efficient email filtering using an ontology. Ontologies allow for machine-understandable semantics of data, so they can be used in any system, and sharing this information among systems makes spam filtering more effective.

It is therefore worthwhile to build an ontology and a framework for efficient email filtering. Using an ontology that is specially designed to filter spam, large amounts of unsolicited bulk email can be filtered out by the system. [10]
3. METHODOLOGY
3.1 Our Approach
Our system is designed in such a way that users of different platforms (desktop PCs, PDAs, smart phones, notebooks and Tablet PCs) can use this application. In our experiment, however, we use a desktop computer as our interface.

Figure – 3.1 – Architecture

The server is responsible for running the database and the main agents, including the Server, Profile, Knowledge Base and History agents, whose functionality is discussed in detail in Section 4.
3.2 User Interface & Intelligent Processing
The description of our approach's interfaces and processes is as follows:
· The end user accesses the application from his desktop computer by starting the Client Agent, which connects to the main Server Agent.
· Once the user starts the application, it asks for a User ID and Password to log in to the server.
· The Client Agent sends the given ID and Password to the main Server Agent, which then passes them to the Profile Agent for checking the login credentials.
· Once the user is authenticated, the Profile Agent fetches the user's profile from the database and hands it to the Server Agent, which sends it back to the Client Agent. If authentication fails, an error message is sent by the same route.
· The Client Agent's next screen is a Search screen, which allows the user to search for information in the Knowledge Base (KB) database.
· On the same search screen there is another list showing the user's last 10 searches, which is retrieved from the database by the User History Agent as the user logs in.
· As the user starts typing on the search line and the input reaches 3 characters, the system contacts the KB Agent to retrieve suggestions, which are shown on screen so the user can select the relevant word and avoid searching again and again.
· The KB Agent queries the KB database to get the desired results related to the defined user profile, which also narrows the search options.
· At the end of the process the user is provided with the narrowed search results from the knowledge database.
· On selecting a specific result, the detailed information is shown to the user below on the same screen.
· In our experiment we only consider textual information passed in the form of messages, but this can be extended to allow viewing of documents.
· Furthermore, we have not developed a user Knowledge Database update module, but one can be developed to allow updating the knowledge in the database.
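The login step of this flow can be sketched as a JADE agent that forwards the credentials to the Server Agent and waits for the Profile Agent's verdict, relayed back along the same route. The performative choices, agent name and message format are illustrative assumptions, not the exact protocol of our implementation:

import jade.core.AID;
import jade.core.Agent;
import jade.lang.acl.ACLMessage;

public class LoginClientAgent extends Agent {
    @Override
    protected void setup() {
        ACLMessage login = new ACLMessage(ACLMessage.REQUEST);
        login.addReceiver(new AID("server", AID.ISLOCALNAME));
        login.setContent("login user42 secret");   // ID and password (illustrative)
        send(login);

        ACLMessage reply = blockingReceive();      // wait for the routed answer
        if (reply.getPerformative() == ACLMessage.CONFIRM) {
            // authenticated: proceed to the search screen with the profile data
        } else {
            // authentication failed: show the error returned by the same route
        }
    }
}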
Figure – 3.2 – Knowledge Extraction Process
4. SYSTEM ARCHITECTURE
Many different techniques are used to implement the concept of providing the user with relevant information in time, in the field, through different mediums and with less effort. To enable this we have used an Artificial Intelligence (AI) agent-based technique to come up with the solution. The system architecture is as follows:
Figure – 4.1 – Component Diagram
4.1 Client Agent
The client agent is the interface between the user and the KB database, through which the user queries the database. The agent presents a login screen; once the user is authenticated, the search screen provides a search textbox along with the history of the user's last 10 searches and a quick-suggestion list, followed by a result screen showing the information for the selected search.
4.2 Server Agent
The server agent is the main authority between the client agents and the agents handling requests according to their jobs. All requests, whether related to authentication, search or history, are first sent to the main server agent, which then passes them to the respective agent; replies travel the same route in reverse, from the working agent to the client from which the request was received.
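A sketch of this routing role is shown below: a cyclic behaviour receives each request and forwards it to the responsible agent, recording the original sender so the reply can travel back the same way. The agent names and the content-based dispatch are illustrative assumptions:

import jade.core.AID;
import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;

public class ServerAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }
                String content = msg.getContent();
                String target = content.startsWith("login")  ? "profile"
                              : content.startsWith("search") ? "kb"
                              : "history";                     // dispatch by request type
                ACLMessage fwd = new ACLMessage(msg.getPerformative());
                fwd.addReceiver(new AID(target, AID.ISLOCALNAME));
                fwd.setContent(content);
                fwd.addReplyTo(msg.getSender());  // so the answer can be routed back
                myAgent.send(fwd);
            }
        });
    }
}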
4.3 Profile Agent
The profile agent is responsible for maintaining the user profile and information related to the user. Another task performed by this agent is the authentication of users logging in to the system. It receives the login credentials from the server agent and checks them against the details defined in the database; on validation it sends a confirmation to move the user to the search screen, and in case of failure it returns an error on the screen.
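The credential check can be sketched as follows. The table and column names are hypothetical, the JDBC URL is a placeholder, and the plain-text password comparison is for illustration only (a real system should store salted hashes):

import jade.core.Agent;
import jade.core.behaviours.CyclicBehaviour;
import jade.lang.acl.ACLMessage;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class ProfileAgent extends Agent {
    @Override
    protected void setup() {
        addBehaviour(new CyclicBehaviour(this) {
            @Override
            public void action() {
                ACLMessage msg = myAgent.receive();
                if (msg == null) { block(); return; }
                ACLMessage reply = msg.createReply();
                try (Connection c = DriverManager.getConnection("jdbc:...");
                     PreparedStatement ps = c.prepareStatement(
                         "SELECT 1 FROM user_profile WHERE user_id = ? AND password = ?")) {
                    String[] parts = msg.getContent().split(" "); // "login <id> <password>"
                    ps.setString(1, parts[1]);
                    ps.setString(2, parts[2]);
                    reply.setPerformative(ps.executeQuery().next()
                        ? ACLMessage.CONFIRM : ACLMessage.FAILURE);
                } catch (Exception e) {
                    reply.setPerformative(ACLMessage.FAILURE); // malformed request or DB error
                }
                myAgent.send(reply);
            }
        });
    }
}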
4.4 KB Agent
The Knowledge Base agent is responsible for interacting with the database, performing the users' search requests and returning the relevant results back to the server agent, which forwards them to the client agent.

The KB agent also works as a quick-search keyword identifier, returning the possible desired searches that might be required by the user.
4.5 User History Agent
The history agent is responsible for maintaining the history of user activities. Once a successful result is returned to the user and viewed, that specific result is stored; when the user logs in again, the history agent shows the list of the last successful searches, including that item and all previous items. This allows quick access to successfully accessed knowledge.
4.6 Database
The database is used to store the entire knowledge-base information which users require for the extraction of data. The data is updated by a user who uses an interface to update the information in the database. The database also contains the user profiles and history details.

The database contains the textual data along with the keywords defined for that specific data, which allows segregating the data by different profiles or types of work.
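The tables implied by this section might look as follows. This is a hypothetical schema sketch (all table and column names are assumptions), expressed as JDBC DDL so the query sketches in Section 5 can refer to the same names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class SchemaSketch {
    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection("jdbc:..."); // placeholder URL
             Statement st = c.createStatement()) {
            st.executeUpdate("CREATE TABLE user_profile ("
                + " user_id VARCHAR(32) PRIMARY KEY,"
                + " password VARCHAR(64), profile VARCHAR(32))");
            st.executeUpdate("CREATE TABLE knowledge ("
                + " topic_id INT PRIMARY KEY, title VARCHAR(128), body TEXT,"
                + " keywords VARCHAR(256), profile VARCHAR(32))"); // keywords scope the data
            st.executeUpdate("CREATE TABLE history ("
                + " user_id VARCHAR(32), topic_id INT, viewed_at TIMESTAMP)");
        }
    }
}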
4.7 Knowledge Updater (User)
In our approach there is a user who updates the knowledge in the database with all the relevant details required for it, including keywords, scope of knowledge, etc. Furthermore, this user is responsible for creating and updating the end-user profiles that will access the knowledge from the database.
5. EXPERIMENT
To evaluate the approach proposed in this paper, a multi-agent-based system has been developed following the same components and structure described above. In this experiment we try to test the main concept of the structure.
5.1 Populating the History List
On logging in, the list is populated with the last successful search results viewed by the user, and the user can directly access a specific topic from the list. The procedure for populating the list is based on the user profile: on successful login the User ID is passed to the history agent, which returns a list of the previously stored successful searches up to N records, where N is defined as 10, the number of successful results to be returned to the user's screen.
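A sketch of this retrieval, using the hypothetical schema from Section 4.6 (the LIMIT clause assumes a MySQL-style SQL dialect):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class HistoryQuery {
    // returns the titles of the user's last N = 10 successfully viewed topics
    static List<String> lastSearches(Connection c, String userId) throws Exception {
        String sql = "SELECT k.title FROM history h JOIN knowledge k"
                   + " ON h.topic_id = k.topic_id WHERE h.user_id = ?"
                   + " ORDER BY h.viewed_at DESC LIMIT 10";
        List<String> titles = new ArrayList<>();
        try (PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, userId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) titles.add(rs.getString(1));
            }
        }
        return titles;
    }
}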
5.2 Extracting the Suggestion Phrase
The input from the search text-box on the search screen is examined on every character typed. If we assume C1C2C3 are the typed characters, then as the input reaches the third character a request is sent to the KB Agent, which checks the words in the database and returns a list of suggestions. One more thing restricts the search to a domain, making it more efficient and fast: the user profile within which the searching is performed.
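The three-character trigger and the profile restriction can be sketched as follows (hypothetical schema from Section 4.6; LIMIT again assumes a MySQL-style dialect):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.ArrayList;
import java.util.List;

public class SuggestionQuery {
    static List<String> suggest(Connection c, String typed, String profile)
            throws Exception {
        List<String> out = new ArrayList<>();
        if (typed.length() < 3) return out;        // wait until C1C2C3 are typed
        String sql = "SELECT DISTINCT keywords FROM knowledge"
                   + " WHERE keywords LIKE ? AND profile = ? LIMIT 10";
        try (PreparedStatement ps = c.prepareStatement(sql)) {
            ps.setString(1, typed + "%");          // prefix match on the typed characters
            ps.setString(2, profile);              // profile restriction narrows the domain
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) out.add(rs.getString(1));
            }
        }
        return out;
    }
}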
Figure – 5.1 – Search Screen (Interface Layout)
5.3 Reviewing the Result List
On a search request the system returns a list of matching results, with a title and description for each topic, and the user selects the required topics for detailed information.
Figure – 5.2 – Search Result Screen (Interface Layout)
5.4 Viewing the Desired Search
The user selects a topic and that specific topic's information is displayed. On display, the user can mark the result to be kept in the history list so that it can be accessed directly the next time.
Figure – 5.3 – Result Screen (Interface Layout)
6. CONCLUSION AND FUTURE WORK
In this research paper we have tried to sketch a model for a multi-agent system that implements the complete concept. All message communication is performed through a server agent, which acts as a bridge between both ends.

The application, or blueprint, which we have developed shows the possibility of implementing such a structure. Many things can still be done to improve it and add new features, which could allow this application to be accessed from different platforms and improve the client interface.

This overall concept combines many techniques into one model. The model does not yet use any particular approach to optimize the whole procedure, though we have optimized the searching techniques and the flow of information. The basic idea presented in this paper is that, with the combination of approaches we have followed, we can come up with a smart solution for any system providing knowledge to users, whether a search engine or a system specific to any field.

We will continue our research in this direction to find the loopholes which are currently present and to extend the module to expand the scope of this approach.
7. ACKNOWLEDGMENTS
We would like to thank our teacher, Mr. Muhammad Khalid Khan, for assisting and guiding us throughout our research, and our classmates for supporting us.
8. REFERENCES
[1] Xiaoxin Yin, Jiawei Han (UIUC), and Philip S. Yu (IBM T. J. Watson Research Center). Truth Discovery with Multiple Conflicting Information Providers on the Web. KDD'07, August 12–15, 2007, San Jose, California, USA. Copyright 2007 ACM 978-1-59593-609-7/07/0008.
[2] Yung Bok Kim, Yong-Guk Kim (Sejong University, Seoul, Korea), and Jae-Jo Lee (Korea Electrotechnology Research Institute, KERI, Uiwang-City, Gyeonggi-Do, Korea). Web Access Performance with Intelligent Mobile Agents for Real-Time Ubiquitous-Unified Web Information Services. In D.-S. Huang, L. Heutte, and M. Loog (Eds.): ICIC 2007, LNCS 4681, pp. 208–217. Springer-Verlag Berlin Heidelberg, 2007.
[3] M. Barouni-Ebrahimi and Ali A. Ghorbani (Faculty of Computer Science, University of New Brunswick, Fredericton, Canada). On Query Completion in Web Search Engines Based on Query Stream Mining. IEEE, 2007. DOI 10.1109/WI.2007.78.
[4] Yassine Mrabet, Khaled Khelif, and Rose Dieng-Kuntz (INRIA Sophia Antipolis, 2004 route des lucioles, 06902 Nice, France). Recognizing Professional-Activity Groups and Web Usage Mining for Web Browsing Personalization. IEEE, 2007. DOI 10.1109/WI.2007.46.
[5] Mehran Sahami (Google Inc.). Mining the Web to Determine Similarity between Words, Objects, and Communities.
[6] Tae Hyun Kim, Gye Hang Hong, and Sang Chan Park (Department of Industrial Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; Dongbu CNI, Seoul, Republic of Korea). Developing an Intelligent Web Information System for Minimizing Information Gap in Government Agencies and Public Institutions.
[7] José A. Macías and Fabio Paternò (ISTI-CNR, Via G. Moruzzi 1, 56124 Pisa, Italy). Customization of Web Applications through an Intelligent Environment Exploiting Logical Interface Descriptions. Received 4 September 2006; revised 28 June 2007; accepted 8 July 2007.
[8] Li Huan, Qin Zheng, Yu Fan, Qin Jun, and Yang Bo (Xi'an Jiaotong University; Dongguan University of Technology; Key Lab for ISS of MOE, Software School, Tsinghua University; Manchester Business School). Automatic Semantic Web Service Composition via Agent Intention Execution in AgentSpeak.
[9] Scott Spangler, Ying Chen, Larry Proctor, Ana Lelescu, Amit Behal, Bin He, Thomas D. Griffin, Anna Liu, Brad Wade, and Trevor Davis (IBM). COBRA – Mining Web for COrporate Brand and Reputation Analysis.
[10] Seongwook Youn and Dennis McLeod (Department of Computer Science, University of Southern California, Los Angeles, CA, USA). Spam Email Classification Using an Adaptive Ontology.
[11] Wikipedia, the free encyclopedia: ACL, KQML, Knowledge Base System.
[12] Semantic Web.
[13] Resource Description Framework (RDF). http://www.xml.com/pub/a/2001/01/24/rdf.html
[14] Free On-line Dictionary of Computing (FOLDOC).
[15] The Computer User High-Tech Dictionary.