Ranking Model Adaptation for Domain-Specific Search

ABSTRACT

With the explosive emergence of vertical search domains, applying the broad-based ranking model directly to different domains is no longer desirable due to domain differences, while building a unique ranking model for each domain is both laborious for labeling data and time-consuming for training models. In this paper, we address these difficulties by proposing a regularization-based algorithm called ranking adaptation SVM (RA-SVM), through which we can adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost are reduced while the performance is still guaranteed. Our algorithm requires only the predictions from the existing ranking models, rather than their internal representations or the data from auxiliary domains. In addition, we assume that documents similar in the domain-specific feature space should have consistent rankings, and add some constraints to control the margin and slack variables of RA-SVM adaptively. Finally, a ranking adaptability measurement is proposed to quantitatively estimate whether an existing ranking model can be adapted to a new domain. Experiments performed over the LETOR benchmark and two large-scale datasets crawled from a commercial search engine demonstrate the applicability of the proposed ranking adaptation algorithm and the ranking adaptability measurement.



EXISTING SYSTEM

The existing broad-based ranking model already captures a lot of common information for ranking documents, so only a few training samples need to be labeled in the new domain. From a probabilistic perspective, the broad-based ranking model provides prior knowledge, so that only a small number of labeled samples is sufficient for the target-domain ranking model to achieve the same confidence. Hence, to reduce the cost for new verticals, how to adapt the auxiliary ranking models to the new target domain and make full use of its domain-specific features becomes a pivotal problem for building effective domain-specific ranking models.

PROPOSED SYSTEM

The proposed system focuses on three questions: whether we can adapt ranking models learned for the existing broad-based search, or for some verticals, to a new domain, so that the amount of labeled data in the target domain is reduced while the performance requirement is still guaranteed; how to adapt the ranking model effectively and efficiently; and how to utilize domain-specific features to further boost the model adaptation. The first problem is solved by the proposed ranking adaptability measure, which quantitatively estimates whether an existing ranking model can be adapted to the new domain and predicts the potential performance of the adaptation. We address the second problem within the regularization framework, and a ranking adaptation SVM (RA-SVM) algorithm is proposed. Our algorithm performs black-box ranking model adaptation: it needs only the predictions from the existing ranking model, rather than the internal representation of the model itself or the data from the auxiliary domains. With this black-box adaptation property, we achieve not only flexibility but also efficiency. To resolve the third problem, we assume that documents similar in their domain-specific feature space should have consistent rankings.

Advantages:
1. Model adaptation.
2. Reducing the labeling cost.
3. Reducing the computational cost.

MODULE DESCRIPTION:

Number of Modules

After careful analysis, the system has been identified to have the following modules:

1. Ranking Adaptation Module.
2. Explore Ranking Adaptability Module.
3. Ranking Adaptation with Domain-Specific Search Module.
4. Ranking Support Vector Machine Module.

1. Ranking Adaptation Module:
Ranking adaptation is closely related to classifier adaptation, which has shown its effectiveness for many learning problems. However, ranking adaptation is comparatively more challenging. Unlike classifier adaptation, which mainly deals with binary targets, ranking adaptation has to adapt a model that predicts rankings over collections of documents, and the relevance levels in different domains are sometimes different and need to be aligned. In this module we adapt ranking models learned for the existing broad-based search, or for some verticals, to a new domain, so that the amount of labeled data in the target domain is reduced while the performance requirement is still guaranteed; we also consider how to adapt the ranking model effectively and efficiently, and how to utilize domain-specific features to further boost the model adaptation.

2. Explore Ranking Adaptability Module:
The ranking adaptability measurement is obtained by investigating the correlation between two ranking lists of a labeled query in the target domain, i.e., the one predicted by the auxiliary model fa and the ground-truth one labeled by human judges. Intuitively, if the two ranking lists have a high positive correlation, the auxiliary ranking model fa coincides with the distribution of the corresponding labeled data, and we can therefore believe that it possesses high ranking adaptability towards the target domain, and vice versa. This is because the labeled queries are randomly sampled from the target domain for the model adaptation and can reflect the distribution of the data in the target domain.
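As an illustration of the rank-correlation idea behind this measurement (the paper's exact adaptability score may be defined differently), the hedged Java sketch below computes Kendall's tau between the ranking predicted by the auxiliary model and the ground-truth relevance labels of one query; values close to +1 suggest high adaptability. The class, method, and array names are hypothetical.

// Illustrative sketch: Kendall's tau between two score lists for one query.
// 'auxScores[i]' is the auxiliary model's prediction for document i and
// 'labels[i]' is the human-judged relevance; both arrays are hypothetical inputs.
public final class AdaptabilityCheck {

    /** Returns Kendall's tau in [-1, 1]; +1 means the two orderings fully agree. */
    public static double kendallTau(double[] auxScores, double[] labels) {
        int concordant = 0, discordant = 0;
        for (int i = 0; i < auxScores.length; i++) {
            for (int j = i + 1; j < auxScores.length; j++) {
                double a = Math.signum(auxScores[i] - auxScores[j]);
                double b = Math.signum(labels[i] - labels[j]);
                if (a * b > 0) concordant++;        // pair ordered the same way
                else if (a * b < 0) discordant++;   // pair ordered oppositely
                // ties in either list are skipped in this simple variant
            }
        }
        int comparable = concordant + discordant;
        return comparable == 0 ? 0.0 : (concordant - discordant) / (double) comparable;
    }

    public static void main(String[] args) {
        double[] aux   = {2.1, 0.4, 1.3, 3.0};   // predictions of the auxiliary model fa
        double[] truth = {3.0, 1.0, 2.0, 4.0};   // human relevance labels
        System.out.println("tau = " + kendallTau(aux, truth));  // prints 1.0 here
    }
}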

3. Ranking Adaptation with Domain-Specific Search Module:
Data from different domains are also characterized by some domain-specific features; e.g., when we adopt the ranking model learned from the Web page search domain in the image search domain, the image content can provide additional information to facilitate the text-based ranking model adaptation. In this module, we discuss how to utilize these domain-specific features, which are usually difficult to translate into textual representations directly, to further boost the performance of the proposed RA-SVM. The basic idea of our method is to assume that documents with similar domain-specific features should be assigned similar ranking predictions. We name this the consistency assumption, which implies that a robust textual ranking function should produce relevance predictions that are consistent with the domain-specific features.

4. Ranking Support Vector Machine Module:
Ranking Support Vector Machines (Ranking SVM) is one of the most effective learning-to-rank algorithms and is employed here as the basis of our proposed algorithm. The proposed RA-SVM does not need the labeled training samples from the auxiliary domain, but only its ranking model fa. Such a method is more advantageous than data-based adaptation, because the training data from the auxiliary domain may be missing or unavailable due to copyright protection or privacy issues, whereas the ranking model is comparatively easier to obtain and access.
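For reference, a schematic form of the pairwise Ranking SVM objective is given below (in LaTeX); the adaptation idea described in this project additionally keeps the adapted function close to the auxiliary model fa. The exact weighting and notation used in the paper may differ, so treat this as an illustrative sketch rather than the precise formulation.

% Pairwise Ranking SVM (schematic): for every pair (i, j) where document i is
% labeled more relevant than document j for the same query,
\min_{\mathbf{w},\,\xi}\;\; \tfrac{1}{2}\lVert\mathbf{w}\rVert^{2} + C\sum_{(i,j)}\xi_{ij}
\quad\text{s.t.}\quad \mathbf{w}^{\top}\mathbf{x}_{i} \;\ge\; \mathbf{w}^{\top}\mathbf{x}_{j} + 1 - \xi_{ij}, \qquad \xi_{ij}\ge 0.

% Adaptation flavour (illustrative): trade off fitting the new domain against
% staying close to the auxiliary ranking function f^{a}, e.g.
\min_{f,\,\xi}\;\; \tfrac{1-\delta}{2}\lVert f\rVert^{2} + \tfrac{\delta}{2}\lVert f - f^{a}\rVert^{2} + C\sum_{(i,j)}\xi_{ij}, \qquad 0\le\delta\le 1.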


SOFTWARE REQUIREMENTS:
          Operating System : Windows
          Technology : Java and J2EE
          Web Technologies : HTML, JavaScript, CSS
          IDE : MyEclipse
          Web Server : Tomcat
          Tool Kit : Android Phone
          Database : MySQL
          Java Version : J2SDK 1.5
                
HARDWARE REQUIREMENTS:
         Hardware : Pentium
         Speed : 1.1 GHz
         RAM : 1 GB
         Hard Disk : 20 GB
         Floppy Drive : 1.44 MB
         Keyboard : Standard Windows Keyboard
         Mouse : Two or Three Button Mouse
         Monitor : SVGA




Decision Trees for Uncertain Data

Abstract:
Traditional decision tree classifiers work with data whose values are known and precise. We extend such classifiers to handle data with uncertain information. Value uncertainty arises in many applications during the data collection process. Example sources of uncertainty include measurement/quantization errors, data staleness, and multiple repeated measurements. With uncertainty, the value of a data item is often represented not by one single value, but by multiple values forming a probability distribution. Rather than abstracting uncertain data by statistical derivatives (such as mean and median), we discover that the accuracy of a decision tree classifier can be much improved if the “complete information” of a data item (taking into account the probability density function (pdf)) is utilised. We extend classical decision tree building algorithms to handle data tuples with uncertain values. Extensive experiments have been conducted which show that the resulting classifiers are more accurate than those using value averages. Since processing pdfs is computationally more costly than processing single values (e.g., averages), decision tree construction on uncertain data is more CPU demanding than that for certain data. To tackle this problem, we propose a series of pruning techniques that can greatly improve construction efficiency.



Existing System:
                In traditional decision-tree classification, a feature (an attribute) of a tuple is either categorical or numerical. For the latter, a precise and definite point value is usually assumed. In many applications, however, data uncertainty is common. The value of a feature/attribute is thus best captured not by a single point value, but by a range of values giving rise to a probability distribution. Although the previous techniques can improve the efficiency of means, they do not consider the spatial relationship among cluster representatives, nor make use of the proximity between groups of uncertain objects to perform pruning in batch. A simple way to handle data uncertainty is to abstract probability distributions by summary statistics such as means and variances. We call this approach Averaging. Another approach is to consider the complete information carried by the probability distributions to build a decision tree. We call this approach Distribution-based.

Proposed System:
We study the problem of constructing decision tree classifiers on data with uncertain numerical attributes. Our goals are (1) to devise an algorithm for building decision trees from uncertain data using the Distribution-based approach; (2) to investigate whether the Distribution-based approach could lead to a higher classification accuracy compared with the Averaging approach; and (3) to establish a theoretical foundation on which pruning techniques are derived that can significantly improve the computational efficiency of the Distribution-based algorithms.
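To make the Distribution-based idea concrete, the hedged Java sketch below shows how a tuple whose attribute is given as a discretized pdf (sample points with probabilities) can be split at a test threshold: part of its probability mass, and hence a fractional weight, goes to each child, whereas Averaging would collapse the pdf to its mean. The class, method, and variable names are hypothetical illustrations, not the paper's actual code.

// Illustrative sketch: splitting an uncertain (pdf-valued) attribute at a threshold.
// The pdf is represented by hypothetical sample points and their probabilities.
public final class UncertainSplit {

    /** Fraction of the tuple's probability mass with attribute value <= threshold. */
    static double massGoingLeft(double[] samplePoints, double[] probabilities, double threshold) {
        double left = 0.0;
        for (int k = 0; k < samplePoints.length; k++) {
            if (samplePoints[k] <= threshold) {
                left += probabilities[k];
            }
        }
        return left;  // the tuple goes to the left child with this weight,
                      // and to the right child with weight (1 - left)
    }

    public static void main(String[] args) {
        // A tuple whose attribute is uncertain: value 4.0, 5.0 or 6.0 with given probabilities.
        double[] points = {4.0, 5.0, 6.0};
        double[] probs  = {0.2, 0.5, 0.3};
        double wLeft = massGoingLeft(points, probs, 5.0);
        System.out.printf("left weight = %.2f, right weight = %.2f%n", wLeft, 1.0 - wLeft);

        // Averaging would instead collapse the pdf to its mean before splitting:
        double mean = 0.0;
        for (int k = 0; k < points.length; k++) mean += points[k] * probs[k];
        System.out.println("mean used by Averaging = " + mean);
    }
}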

Advantages:
Ø Estimates (e.g., budget, schedule) become more realistic as work progresses, because important issues are discovered earlier.
Ø It is better able to cope with the changes that software development generally entails.
Ø Software engineers can get their hands in and start working on the core of a project earlier.
Software Requirements:
Ø Operating System : Windows XP
Ø Coding Language : Java, Swing, RMI, J2ME (Wireless Toolkit)
Ø Tool Used : Eclipse 3.3

Hardware Requirements:
Ø System : Pentium IV 2.4 GHz
Ø Hard Disk : 250 GB
Ø Monitor : 15" VGA Colour
Ø Mouse : Logitech
Ø RAM : 2 GB

A Geometric Approach to Improving Active Packet Loss Measurement

Abstract

Measurement and estimation of packet loss characteristics are challenging due to the relatively rare occurrence and typically short duration of packet loss episodes. While active probe tools are commonly used to measure packet loss on end-to-end paths, there has been little analysis of the accuracy of these tools. The objective of our study is to understand how to measure packet loss episodes accurately with end-to-end probes. Studies show that the accuracy of standard Poisson-modulated end-to-end measurement of packet loss has to be improved. Thus, we introduce a new algorithm for packet loss measurement that is designed to overcome the deficiencies in standard Poisson-based tools. Specifically, our method entails probe experiments that follow a geometric distribution to enable more accurate measurements than standard Poisson probing and other traditional packet loss measurement tools. We also measure the transfer rate. We evaluate the capabilities of our methodology experimentally by developing and implementing a prototype tool called BADABING. The experiments demonstrate the trade-offs between impact on the network and measurement accuracy. BADABING reports loss characteristics far more accurately than traditional loss measurement tools.

Introduction
Measuring and analyzing network traffic dynamics between end hosts has provided the foundation for the development of many different network protocols and systems. Of particular importance is understanding packet loss behavior, since loss can have a significant impact on the performance of both TCP- and UDP-based applications. Despite the efforts of network engineers and operators to limit loss, it will probably never be eliminated, due to the intrinsic dynamics and scaling properties of traffic in packet-switched networks. Network operators have the ability to passively monitor nodes within their network for packet loss on routers using SNMP. End-to-end active measurements using probes provide an equally valuable perspective, since they indicate the conditions that application traffic is experiencing on those paths.

Our study involves the empirical evaluation of our new loss measurement methodology. To this end, we developed a one-way active measurement tool called BADABING. BADABING sends fixed-size probes at specified intervals from one measurement host to a collaborating target host. The target system collects the probe packets and reports the loss characteristics after a specified period of time. We also compare BADABING with a standard tool for loss measurement that emits probe packets at Poisson intervals. The results show that our tool reports loss episode estimates much more accurately for the same number of probes. We also show that BADABING estimates converge to the underlying loss episode frequency and duration characteristics.
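As an illustrative aside (not the authors' actual implementation), probing at geometrically distributed intervals can be realized by making an independent send/no-send decision in every discrete time slot: the gaps between probes are then geometrically distributed. A minimal Java sketch of such a scheduler, with hypothetical parameter names, is shown below.

// Illustrative sketch: a Bernoulli decision per slot gives geometric inter-probe gaps.
import java.util.Random;

public final class GeometricProber {

    public static void main(String[] args) throws InterruptedException {
        double sendProbability = 0.3;   // hypothetical per-slot probe probability p
        long slotMillis = 10;           // hypothetical slot length
        int slots = 100;                // number of slots to simulate
        Random rng = new Random();

        for (int slot = 0; slot < slots; slot++) {
            if (rng.nextDouble() < sendProbability) {
                // In a real tool this would emit a fixed-size probe to the target host.
                System.out.println("slot " + slot + ": send probe");
            }
            Thread.sleep(slotMillis);   // wait for the next time slot
        }
    }
}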

Modules of the Project:
·        Packet Separation
·        Designing the Queue
·        Packet Receiver
·        User Interface Design
·        Packet Loss Calculation

Module Description

Packet Separation:
          In this module we have to separate the input data into packets. These packets are then sent to the Queue.

Designing the Queue:
          The Queue is designed in order to create the packet loss. The queue receives the packets from the Sender, creates the packet loss and then sends the remaining packets to the Receiver.

Packet Receiver:
          The Packet Receiver is used to receive the packets from the Queue after the packet loss. Then the receiver displays the received packets from the Queue.

User Interface Design:
          In this module we design the user interface for Sender, Queue, Receiver and Result displaying window. These windows are designed in order to display all the processes in this project.

Packet Loss Calculation:
          The calculations to find the packet loss are done in this module. Thus we are developing the tool to find the packet loss.
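A hedged sketch of the calculation performed in this module is shown below: given the sequence numbers that were sent and those that arrived at the receiver, the loss rate is the fraction of probes that never arrived. The class and method names are illustrative only.

// Illustrative sketch: packet loss rate from sent vs. received sequence numbers.
import java.util.HashSet;
import java.util.Set;

public final class LossCalculator {

    /** Returns the fraction of sent sequence numbers that were never received. */
    static double lossRate(int[] sentSeq, int[] receivedSeq) {
        Set<Integer> received = new HashSet<>();
        for (int seq : receivedSeq) received.add(seq);

        int lost = 0;
        for (int seq : sentSeq) {
            if (!received.contains(seq)) lost++;
        }
        return sentSeq.length == 0 ? 0.0 : (double) lost / sentSeq.length;
    }

    public static void main(String[] args) {
        int[] sent = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
        int[] got  = {1, 2, 4, 5, 7, 8, 9, 10};       // probes 3 and 6 were dropped
        System.out.println("loss rate = " + lossRate(sent, got));  // prints 0.2
    }
}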

Existing System:
·        In existing traditional packet loss measurement tools, the accuracy of packet loss measurement needs to be improved.

·        Several studies estimate packet loss using Poisson-modulated probe tools, which can be quite inaccurate.

Proposed System:
·        The purpose of our study is to understand how to measure end-to-end packet loss characteristics accurately.

·        The goal of our study is to understand how to accurately measure loss characteristics on end-to-end paths with probes.

·        Specifically, our method entails probe experiments that follow a geometric distribution to improve the accuracy of the packet loss measurement.


System Requirements
Hardware:
PROCESSOR        :    PENTIUM IV 2.6 GHz
RAM                      :    512 MB
MONITOR            :    15”
HARD DISK         :    20 GB
CD DRIVE            :    52X
KEYBOARD         :   STANDARD 102 KEYS

Software:
FRONT END                 :    JAVA, SWING
TOOLS USED               :    JFRAME BUILDER
OPERATING SYSTEM:    WINDOWS XP

Conclusion:

          Thus, our project implements a tool named BADABING to measure packet loss accurately, estimating end-to-end packet loss characteristics such as the transfer rate of packets per second and the probability of packets being lost in a network, within a set of active probes. Specifically, our method entails probe experiments that follow a geometric distribution to enable more accurate measurements than standard Poisson probing and other traditional packet loss measurement tools.

Minimizing File Download Time in Stochastic Peer-to-Peer Networks


Abstract:
Peer-to-peer (P2P) file-sharing applications are becoming increasingly popular and account for more than 70% of the Internet's bandwidth usage. Measurement studies show that a typical download of a file can take from minutes up to several hours, depending on the level of network congestion or the service capacity fluctuation. In this paper, we consider two major factors that have a significant impact on average download time, namely, the spatial heterogeneity of service capacities in different source peers and the temporal fluctuation in service capacity of a single source peer. We point out that the common approach of analyzing the average download time based on average service capacity is fundamentally flawed. We rigorously prove that both spatial heterogeneity and temporal correlations in service capacity increase the average download time in P2P networks, and then analyze a simple, distributed algorithm to effectively remove these negative factors, thus minimizing the average download time. We show through analysis and simulations that it outperforms most of the other algorithms currently used in practice under various network configurations.



Existing system:
PEER-TO-PEER (P2P) technology is heavily used for content distribution applications. The early model for content distribution is a centralized one, in which the service provider simply sets up a server and every user downloads files from it. In this type of network architecture (server-client), many users have to compete for limited resources in terms of bottleneck bandwidth or processing power of a single server. As a result, each user may receive very poor performance. From a single user’s perspective, the duration of a download session, or the download time for that individual user is the most often used performance metric.
However, there have been very few results on minimizing the download time for each user in a P2P network. In recent work, the problem of minimizing the download time is formulated as an optimization problem by maximizing the aggregated service capacity over multiple simultaneous active links (parallel connections) under some global constraints. There are two major issues in this approach. One is that global information about the peers in the network is required, which is not practical in the real world. The other is that the analysis is based on averaged quantities, e.g., the average capacities of all possible source peers in the network. The approach of using the average service capacity to analyze the average download time has been a common practice in the literature.

Proposed system:
In this paper, we first characterize the relationship between the heterogeneity in service capacity and the average download time for each user, and show that the degree of diversity in service capacities has a negative impact on the average download time. After we formally define the download time over a stochastic capacity process, we prove that the correlations in the capacity make the average download time much larger than the commonly accepted value F/c̄, where F is the file size and c̄ is the average capacity of the source peer. It is thus obvious that the average download time will be reduced if there exists a (possibly distributed) algorithm that can efficiently eliminate the negative impact of both the heterogeneity in service capacities over different source peers and the correlations in time of a given source peer.
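The intuition for why an analysis based on the average capacity understates the real download time can be sketched with Jensen's inequality (an illustrative aside, writing F for the file size, C for the random service capacity, and T for the download time):

% Since 1/c is convex in c, Jensen's inequality gives
\mathbb{E}[T] \;=\; \mathbb{E}\!\left[\frac{F}{C}\right] \;\ge\; \frac{F}{\mathbb{E}[C]},
% and the gap grows as the capacity becomes more heterogeneous across peers
% or more strongly correlated over time.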

In practice, most P2P applications try to reduce the download time by minimizing the risk of getting stuck with a 'bad' source peer (a connection with small service capacity), by using smaller file sizes and/or having them downloaded from different source peers (e.g., parallel download). In other words, they try to reduce the download time by minimizing the bytes transferred from the source peer with small capacity. However, we show in this paper that this approach cannot effectively remove the negative impact of both the correlations in the available capacity of a source peer and the heterogeneity across different source peers. This approach may help to reduce the average download time in some cases, but not always. Rather, a simple and distributed algorithm that limits the amount of time each peer spends on a bad source peer can minimize the average download time for each user in almost all cases, as we will show in our paper. Through extensive simulations, we also verify that the simple download strategy outperforms all other schemes widely used in practice under various network configurations. In particular, both the average download time and the variation in download time of our scheme are smaller than those of any other scheme when the network is heterogeneous (possibly correlated) and many downloading peers coexist with source peers, as is the case in reality.


Modules:

Parallel Downloading
The file is divided into k chunks of equal size, and k simultaneous connections are used. The client downloads the file from k peers at a time, and each peer sends one chunk to the client.

Random Chunk-Based Downloading
The file is divided into many chunks, and the user downloads chunks sequentially, one at a time. Whenever the user completes a chunk from its current source peer, the user randomly selects a new source peer and connects to it to retrieve a new chunk. Switching source peers on a per-chunk basis can reduce the average download time.

Random Periodic Switching
The file is divided into many chunks, and the user downloads chunks sequentially, one at a time. The client randomly chooses a source peer at each time slot and downloads chunks from that peer during the given time slot.
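A minimal, hedged Java sketch of the random periodic switching strategy follows; the peer names, chunk counts, and timing are hypothetical placeholders rather than the project's actual code.

// Illustrative sketch: at every time slot, pick a source peer uniformly at random
// and download one chunk's worth of data from it.
import java.util.Random;

public final class RandomPeriodicSwitching {

    public static void main(String[] args) {
        String[] sourcePeers = {"peerA", "peerB", "peerC"};  // hypothetical source peers
        int totalChunks = 12;                                // chunks left to download
        Random rng = new Random();

        int slot = 0;
        while (totalChunks > 0) {
            // Re-choose the source peer at the start of every time slot,
            // which limits the time wasted on a temporarily 'bad' peer.
            String peer = sourcePeers[rng.nextInt(sourcePeers.length)];
            System.out.println("slot " + slot + ": downloading chunk from " + peer);
            totalChunks--;   // assume one chunk completes per slot in this toy model
            slot++;
        }
    }
}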


The implementation requires the following resources:

Hardware requirements:
 Pentium processor, 1 GB RAM

Software requirements:
   JDK 5.0, Java Swing

Two Techniques for Fast Computation of Constrained Shortest Paths

Abstract:
         A major obstacle to implementing distributed multimedia applications, such as web broadcasting, video teleconferencing, and remote diagnosis, is the difficulty of ensuring quality of service (QoS) over the Internet. A fundamental problem present in many important network functions, such as QoS routing, MPLS path selection, and traffic engineering, is to find the constrained shortest path that satisfies a set of constraints. For interactive real-time traffic, the delay-constrained least-cost path is important.

Existing system:
         Finding the cheapest (least-cost) feasible path is NP-complete. There has been considerable work on designing heuristic solutions for this problem. Xue and Juttner used the Lagrange relaxation method to approximate the delay-constrained least-cost routing problem; however, there is no theoretical bound on how large the cost of the found path can be. Korkmaz and Krunz used a nonlinear target function to approximate the multi-constrained least-cost path problem; however, no known algorithm can find such a path in polynomial time. Another heuristic algorithm has the same time complexity as Dijkstra's algorithm, but it provides neither a theoretical bound on the property of the returned path nor a conditional guarantee of finding a feasible path when one exists. In addition, because the construction of the algorithm is tied to a particular destination, it is not suitable for computing constrained paths from one source to all destinations. For this task, it is slower than the algorithms proposed in this paper by two orders of magnitude, based on our simulations.

Proposed system:
         A path that satisfies the delay requirement is called a feasible path. Computing constrained shortest paths is fundamental to some important network functions such as QoS routing, MPLS path selection, ATM circuit routing, and traffic engineering. The problem is to find the cheapest path that satisfies certain constraints. In particular, finding the cheapest delay-constrained path is critical for real-time data flows such as voice and video calls. Finding the cheapest feasible path is NP-complete. We propose two techniques, randomized discretization and path delay discretization, which reduce the discretization errors and allow faster algorithms to be designed. The randomized discretization cancels out link errors along a path. The path delay discretization works on the path delays instead of the individual link delays, which eliminates the problem of error accumulation. Based on these techniques, we design fast algorithms to solve the approximation of the constrained shortest path problem.

Modules:
        Topology construction:
In this module we construct a topology with the following steps: initializing the number of nodes, giving names to those nodes, initializing the port numbers for a particular node, and providing the host name.

Node information:
In this module we provide the links between the initialized nodes, and we assign a cost to each link. We check that there are no multiple links between the same pair of nodes. A cost specification is given for every link.
Available path:
In this module we obtain the total number of available paths for the particular topology. The steps involved in this process are calculating the number of nodes, calculating the number of paths for a particular set of nodes, and processing those paths when the particular set of nodes is chosen. This process also calculates the aggregate cost and delay for concurrent paths.

Discretization:
In this module we apply the discretization algorithms in order to approximate the aggregated delay and cost values for the specified paths. The steps involved in this process are getting the aggregate values of the path and applying the discretization algorithms to those values; the discretization algorithms are round-to-ceiling, round-to-floor, randomized discretization, and path delay discretization. This process happens only when a node decides to transmit.
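The following hedged Java sketch illustrates the rounding options named above for a single link delay: round-to-floor, round-to-ceiling, and a randomized rounding whose errors tend to cancel along a path. The interval size and method names are illustrative assumptions, not the paper's code.

// Illustrative sketch: discretizing a real-valued link delay into integer units.
import java.util.Random;

public final class DelayDiscretization {

    static final Random RNG = new Random();

    static int roundToFloor(double delay, double unit)   { return (int) Math.floor(delay / unit); }
    static int roundToCeiling(double delay, double unit) { return (int) Math.ceil(delay / unit); }

    /** Randomized discretization: round up with probability equal to the fractional part,
     *  so the expected rounded value equals delay/unit and errors cancel along a path. */
    static int roundRandomized(double delay, double unit) {
        double scaled = delay / unit;
        double frac = scaled - Math.floor(scaled);
        return (int) Math.floor(scaled) + (RNG.nextDouble() < frac ? 1 : 0);
    }

    public static void main(String[] args) {
        double unit = 2.0;        // hypothetical discretization interval
        double linkDelay = 7.3;   // hypothetical link delay
        System.out.println("floor   : " + roundToFloor(linkDelay, unit));     // 3
        System.out.println("ceiling : " + roundToCeiling(linkDelay, unit));   // 4
        System.out.println("random  : " + roundRandomized(linkDelay, unit));  // 3 or 4
    }
}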

Message transmission:
In this module the source node chooses the destination and the method of discretization for sending its message along the best available path. Once the client composes and sends the message, it learns the available paths, the best path among them, and the details of that particular path.

The implementation requires following resources:

Hardware requirements:
Pentium processor,  1GB RAM 
Software requirements:
JDK 5.0, Java Swing, Microsoft SQL Server.

Dynamic Search Algorithm in Unstructured Peer-to-Peer Networks

Abstract:

In unstructured peer-to-peer networks, each node has no global information about the whole topology or the location of other nodes. Because of the dynamic nature of unstructured P2P networks, capturing global behavior is also difficult. Search algorithms are used to locate the queried resources and to route the message to the target node. Flooding and random walk (RW) are two typical examples of blind search algorithms, by which query messages are sent to neighbors without any knowledge about the possible locations of the queried resources or any preference for the directions in which to send them. Neither algorithm alone is well suited to routing a message to a target. The proposed algorithm is dynamic search (DS), which is a generalization of flooding and RW. Dynamic search can use knowledge-based search mechanisms, so each node can relay query messages more intelligently to reach the target node.
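A minimal, hedged Java sketch of the flooding-then-random-walk idea behind DS is shown below: within a decision threshold of n hops the query is forwarded to all neighbors, and beyond n hops it is forwarded to a single randomly chosen neighbor. The parameter names and neighbor representation are illustrative assumptions.

// Illustrative sketch: a DS-style forwarding decision at one node.
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

public final class DynamicSearchForwarding {

    static final Random RNG = new Random();

    /** Neighbors the query should be relayed to, given how many hops it has traveled. */
    static List<String> chooseNextHops(List<String> neighbors, int hopCount, int thresholdN) {
        if (hopCount <= thresholdN) {
            // Short-term phase: behave like flooding and cover many nodes quickly.
            return new ArrayList<>(neighbors);
        }
        // Long-term phase: behave like a random walk and relay to one neighbor only.
        if (neighbors.isEmpty()) return Collections.emptyList();
        return Collections.singletonList(neighbors.get(RNG.nextInt(neighbors.size())));
    }

    public static void main(String[] args) {
        List<String> neighbors = List.of("n1", "n2", "n3");
        System.out.println("hop 1: " + chooseNextHops(neighbors, 1, 2));  // floods to all
        System.out.println("hop 5: " + chooseNextHops(neighbors, 5, 2));  // one random neighbor
    }
}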
Existing System:

Designing efficient search algorithms is a key challenge in unstructured peer-to-peer networks, where search algorithms are needed to locate the queried resources and to route the message to the target node.
  • Flooding and random walk (RW) are two typical search algorithms.
  • Flooding searches aggressively and covers the most nodes. Flooding is a breadth-first-style search; it generates a large number of query messages but takes a short search time.
  • RW searches conservatively. RW is a depth-first-style search; it generates only a fixed number of query messages at each hop but takes a longer search time.

Disadvantage

  • The main drawback of flooding is its search cost: it does not scale.
  • It produces many query messages even when the resource distribution is scarce.
  • Since RW visits only one node per hop, the coverage of RW grows linearly with the hop count, which is slow.

Proposed System:
  • We propose the dynamic search (DS) algorithm, which is a generalization of flooding and RW.
  • It resembles flooding for short-term search and RW for long-term search.
  • DS could be further combined with knowledge-based search mechanisms to improve the search performance.
  • The performance of DS is evaluated with metrics including the success rate, search time, query hits, query messages, query efficiency, and search efficiency.
  • Numerical results show that DS provides a good tradeoff between search performance and cost.

Advantages:
  • DS performs about 25 times better than flooding and 58 times better than RW in power-law graphs.
  • DS performs about 186 times better than flooding and 120 times better than RW in bimodal topologies.
  • DS reduces search cost and time and improves performance.

Modules and Description
Modules
·         Peer Request
In this module a peer system requests a connection from a superpeer; the peer has to establish a connection with one superpeer for communication.

·         Super peer Response
In this module the superpeer has to send the response to the particular peer according to the capacity and request.

·         Upload
In this module, the peer sends its IP address and port number along with its file information.

·         Super peer updating
In this module, the superpeer maintains its database and also handles peer requests, so database updating and maintenance are important. If any peer or superpeer asks for a file, the superpeer has to check its own database; if the file is not found there, it has to ask its neighboring superpeers until the file is located.

·         File request
In this module, the peer requests a file (e.g., a Word document) from the main server, i.e., the superpeer. The superpeer checks for the requested file in its database and, if it is found, sends back the corresponding port number.

·         Updating probability table
In this module, the superpeer updates the probability table using the dynamic search algorithm. If a search query for a file is delivered to a certain peer successfully, the probability value corresponding to that peer is increased; if the search finally fails, the probability value is decreased. A minimal sketch of this update is given after the module list below.

·         Response
In this module, the peer receives the IP address and port number corresponding to the file information. The peer will then communicate with that IP/port for future transfers.
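As mentioned in the probability-table module above, a hedged Java sketch of the success/failure update follows; the table structure, step size, and bounds are illustrative assumptions rather than the project's actual code.

// Illustrative sketch: adjusting a per-peer forwarding probability after a search.
import java.util.HashMap;
import java.util.Map;

public final class ProbabilityTable {

    private final Map<String, Double> table = new HashMap<>();
    private static final double STEP = 0.1;   // hypothetical learning step

    double get(String peer) { return table.getOrDefault(peer, 0.5); }

    /** Increase the value on success, decrease it on failure, keeping it in [0, 1]. */
    void update(String peer, boolean searchSucceeded) {
        double p = get(peer) + (searchSucceeded ? STEP : -STEP);
        table.put(peer, Math.min(1.0, Math.max(0.0, p)));
    }

    public static void main(String[] args) {
        ProbabilityTable t = new ProbabilityTable();
        t.update("peer42", true);    // a query via peer42 succeeded
        t.update("peer42", true);
        t.update("peer7", false);    // a query via peer7 failed
        System.out.println("peer42 -> " + t.get("peer42"));  // 0.7
        System.out.println("peer7  -> " + t.get("peer7"));   // 0.4
    }
}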

System Requirements

Hardware:
·         PROCESSOR        :          PENTIUM IV 2.6 GHz
·         RAM                                  :          512 MB DDR RAM
·         MONITOR                        :          15” COLOR
·         HARD DISK                     :          20 GB
·         FLOPPY DRIVE              :          1.44 MB
·         CD DRIVE                        :          LG 52X
·         KEYBOARD                    :          STANDARD 102 KEYS
·         MOUSE                             :          3 BUTTONS

Software:
·         Front End               :          Java, Swing
·         Back End               :          MS Access
·         Tools Used             :          NetBeans  IDE 6.1
·         Operating System   :         Windows XP
