


This paper investigates the groundwork for the network resource optimization to be carried out in the framework of a Publish/Subscribe (PUBSUB) network [psirp]. The work forms part of a project to be undertaken in the Summer Term (2010-2011) in fulfilment of a Masters degree at the University of Essex. As the project title, “Lightpaths in Publish/Subscribe Internet Model”, suggests, the work focuses on developing strategies for optimal utilization of the optical network so that it reflects the data flows and decisions made at the routing layer of an information-centric network (ICN). Since the project draws on two different networking notions, the pub/sub ICN model and optical networking, this paper reviews the background of both fields, argues why they are viable candidates for the future internet, and explains where the proposed work fits into the bigger picture.

Since the ARPANET of the 1970s [isoc], the internet has undergone immense transformation. Internet traffic is growing not just in volume but also in the variety of applications it supports, e.g. triple/quadruple-play services (voice, video, data), and it is accessed in many forms, from fixed landline connections to WiFi hotspots. Key market players such as Cisco predict that data-hungry applications like video will remain at the heart of internet usage and will contribute the majority of internet revenues [cisco]. While service providers see strong earning opportunities, they face the challenge of keeping customers satisfied while making optimal use of network resources to serve more of them. Progress in DWDM and EDFA technologies has spurred the desire for all-optical networks [alca][cam]. A number of networking bodies today are working on efficient all-optical solutions, which are gradually reaching the market to leverage the very high transport capacity they offer (in terabits/s) [rat].

Though optical transport networks (OTN) relieve service providers of the capacity constraint, providers still face problems managing the IP layer, causing potential performance bottlenecks. Blumenthal et al. [blue] shed light on some of these problems, such as the host-centric design, i.e. more focus on host-to-host connectivity than on the information being delivered. This imposes considerable state-keeping overhead on multicast services such as news, IPTV and BBC iPlayer [marco2], and the extra control information consumes data bandwidth. The design by default favours the sender, who has the power to disseminate content to any desired hosts; this leads to unnecessary traffic along with the possibility of untrustworthy content being received. Security and mobility were added only as afterthoughts [msc]. Attempts to overcome these problems include the move to IP version 6, the New Internet Routing Architecture (NIRA), the Translating Relaying Internet Architecture integrating Active Directories (TRIAD) and Routing on Flat Labels (ROFL) [msc], but all of these solutions are still built on the underlying IP substrate. Networking experts across the world (Van Jacobson, David Clark, Dirk Trossen) [tow][arg][blue] are calling for green-field efforts to redesign the internet with information at the centre of the design, and envision this as the internet of the future.

This project focuses on deriving optimal traffic-handling strategies for the optical layer in the context of a content-centric network (CCN). The work includes building simulations of various network scenarios, such as different topologies and data characteristics, and verifying them on a test-bed. This paper explains, in turn, the driving factors and motivation behind the work and its economic and commercial benefits. The proposal section describes the structure, scope and methodology of the project; the work plan breaks the project into tasks and uses a Gantt chart to show how they are placed in time. Finally, the paper concludes by summarising the outcomes of the planning and background study.

Contextual Review

The contextual review illustrates the technical benefits of this project and covers other work done, or in progress, in this area. It also discusses the economic impact the work will have and tries to foresee the markets it may help.

Technical Review

The body of this project rests on two pillars: the pub/sub networking model and optical networking. The project benefits greatly from earlier work in these areas. As work on ICN is still in the research phase, it makes sense to look at the technical factors driving it and to review optical networking in that context. The sections below elucidate, one by one, the driving factors behind these fields, their advantages, and the gains from combining them.

Motivation behind Optical Networking

Thanks to advances in DWDM and EDFA, more wavelengths of light can be injected into a fibre, increasing its capacity tremendously, into the terabit range [rat]. Research into optical network elements is extending their reach so that longer distances can be covered without amplifiers, reducing the number of network elements and points of failure in the network.

Having multiple wavelengths in the fibre facilitates on-demand lightpath creation (using OADMs), allowing effective on-the-fly bandwidth management [rat][marco1]. However, changing the network dynamically is a risky task and needs careful control. O-E-O switches allow the demarcation of control and data planes, yielding greater speed and flexibility in the data forwarding plane, which is controlled by, but decoupled from, the routing layer [marco1]. The concept is similar to MPLS, but since network owners are unwilling to scrap already-deployed equipment before recouping their investment, Generalized MPLS (GMPLS) plays an important role, as its forwarding tables can be shared by multiple forwarding fabrics. Eiji Oki et al. [oki] have made efforts to engineer IP and optical networks jointly using GMPLS. Their work is closest to what this paper proposes, but here in the framework of CCN; Oki also discusses the concept of traffic grooming, which is very relevant.

Work by Marco et al. [marco1][marco2] experiments with optical switching based on various IP properties; in [marco1], IP packets heading to identical destinations are grouped and switched together, with switching applied to prolonged, large IP flows. In [marco2], Optical Flow Switching is explored, which switches IP traffic flows by dynamically setting up links. This is similar to the work proposed here, where switching decisions will be driven by the content and its properties.

A flow-switched optical network creates dynamic pass-through circuits at intermediate nodes so that data is forwarded from source to destination at the optical layer without ever needing to rise to the electrical layer; furthermore, identical flows can be groomed together [marco2]. This feature appeals to many equipment vendors and market players because of the economic benefit it offers. It takes load off the routing layer: there is no need to make per-hop decisions as in today's IP networks, and forwarding can be performed in hardware, which is faster than routing. This allows network operators to carry more customer traffic on the same infrastructure.

Motivation behind PUBSUB model

The work this paper presents is targeted at ICN. A number of network research bodies and market players (PURSUIT, PSIRP, CCNx) [psirp][ccnx][needed] are already working on ICN designs, and much work is being done in related areas. ICN addresses the problems faced by IP networks and also adds some new features of its own, as described below.

Information-centric approach – Applications are becoming more demanding, not just in the size and format of content (like video and VoIP) but also in timely delivery, and for service providers, managing the overload of control information and accessing domain-named services is becoming a challenge within the IP paradigm. Trossen [arg] points out that placing information at the centre of the design truly makes sense: if information is uniquely named and distributed, the middleware load is reduced and access becomes easier [arg].

Receiver-focussed design – Receivers have the power to choose what information they receive by subscribing only to it. This benefits both end users and network providers: it inherently reduces spam and the possibility of attacks at the user end, and results in more sensible use of the network infrastructure for providers [msc].

Security and Mobility – Security and mobility will be embedded into the architecture, unlike the add-ons of the IP suite. With the expected growth of mobile markets through 4G and the arrival of devices like smartphones, an embedded mobility solution is a great asset for mobile operators in handling their networks efficiently [ill][cisco].

Multicasting and Active Caching – In CCN, edge network nodes actively monitor the content being accessed and cache it if it is accessed frequently. This reduces redundant traffic through the core, allowing fairer utilization of the network [msc]. Multicasting is achieved through the innovative zFilter concept [ill], performed at the forwarding layer. This makes it fast, with most decisions taken off the routing layer, an attractive feature that simplifies network configuration.
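The zFilter idea from LIPSIN can be sketched as an in-packet Bloom filter: the IDs of the links on the delivery tree are ORed into one filter, and each node forwards on exactly those links whose ID is fully contained in it. The filter width, the number of bits per link ID and the hashing scheme below are illustrative assumptions, not PSIRP's actual parameters.

```python
import random

# In-packet Bloom filter sketch (after LIPSIN's zFilter). Parameters are
# illustrative: 256-bit filter, 5 bits set per link ID.
FILTER_BITS = 256

def make_link_id(seed: int) -> int:
    """Derive a sparse bit mask standing in for a link ID (hypothetical scheme)."""
    rng = random.Random(seed)
    mask = 0
    for _ in range(5):  # k = 5 bits set per link ID
        mask |= 1 << rng.randrange(FILTER_BITS)
    return mask

def build_zfilter(path_links):
    """OR together the link IDs of the delivery tree into one zFilter."""
    z = 0
    for lid in path_links:
        z |= lid
    return z

def should_forward(zfilter: int, link_id: int) -> bool:
    """A node forwards on a link iff all bits of the link ID are set in the filter."""
    return zfilter & link_id == link_id
```

Note that, as with any Bloom filter, an unrelated link ID may occasionally match by chance (a false positive), which is why real zFilter designs tune the filter width and bits-per-link carefully.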

Other work in progress – Apart from PSIRP, projects such as CCNx and 4WARD [ccnx][4ward] also put forward the notion of CCN for the future internet. CCNx retrieves desired content through hierarchical naming, while 4WARD seeks efficient ways to route data over heterogeneous networks [ill].

There are strong advantages to combining optical networks with the pub/sub model; for example, both favour local decision making over configuring end-to-end paths. A dynamic optical layer can relieve pressure at the routing layer for efficient content delivery, resulting in fairer use of the infrastructure [marco1].

Economical and Commercial Review

Unlike the earlier internet, designed by government bodies [isoc], the pub/sub work actively involves people from key market players such as BT, Ericsson and Xerox [ill][lipsin][ccnx], alongside research bodies and universities. This has two advantages: the practical problems these companies face can be addressed at the design level rather than patched in later, and when the researched work reaches actual deployment it will find ready acceptance from these industry players and their partners, a big plus from a commercial point of view.

The work directly affects companies in the content distribution network business, such as Akamai and Limelight Networks [cdn]. Inherent smart multicast and caching abilities open new opportunities for cost-effective data distribution.

Further, Trossen [driver] comments that the metadata databases in a CCN can be used to price specific services fairly, without burdening the data bandwidth with mechanisms such as deep packet inspection to differentiate between streams. Thus CCN may change the way the end user is charged.

Finally, it is worth mentioning that CCN routers consume less electrical energy than current IP-based content distribution strategies such as P2P or content distribution networks [green]. Concepts like caching reduce transit traffic, lowering energy consumption, and fewer O-E-O conversions save energy at intermediate nodes.


This project falls under PURSUIT [pursuit], the continuation of the PSIRP project. It will contribute to the forwarding-plane work of PUBSUB networks, implemented using O-E-O routers. Since PUBSUB uses optical networks underneath, the work is essentially optical traffic engineering, i.e. creating on-demand lightpaths in the network in order to make efficient use of resources. It can be explained with the figure below.


X, Y and Z are O-E-O routers; the inner circle shows the optical layer and the outer circle the electrical layer of the network. Traffic flows from X to Y on wavelength λ1, and some traffic also flows from X to Z on the same wavelength. After some time, congestion at node Y causes the traffic to Z to experience performance problems.

At this stage a decision should be made to cut a new wavelength λ2 from X to Z, configured as pass-through at node Y so that it does not rise to the electrical layer, restoring performance at node Z.

Another important decision is when to shut this lightpath down, i.e. once the traffic at node Y has returned to its earlier level, so that the optical layer deals with the minimum number of wavelengths.

The decision to cut a new wavelength will be based on two things:

Size of the content about to flow – In CCN, the amount of data that will flow through the nodes can be known beforehand from its metadata. If the data would consume a substantial fraction of a wavelength's capacity, it makes sense to cut a new wavelength.
Quality metrics at the intermediate nodes – A quality metric at an intermediate node, such as delay, may trigger the decision to cut another wavelength when it goes beyond some threshold.

So the project focuses on creating and destroying wavelengths depending on quality metrics at the electrical layer or on the content itself.
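As a rough illustration, and under assumed threshold values (the project itself would derive these from simulation), the lightpath creation and tear-down logic could look like the following sketch.

```python
# Sketch of the lightpath decision described above. The wavelength capacity,
# threshold values and metadata fields are illustrative assumptions only.

WAVELENGTH_CAPACITY_BITS = 10e9      # e.g. a 10 Gb/s wavelength
SIZE_FRACTION_THRESHOLD = 0.5        # "substantial" share of wavelength capacity
DELAY_THRESHOLD_MS = 50.0            # quality threshold at an intermediate node

def should_cut_wavelength(flow_size_bits: float,
                          flow_duration_s: float,
                          node_delay_ms: float) -> bool:
    """Decide whether to set up a new pass-through lightpath.

    Trigger 1: the flow (known in advance from CCN metadata) would occupy a
    substantial fraction of a wavelength's capacity.
    Trigger 2: delay at the intermediate node exceeds the quality threshold.
    """
    offered_rate = flow_size_bits / max(flow_duration_s, 1e-9)
    big_flow = offered_rate >= SIZE_FRACTION_THRESHOLD * WAVELENGTH_CAPACITY_BITS
    congested = node_delay_ms > DELAY_THRESHOLD_MS
    return big_flow or congested

def should_tear_down(node_delay_ms: float, lightpath_utilisation: float) -> bool:
    """Release the extra wavelength once congestion clears and it is nearly idle."""
    return node_delay_ms <= DELAY_THRESHOLD_MS and lightpath_utilisation < 0.1
```

The simulations would then sweep these thresholds to find the sweet spot between the number of wavelengths in use and the delay experienced at the nodes.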

The scope of this project is limited to building simulations and then verifying them on a test-bed. The simulations will use a proprietary simulator to study various networking scenarios, e.g. different delay thresholds and topologies, yielding statistical graphs of the number of wavelengths in the network and the delay characteristics, which can be studied further for optimization. The next step is verification of these results on a 3-node test-bed, as shown in the figure above. Though limited to a 3-node setup, this will serve as a prototype for further research.

The work can be gauged on two things:

The statistical results (graphs) generated from the simulations. The expectation is a set of curves of delay versus number of wavelengths, showing a sweet spot where both are at optimal levels.
The results from the test-bed, which will verify the rules of thumb generated by simulation.

Project Plan

The project work can be broken down into the following tasks and subtasks.

Background Study – This includes a number of things:

Understanding concept of PUBSUB and Optical Networking
Literature Review
Project Proposal

Study of a simulator – It is necessary to get acquainted with the simulator before the project approaches the simulation stage, hence the initial period of the project is assigned to this.

Generating a Representative Traffic Model (RTM) – This step involves defining the data models for the PUBSUB network that will be part of the metadata. This will help identify large data flows by reading the metadata content.

Identifying Simulation Scenarios – This will decide what types of simulation scenario to include, e.g. networks with different topologies and data streams with different quality metrics, and then actually running these scenarios to collect statistics. It can be further broken down into the following cases:

Modelling network with huge traffic flows
Modelling network with different delays at intermediate nodes
Modelling network with different delays and different topologies
Modelling network with different types of traffic (if time permits)

Network Optimization – This is concerned with generating rules of thumb for particular traffic patterns or topologies from the statistics collected in the simulations.

Test-bed Verification – The rules of thumb generated by the optimization process will be verified as a proof of principle using the 3-node test-bed setup.

Report Writing and Presentation – The last month of the project is dedicated to writing the report and preparing the presentation.



Internet Society (ISOC) All About The Internet. (Undated). History of the Internet. [Online]. Viewed on: 2 March 2011. Available: (isoc)

Cisco Systems. (2010, June). Cisco Visual Networking Index: Forecast and Methodology, 2009-2014. [Online]. Viewed on: 2 March 2011. Available: (cisco)

Content Centric Networking (CCNx) Source. (Undated). Welcome |Project CCNx. [Online]. Viewed on: 2 March 2011. Available: (ccnx)

(Undated). The FP7 4WARD Project. Viewed on: 2 March 2011. Available: (4ward) (psirp) (pursuit) (cdn)

Alcatel Optical Networks Tutorial (alca)

Arun Somani, Cambridge (cam)

The Rationale of Optical Networking (rat)

Illustrating a Publish-Subscribe Internet Architecture (ill)

Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World (blue)

Academic Dissemination and Exploitation of a Clean-slate Internetworking Architecture: The Publish-Subscribe Internet Routing Paradigm (msc)

Towards a new generation of information-oriented internetworking architectures (tow)

Greening the Internet with Content-Centric Networking (green)

Arguments for an Information-Centric Internetworking Architecture (arg)

Not Paying the Truck Driver: Differentiated Pricing for the Future Internet (driver)

LIPSIN: Line Speed Publish/Subscribe Inter-Networking (lipsin)

Optical IP Switching for dynamic traffic engineering in next-generation optical networks (marco1)

Optical IP Switching: A Flow-Based Approach to Distributed Cross-Layer Provisioning (marco2)

Dynamic Multilayer Routing Schemes in GMPLS-Based IP+Optical Networks (oki)


Modelling of underwater acoustic communication network


Research on underwater acoustic networks (UANs) is gaining attention due to their important military and commercial applications. Underwater communication applications mostly involve long-term monitoring of selected ocean areas. The traditional approach to ocean-bottom monitoring is to deploy underwater sensors, record the data and recover the instruments. However, this approach creates long delays in receiving the recorded information, and if a failure occurs before recovery, all the data is lost. The ideal solution for these applications is to establish real-time communication between the underwater instruments and a communication center within a network configuration.

A basic underwater acoustic network (UAN) is formed by establishing two-way acoustic links between various instruments, such as autonomous underwater vehicles and sensors. The network is then connected to a backbone, such as the internet, through an RF link. This configuration creates an interactive environment where scientists can extract real-time data from multiple underwater instruments. Data is transferred to the control station as soon as it is available, so data loss is prevented unless a failure occurs [2]. Underwater networks can also be used to increase the operating range of underwater vehicles, which is otherwise limited by the acoustic range of a single modem, varying from 10 to 90 km [2]. However, due to the high cost of underwater devices, the deployed network must be highly reliable, so that the failure of one or more devices does not doom the monitoring mission.

From a communication point of view, the underwater environment is very different from its terrestrial counterpart. Consequently, research on UANs differs as well and exhibits unique features, because:

The attenuation of acoustic signals increases with frequency and range, resulting in an extremely small feasible band.
The propagation speed of acoustic waves is about 1500 m/s, several orders of magnitude lower than that of radio waves [3], giving large propagation delays.
The channel characteristics vary with time and depend strongly on the positions of transmitter and receiver; the fluctuating nature of the channel distorts the signals.
Due to this variable acoustic environment, UANs differ from terrestrial networks in many respects, ranging from network topologies to the protocols of all layers.
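To put the second point in numbers, a quick back-of-the-envelope comparison of acoustic versus radio propagation delay over the same (illustrative) distance:

```python
# Propagation delay over a 10 km link: acoustic (~1500 m/s in water, the
# nominal value used in the text) versus radio (~3e8 m/s in free space).

SOUND_SPEED_WATER = 1500.0   # m/s
SPEED_OF_LIGHT = 3.0e8       # m/s

def propagation_delay(distance_m: float, speed_m_s: float) -> float:
    """Time for a signal to cover the given distance at the given speed."""
    return distance_m / speed_m_s

acoustic = propagation_delay(10_000, SOUND_SPEED_WATER)  # ~6.7 s
radio = propagation_delay(10_000, SPEED_OF_LIGHT)        # ~33 microseconds
print(f"acoustic: {acoustic:.2f} s, radio: {radio * 1e6:.1f} us")
```

A delay of several seconds per hop, rather than microseconds, is what forces the protocol redesigns discussed in the rest of this section.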

The network topology directly influences the capacity of the underwater channel, which is severely limited. It is therefore important to organize the topology so that no congestion occurs; in other words, topologies with a single point of failure should be avoided. Underwater networks can be composed entirely of fixed nodes, entirely of mobile nodes, or a mixture of both. The topology typically needs to be ad hoc, either because the communicating nodes are moving or because the basic acoustic conditions change with time. There are three basic topologies that can be used to interconnect network nodes [3].

(1) Centralized topology – Each network host is connected to a central station, known as the hub of the network, at which the network is connected to a backbone. Deep-water acoustic networks have been tested in this configuration, with a surface buoy carrying both an acoustic and an RF modem acting as the hub and controlling communication to and from ocean-bottom instruments. This is considered the easiest topology to design and implement, and its advantage is the simplicity of adding additional nodes. A major disadvantage is the single point of failure: if the hub fails, the entire network goes down. Further, the network cannot cover large areas because of the limited range of a single modem.

(2) Distributed or point-to-point topology – This topology provides point-to-point links between every pair of nodes. There is just one hop from any node to any other, so routing is not necessary. The major disadvantage is that excessive power is needed to communicate with widely spread nodes. Further, the near-far problem [4] is prominent, in which one node can block the signals of a neighbouring node.

(3) Multihop topology – Intermediate nodes relay a message from the source node to the destination, so routing is needed, handled by intelligent algorithms that can adapt to changing conditions. Multihop networks can cover large areas, since the range of the network is determined by the number of nodes rather than by the range of a modem. The only problem with this topology is packet delay, which grows as the number of hops increases.


Due to scarce bandwidth, long propagation delays and high error rates, the nodes in a UAN have to share the available resources. The three basic access techniques are:

(1) Frequency division multiple access (FDMA) – FDMA divides the bandwidth into several subbands and assigns each to a particular user; the band is used by that user alone until it is released. FDMA may not be efficient in the underwater environment: the available bandwidth is extremely limited, and by dividing it into smaller subbands, the coherence bandwidth of the transmission channel can become larger than an FDMA subchannel, resulting in severe fading. Another issue is that the mechanism can be inefficient for bursty traffic [4], because the bandwidth of each subband is fixed and cannot be adjusted [5].

(2) Time division multiple access (TDMA) – In this scheme the time frame is divided into slots and each slot is assigned to an individual user, who transmits only in that slot. The advantage of TDMA is power saving, which is critical in the underwater environment: since each user transmits only in its assigned slot, the transmitter can be turned off during idle periods to save energy. TDMA is also flexible in that a user's data rate can be increased on demand, e.g. by adding another time slot, using the same transmit hardware without any extra equipment.

The disadvantage of TDMA is a larger overhead than FDMA, because guard times must be included to avoid collisions between neighbours. Further, TDMA requires strict time synchronization, and the significant differences in propagation delays cause large idle times, decreasing throughput.
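A toy TDMA frame with hypothetical slot and guard durations makes the overhead concrete: the guard time is pure overhead repeated in every slot.

```python
# Toy TDMA frame layout. Slot and guard durations are illustrative values
# only; underwater guard times must absorb multi-second propagation spreads.

SLOT_MS = 100.0    # useful transmission time per user
GUARD_MS = 20.0    # guard time per slot (pure overhead)

def frame_length_ms(n_users: int) -> float:
    """Total frame duration: one slot plus one guard interval per user."""
    return n_users * (SLOT_MS + GUARD_MS)

def slot_start_ms(user_index: int) -> float:
    """Offset within the frame at which the given user may transmit."""
    return user_index * (SLOT_MS + GUARD_MS)

def overhead_fraction() -> float:
    """Share of airtime lost to guard intervals."""
    return GUARD_MS / (SLOT_MS + GUARD_MS)
```

With these example numbers, one sixth of the frame is spent on guard intervals; the longer the propagation-delay spread, the larger the guard times and the worse the throughput, which is exactly the TDMA drawback noted above.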

(3) Code division multiple access (CDMA) – This widely deployed multiple access method is based on spread spectrum. It allows users to transmit all the time over the entire available bandwidth; signals from different users are distinguished by their spreading codes, each orthogonal to the codes used by other users. There are two spreading techniques, direct-sequence spread spectrum (DS) and frequency-hopping spread spectrum (FH): in the former, the original bits are multiplied directly by the spreading code (linear modulation), while in the latter the carrier frequency of a user changes according to the pattern of the spreading code.
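A minimal direct-sequence example (toy ±1 bits and short Walsh-like codes, not a real modem design) shows how orthogonal spreading codes let two users transmit simultaneously over the same band:

```python
# Toy DS-CDMA: data bits (+1/-1) are spread over chips by multiplying with a
# spreading code; the receiver despreads by correlating with the same code.

def spread(bits, code):
    """Spread each data bit over len(code) chips (linear modulation)."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate with the code, recovering one symbol per len(code) chips."""
    n = len(code)
    out = []
    for i in range(0, len(chips), n):
        corr = sum(ch * c for ch, c in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

# Two orthogonal codes (dot product zero), so the users do not interfere.
code_a = [1, 1, 1, 1]
code_b = [1, -1, 1, -1]

# Both users transmit at once; the channel simply adds their chip streams.
tx = [a + b for a, b in zip(spread([1, -1], code_a), spread([-1, -1], code_b))]
```

Despreading `tx` with `code_a` recovers user A's bits and with `code_b` user B's, which is the flexibility the advantages below build on.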

Following are the main advantages of CDMA

(a) It has higher efficiency and throughput than FDMA and TDMA [3].

(b) CDMA is very effective against jamming, multipath interference and any other interference that appears deterministic [6].

(c) A transmitter or receiver can switch from signal to signal simply by changing spreading codes, making CDMA flexible.

(d) In a DS system, the fine time resolution of the spreading codes makes it possible to coherently combine multipath arrivals using a rake receiver, which identifies the three strongest multipath signals and combines them into one more powerful signal. If the resolvable multipath components fade independently, a time-diversity gain present in the channel can be extracted [5].

(e) Increased communication security.

For the above reasons, CDMA and spread-spectrum signaling appear to be a promising multiple access method for shallow-water acoustic networks.


Many media access control (MAC) protocols for underwater networks have been explored, such as ALOHA, slotted ALOHA and CSMA. The most significant for underwater networks appears to be CSMA/CA.

Carrier sense media access with collision avoidance (CSMA/CA)

The scarce channel resources can be utilized much better if users sense the channel before transmitting a packet. This protocol uses two signaling packets, request to send (RTS) and clear to send (CTS). When a device intends to send a packet, it first senses whether another station is already transmitting (carrier sense). If no transmission is sensed, the device issues an RTS, which contains the length of the message to be sent. If the recipient station senses that the medium is clear, it replies with a CTS, which also contains the length of the message. As soon as the station wishing to transmit receives the CTS, it sends the actual data packet to its intended recipient; if it receives no CTS in reply, it repeats the RTS procedure. The CTS control signal should be heard by all nodes within range of the receiver, which means the protocol relies on the symmetry of the channel; it may be necessary to send the CTS at a higher power level to ensure that all nodes within range can hear it. This protocol can serve as the basis of a media access protocol for underwater networks, and it also provides information for power-control algorithms, as nodes learn by trial and error the minimum power level needed for reliable communication.


Single-hop transmission becomes inefficient when the range of the network grows large; in that case multihop transmission is needed to relay information from source to destination. It has also been shown that multihop transmission in underwater networks is more efficient in terms of power consumption [7].

The network layer is responsible for routing packets from source to destination when multiple hops are needed. There are two routing methods: virtual circuit routing and packet-switched routing.

In virtual circuit routing, a communication path is decided before data transmission takes place; based on a resource-optimizing algorithm, the system decides which route to follow. The route is dedicated exclusively to the session between the two communicating entities for its whole duration and is released only when the session terminates.

In packet switching, packets are sent towards the destination independently of one another. There is no predetermined path: each packet finds its own route, and each node takes part in routing by determining the packet's next hop.

Underwater networks may consist entirely of fixed nodes (ocean-bottom sensors) or entirely of mobile nodes (autonomous underwater vehicles). These instruments temporarily form a network without the aid of any pre-existing infrastructure; such networks are called ad hoc networks [3]. The main problem in ad hoc networks is obtaining the most recent state of each link in the network, so as to decide the best route for packets. When the communication medium is highly variable, as in the shallow-water acoustic channel, the number of routing updates can be very high. Some routing protocols that can be used in underwater acoustic networks are as follows [3]:

(1) DSDV (Destination-sequenced distance vector) – In this routing algorithm every node maintains a routing table listing all available destinations, the number of hops to reach each, and a sequence number assigned by the destination node. The sequence number distinguishes stale routes from new ones and thus prevents the formation of loops. When a node receives new information, it uses the entry with the latest sequence number; if the sequence number is the same as the one already in the table, the route with the better metric is used. Nodes periodically transmit their routing tables to their neighbours. If a node detects that a route to a destination is broken, it sets the hop count to infinity and increments the sequence number. The disadvantage of DSDV is that the routing tables must be updated regularly, wasting battery power and scarce bandwidth even when the network is idle; further, whenever the topology changes a new sequence number is needed, so DSDV is not suitable for highly dynamic networks.
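The DSDV update rule described above (a newer sequence number always wins; on a tie, the better metric wins) can be sketched as a single table-update function. The table layout is an illustrative choice, not DSDV's wire format.

```python
# Sketch of the DSDV table-update rule: accept an advertised route if its
# destination sequence number is newer, or equal with a lower hop count.

def update_route(table, dest, seq, hops, next_hop):
    """Apply one received route advertisement to the routing table in place."""
    entry = table.get(dest)
    if entry is None or seq > entry["seq"] or \
            (seq == entry["seq"] and hops < entry["hops"]):
        table[dest] = {"seq": seq, "hops": hops, "next_hop": next_hop}
    return table
```

For example, an advertisement with the same sequence number but more hops is ignored, while one with a newer sequence number replaces the entry even if its hop count is worse; that preference for freshness over distance is what keeps DSDV loop-free.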

(2) DSR (Dynamic source routing) – Instead of relying on routing tables at intermediate nodes, DSR uses source routing: the sender knows the complete hop-by-hop route to the destination, with routes stored in a route cache, and the route for each packet is included in its header. A node receiving a packet checks the header for the next hop and forwards the packet. Route discovery works by flooding the network with route request (RREQ) packets; on receiving an RREQ, each node rebroadcasts it unless it is the destination or has a route to the destination in its cache. This protocol works well in static and low-mobility environments.

(3) AODV (Ad hoc on-demand distance vector) – This protocol establishes a route to a destination only on demand and does not require nodes to maintain routes to destinations that are not in active use. Routes are discovered and maintained through route requests (RREQ), route replies (RREP) and route errors (RERR).

AODV uses destination sequence numbers on route updates, which guarantees loop-free paths and favours the freshest routes. Its advantage is that it creates no extra traffic for communication along existing links, lowering the number of messages and thus conserving network capacity; distance vector routing is also simple and does not require much computation. However, the time to establish a connection, and the initial setup of a route, is longer than in the other approaches.


Interest in underwater networks, and the consequent research, has grown rapidly in recent years. Network simulation and testing of underwater acoustic networks is a relatively new area; however, some work already exists. The authors of [2] compare the performance of DSDV, DSR and AODV with regard to the following parameters:

(1) Total throughput: the average rate of successful message delivery over a communication channel, expressed in bits per second. Throughput is a very important metric in underwater acoustics because of the very limited bandwidth.

Fig 1: Total throughput for DSDV, DSR and AODV routing protocols [2]

The above figure shows the total throughput plotted against the offered load. It can easily be concluded that AODV has the best performance and maximum throughput, whereas DSR performs worst.

(2) Total packet delivery ratio: the ratio between the number of packets correctly received by the corresponding destination and the number of packets sent out by the source.

Fig 2: Total delivered packets for DSDV, DSR and AODV routing protocols [2]

The above figure shows the total delivered packets versus the offered load. The plot indicates that DSR and DSDV have the best performance when the offered load is below 0.1 pkt/sec, with AODV worst at 0.1 pkt/sec; however, as the offered load increases, AODV gives the best performance compared to DSR and DSDV.

(3) Average end-to-end delay: the delay in packet arrival, calculated by averaging the time that passes from when a data packet is generated to when it arrives at its final destination. Figure 3 shows the plot of average end-to-end delay versus the offered load. The minimum end-to-end delay is achieved by the AODV protocol; DSR is the worst routing protocol, with an average delay of 115 sec. In general, minimum delay is achieved by all routing protocols when the offered load is small.

Fig 3: Total average end to end delay for DSR, DSDV and AODV protocols. [2]
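The three metrics compared above can all be computed from a packet trace. The sketch below uses invented trace data and field names, not output from the simulator used in [2].

```python
# Throughput, packet delivery ratio and average end-to-end delay from a
# (made-up) packet trace: (bits, time_sent_s, time_received_s or None if lost).
packets = [
    (512, 0.0, 40.0),
    (512, 10.0, 55.0),
    (512, 20.0, None),     # lost packet
    (512, 30.0, 150.0),
]

delivered = [(b, s, r) for (b, s, r) in packets if r is not None]
duration = max(r for (_, _, r) in delivered) - min(s for (_, s, _) in packets)

throughput_bps = sum(b for (b, _, _) in delivered) / duration   # bits per second
delivery_ratio = len(delivered) / len(packets)                  # received / sent
avg_delay_s = sum(r - s for (_, s, r) in delivered) / len(delivered)
```

With this toy trace, 3 of 4 packets arrive, so the delivery ratio is 0.75, and delay is averaged only over packets that actually arrive.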

From the system discussed above it can be concluded that the AODV routing protocol achieves maximum throughput and the best performance compared with the DSDV and DSR routing protocols. It also gives the minimum end-to-end delay compared with the other protocols. The best performance was achieved when the offered load was decreased, resulting in an increase in packet delivery rate and a decrease in average end-to-end delay.


The past decades have significantly advanced underwater networking research. Static protocols such as TDMA or CDMA and dynamic protocols like CSMA/CD have been used in distributed and centralized topologies. DSR, AODV and other lightweight protocols have been investigated for underwater use. Efficient multihop and ad hoc packet routing protocols are promising research areas for the future. The time is fast approaching for IEEE 802.11-style standardization of underwater network protocols, which will lead to interoperable communication devices that can be used in a plug-and-play fashion similar to terrestrial wireless systems.


[1] J. Catipovic, D. Brady, S. Etchemendy, "Development of Underwater Acoustic Modems and Networks," Oceanography, vol. 6, pp. 112-119, Mar. 1993.

[2] Omar O. Aldawib, "A Review of Current Routing Protocols for Ad Hoc Underwater Acoustic Networks," pp. 431-433, Aug. 2008.

[3] E. M. Sozer, M. Stojanovic and J. G. Proakis, "Underwater Acoustic Networks," IEEE J. Oceanic Eng., vol. 25, no. 1, pp. 72-83, Jan. 2000.

[4] K. Pahlavan and A. H. Levesque, Wireless Information Networks, New York: Wiley, 1995.

[5] T. S. Rappaport, Wireless Communications, Englewood Cliffs, NJ: Prentice Hall, 1996.

[6] A. J. Viterbi, CDMA: Principles of Spread Spectrum Communication, Reading, MA: Addison-Wesley, May 1997.

[7] M. Stojanovic, "On the Relationship Between Capacity and Distance in an Underwater Acoustic Communication Channel," ACM WUWNet '06, pp. 41-47, Los Angeles, CA, USA, Sept. 2006.


Wireless sensor network and its Applications


Wireless sensor networks use sensing techniques to gather information about a phenomenon and react to events in a specified environment by means of sensors. These small, inexpensive, smart devices, connected through wireless links, provide unique opportunities for controlling and monitoring environments. Technically, a sensor translates information from the physical world into signals and prepares them for analysis and processing.

The terms wireless nodes, sensor nodes and motes are used interchangeably in different contexts; here we refer to them as motes. Motes are typically produced in large quantities and are usually densely distributed in the network. Their size (or the size of their components) varies from macroscopic scale to microscopic or even sometimes nanoscopic scale. “Micro-sensors with on-board processing and wireless interfaces can be utilized to study and monitor a variety of phenomena and environments at close proximity.”

A mote consists of four major components:

Processing Unit: For data processing and “managing the procedures that make the motes collaborate with other nodes to carry out the assigned sensing tasks.”
Sensing Unit: To sense the physical world and convert the data into digital signal ready for processing.
Transceiver Unit: To provide the connection of nodes in the network.
Power Unit: To supply energy for the device components.

Based on the application, motes may have additional components such as a location finding system, a mobilizer and a power generator.
These components should be put together in a way that fits in a small module, adapts to different environments and consumes as little power as possible.

The components of a mote

The figure represents data acquisition about a phenomenon (process) in the real world which can be sensed by a sensor. The sensed signal often needs some changes before it can be processed (signal conditioning). For example, to make the signal range appropriate for conversion, the signal magnitude is changed through amplification. Unwanted noise can also be removed at this stage.

The analog signal is then transformed into a digital signal using an ADC and is ready for further processing or storage.

Data acquisition and actuation
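The conditioning-then-conversion chain above can be sketched numerically. All values here (reference voltage, gain, sensor output) are made up for illustration.

```python
# Illustrative acquisition chain: amplify a weak sensor signal, then
# quantize it with an n-bit ADC.

def adc(voltage, vref=3.3, bits=10):
    """Map a 0..vref voltage to an unsigned n-bit code."""
    voltage = min(max(voltage, 0.0), vref)          # clamp to the input range
    return round(voltage / vref * (2 ** bits - 1))

gain = 100.0               # signal conditioning: amplification
raw_v = 0.0123             # e.g. a weak sensor output, in volts
code = adc(raw_v * gain)   # 1.23 V -> digital code ready for processing
```

The amplification stage matters because without it the 12.3 mV raw signal would map to only a handful of ADC codes, losing most of the resolution.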


Wireless sensor networks can be used in places where wired systems cannot be deployed (e.g., a remote or dangerous area). They can also be used in commercial products to improve their performance or quality, or to provide convenience for their users.

Sensors can sense many different variables such as temperature, humidity, pressure and movement. They can sense an environment continuously, or they can be event-driven and sense an event when it occurs.

Wireless sensor networks can support a wide range of applications.

Battlefield surveillance, Bridge and highway monitoring, Earthquake detection, Habitat Monitoring, Health care, Industrial monitoring and control, Tracking wildfires, Traffic flow and surveillance, Video surveillance and Weather monitoring are few examples of its applications.

Military Applications

One of the first applications of sensor networks was military sensing. A WSN can be used to monitor critical equipment, vehicles or weapons to make sure they are in proper condition. Terrains, paths and roads can be monitored to sense the presence of opposing forces. Sensors can also be used to enhance targeting systems for ammunition. Human teams can be replaced by sensor networks in places affected by biological and chemical warfare or incidents, in order to perform nuclear reconnaissance and prevent humans from being exposed to radiation.

Traffic surveillance

Traffic surveillance is another example of WSN applications. Sensors are placed in predefined places to gather data and send it via wireless links to data centres for further processing. This data can be beneficial for statistical purposes such as vehicle count per day, the number of cars per lane and the average speed of vehicles. It can also be useful for real-time applications such as traffic flow monitoring, incident reporting and managing the traffic lights in order to prevent heavy congestion.

Real-time traffic flow control

Medical Applications:

The benefits of wireless sensor networks are being explored by many hospitals and medical centres around the world. As can be seen in the figure, sensors can be implanted in a patient’s body or attached to the patient in order to collect information about vital signs such as heart rate, blood pressure and blood oxygen level. This information can be transferred to the patient’s medical record for future examination and long-term inspection. It can also be displayed in real time, or can alert physicians, depending on the sensor program, in case of any sudden change in the patient’s condition.

Realization of these various applications requires wireless ad hoc networking techniques. However, existing techniques are not suitably designed for the special features and applications of sensor networks.

WSN vs. Mobile Ad Hoc Networks

[12] Although there are many similarities between mobile ad hoc networks (MANETs) and WSNs, for instance their lack of network infrastructure and their use of multi-hop routing and wireless channels, there are some major differences to point out.

Nodes in a MANET are designed for human interaction, such as laptops and PDAs, whereas in a WSN motes are usually left unattended in remote or dangerous locations with the least possible interaction.
In a WSN “the topology of the network may change dynamically” due to node failure: motes in some specific areas may be damaged and fail. In some network topologies motes have a sleep/wake cycle to save energy, so the topology must change when a mote is not available at a specific time.
In a WSN, unlike MANETs, the source of energy is limited and the nodes are sometimes left unattended in places where there is no access to change or recharge their batteries. “The range of communications is typically within a few meters and at low rates (some kilobits per second); there are typically a few kilobytes of memory and the processor may operate at speeds of only some megahertz.”
Mote design and communication aspect of WSN is totally application dependent and changes based on different application requirements.
Motes in some wireless sensor applications remain asleep for most of their lifetime and transfer their information periodically in order to save energy. So the traffic flow in the network is almost infrequent, and delay is usually higher than in MANETs.

Overview of 802.15.4

1- IEEE 802.15 WPAN™ Task Group 4 (TG4)

2- ZigBee Alliance: the official website

3- EE Times: the global electronics engineering community

The IEEE 802.15.4 task group and the ZigBee Alliance have been working together to improve the efficiency, safety, security, reliability and convenience of WSN technology. IEEE 802.15.4 focuses on the physical layer and MAC layer in the 868MHz (Europe), 915MHz (US) and 2.4GHz (worldwide) ISM bands, whereas the ZigBee Alliance works on higher-level protocols.

“The IEEE 802.15 was chartered to investigate a low data rate solution with multi-month to multi-year battery life and very low complexity. It is operating in an unlicensed, international frequency band.”

“Some of the characteristics of IEEE 802.15.4 include:

Data rates of 250 kbps, 40 kbps, and 20 kbps
CSMA-CA(Carrier sense multiple access with collision avoidance) channel access
Fully handshaked protocol for transfer reliability
Power management to ensure low power consumption
16 channels in the 2.4GHz ISM band, 10 channels in the 915MHz band and one channel in the 868MHz band.”

“The ZigBee specification enhances the IEEE 802.15.4 standard by adding network and security layers and an application framework. From this foundation, Alliance-developed standards, technically referred to as public application profiles, can be used to create multi-vendor interoperable solutions. For custom applications where interoperability is not required, manufacturers can create their own manufacturer-specific profiles.”

[2]Some of the characteristics of ZigBee include:

Global operation in the 2.4GHz frequency band according to IEEE 802.15.4
Regional operation in the 915MHz (Americas) and 868MHz (Europe) bands.
Frequency agile solution operating over 16 channels in the 2.4GHz frequency
Incorporates power saving mechanisms for all device classes

[802] The IEEE 802.15.4 standard defines the PHY (physical layer) and MAC (medium access control) layer for low data rate wireless communications with very low power consumption.

Physical Layer

Some of the main functions of the PHY are sensing the channel, turning the transceiver on and off, estimating received power/link quality indication, and transmitting/receiving information between two nodes. It sends the result of channel assessment to the MAC layer. The PHY is responsible for providing two services:

PHY Data Service: “Enables the transmission and reception of PHY protocol data units (PPDUs) across the physical radio channel.”
PHY management service

There are different frequency bands and data rates with which a device should be able to operate, summarized in Table ?.

Table – Frequency bands and data rates

Mac Layer

The MAC layer provides access to the physical radio channel to transmit MAC frames.

Some of the main functions of the MAC sublayer are network beaconing, frame validation, guaranteed time slots (GTS) and handling node associations.

The MAC layer is responsible for providing two services:

MAC Data Service: “Enables the transmission and reception of MAC protocol data units (MPDUs) across the PHY data service.”
MAC Management Service

The IEEE 802.15.4 MAC can work in both beacon-enabled and non-beacon modes. In non-beacon mode it is a simple CSMA/CA protocol, but in beacon-enabled mode it works with the superframe structure shown in the figure. The frame starts with a beacon, which is sent periodically by the coordinator. The frame also contains an active period and an inactive period: during the inactive period the device switches to low-power mode, and it communicates with others during the active period. The beacon interval is calculated from several attributes. The active portion is divided into 16 slots and consists of three parts: the beacon, the Contention Access Period (CAP) and the Contention-Free Period (CFP), whose GTS sections are reserved for specific nodes.

Fig Superframe structure
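The beacon-interval arithmetic mentioned above can be made concrete. In 802.15.4, BI = aBaseSuperframeDuration × 2^BO symbols and the active superframe duration SD = aBaseSuperframeDuration × 2^SO, with 0 ≤ SO ≤ BO ≤ 14; the constants below are from the standard, while the sketch itself is only illustrative.

```python
# Beacon interval and duty cycle for a beacon-enabled 802.15.4 PAN.
A_BASE_SUPERFRAME_DURATION = 960      # symbols (60 symbols/slot * 16 slots)
SYMBOL_TIME_S = 16e-6                 # 2.4 GHz band: 62.5 ksymbols/s

def beacon_interval_s(bo: int) -> float:
    """BI in seconds for beacon order BO."""
    return A_BASE_SUPERFRAME_DURATION * 2 ** bo * SYMBOL_TIME_S

def duty_cycle(so: int, bo: int) -> float:
    """Fraction of time the device is active (SD / BI)."""
    return 2 ** (so - bo)
```

Choosing BO larger than SO stretches the inactive period, which is exactly the low-power lever the superframe structure provides: with SO=0 and BO=4, the device is active only 1/16 of the time.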

Network Topologies

ZigBee supports three types of topologies: star, mesh (peer-to-peer) and cluster tree, as shown in the figure.

Star topology:
In this topology communication takes place only between the single central controller, called the Personal Area Network (PAN) coordinator, and the other devices in the network, which is mostly suitable for small networks such as single-hop networks. A PAN coordinator has a unique identifier, used only by this specific coordinator, which allows different star networks to operate separately in the same area.

Mesh topology:
This topology also has a PAN coordinator, like the star topology, but with the difference that communication takes place not only between the coordinator and devices but also between devices when they are in range of one another. Although it makes the network structure more complex, by allowing multi-hop routing it is suitable for large networks. It can also be an ad hoc network with self-healing and self-organizing characteristics.

Cluster tree topology:

A cluster tree network is a form of peer-to-peer network. One coordinator operates as the PAN coordinator and has the responsibility of defining cluster heads (CH). A CH is a Full Function Device (FFD) which can act as a coordinator. Each Reduced Function Device (RFD) can then select its CH and join that cluster. This kind of structure has a great impact on energy saving in the network, which will be discussed later.

Fig Topology Model

Energy Conservation and measurement:

[24] A wireless sensor network is created from hundreds or thousands of sensor motes, distributed independently in a remote area, with the responsibility of sensing the environment, processing information and communicating with other motes in the network for years, using a limited source of energy provided by a small battery that is almost impossible to change or recharge during the mote’s lifetime. Therefore energy consumption management has become one of the most important aspects of wireless sensor network design and implementation. The power-saving approach affects mote design, power management strategies, and the communication and routing protocols of the WSN.

Generally energy saving methods are divided in two major categories:

Energy saving at mote level: aims to select the most energy-efficient components of the device and trade off unnecessary operations in order to save energy, based on the application requirements.
Energy saving at communication level: selecting the most efficient communication methods and protocols to conserve energy at this level.
Power saving at mote level:

The first step in saving energy at the mote level is to find out where the energy is consumed in the mote. As mentioned before, a mote consists of four components: a processing unit, a sensor, a transceiver and a power supply that provides energy for the other parts.

Based on the experimental measurements in [40], data transmission consumes more energy than data processing. Passive sensors such as temperature sensors, on the other hand, consume a small amount of power compared to the other components, which is usually negligible. The table shows a power model of a Mica2 mote in different states.
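A back-of-the-envelope energy budget illustrates why transmission and duty cycling dominate the design. The voltage and currents below are invented placeholders, not the measured Mica2 values from [40].

```python
# Toy energy budget for a duty-cycled mote. All numbers are illustrative.
V = 3.0                                                # battery voltage (V), assumed
I_MA = {"sleep": 0.02, "cpu": 8.0, "radio_tx": 25.0}   # made-up currents (mA)

def energy_mj(state: str, seconds: float) -> float:
    """E = V * I * t, returned in millijoules."""
    return V * I_MA[state] * seconds

# One cycle: sleep 59 s, compute 0.9 s, transmit 0.1 s.
cycle_mj = (energy_mj("sleep", 59.0)
            + energy_mj("cpu", 0.9)
            + energy_mj("radio_tx", 0.1))
```

Even with these made-up figures, 0.1 s of radio transmission costs about twice as much as a full minute of sleeping, which is why motes stay asleep most of the time.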


Network Design


I have been asked to research and compare two of the most widely used internet security protocols, Transport Layer Security (TLS) and Secure Shell (SSH). In this report I shall research both protocols and then compare the two, listing similarities and differences in how they operate as security protocols. I shall examine the features of both, giving advantages and disadvantages; examples will be given for both security protocols along with any infrastructure needs.

As per instruction I will be using varied sources for my research including books, magazines and the internet, as with any report I shall reference all of my sources of information.

Transport Layer Security

Today the need for network security is of utmost importance. We would all like to think that data is transmitted securely, but what if it wasn’t? Credit card crime, for example, would be a lot easier if there were no network security. This is one of many reasons why we need network security, and to achieve it we need protocols to secure the end-to-end transmission of data.

An earlier protocol that was widely used in the early 1990s was the Secure Socket Layer (SSL). SSL was developed by Netscape but had some security flaws: it used weak algorithms and did not encrypt all of the information. Three versions of SSL were developed by Netscape, and after the third the Internet Engineering Task Force (IETF) was called in to develop an Internet standard protocol. This protocol was called the Transport Layer Security (TLS) protocol. The main goal was to supply a means to allow secure connections over networks, including the internet.

How it works

The Transport Layer Security protocol uses cryptographic algorithms to encrypt information as it is sent over the network. The protocol comprises two main layers: the TLS Record protocol and the TLS Handshake protocol.

TLS Handshake Protocol

The TLS Handshake protocol is used, in principle, to agree a secret between the two applications before any data is sent. It works above the TLS Record protocol and exchanges its messages in the order in which they have to be sent. The most important feature here is that no application data is sent while the connection is being secured; only once a secure connection has been achieved is data sent over the network.

TLS Record Protocol

The TLS Record protocol encrypts the data using cryptography, with a unique key for each connection derived from the secret agreed by the Handshake protocol. The TLS Record protocol may be used with or without encryption. The encrypted data is then passed down to the Transmission Control Protocol (TCP) layer for transport. The Record protocol also adds a Message Authentication Code (MAC) to outgoing data and verifies incoming data using the MAC. I have used the image below to show how this is achieved.
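The MAC idea can be sketched with Python's standard library. This is only an illustration of the append-and-verify pattern, not TLS itself: TLS derives its MAC keys from the handshake, whereas here the key is simply hard-coded.

```python
# Append a MAC to outgoing data; verify it on the receiving side.
import hashlib
import hmac

key = b"secret-from-handshake"       # placeholder for the negotiated key
record = b"application data"

tag = hmac.new(key, record, hashlib.sha256).digest()   # sender appends this

def verify(key: bytes, data: bytes, tag: bytes) -> bool:
    """Receiver recomputes the MAC and compares in constant time."""
    expected = hmac.new(key, data, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

If the record is altered in transit, the recomputed MAC no longer matches the tag and the receiver rejects the data, which is how the Record protocol "confirms using the MAC".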

Where TLS is used

The Transport Layer Security protocol normally runs on top of a transport layer protocol, most commonly the Transmission Control Protocol (TCP), so in Open Systems Interconnection (OSI) terms it sits just above layer 4. Application protocols such as the Hypertext Transfer Protocol (HTTP) and the File Transfer Protocol (FTP) then run on top of TLS.

Its main area of use is the internet, in applications that need end-to-end security. This data is usually carried by HTTP, which with TLS becomes HTTPS. TLS is therefore used to secure connections with e-commerce sites. VoIP also uses TLS to secure its data transmissions. “TLS and SSL are most widely recognized as the protocols that provide secure HTTP (HTTPS) for Internet transactions between Web browsers and Web servers.” (Microsoft, 2011)

The Transport Layer Security protocol is also used in setting up Virtual Private Networks (VPNs), where end-to-end security is a must, but again it is used alongside other protocols.

How Secure Is It?

Secure Shell

The Secure Shell (SSH) is used for safe remote access between a client and a server over an untrusted network, and is widely used software in network security. The need for such protocols is paramount in today’s technology-based world. In the modern office, for example, employees may wish to transfer files to their home computer for completion; this would be an unwise option if it weren’t for security protocols. An attacker could otherwise listen on the network for traffic and pick up all your company or personal secrets.

How it works

The Secure Shell establishes a channel for executing a shell on a remote machine. The channel is encrypted at both ends of the connection. The most important aspects of SSH are that it authenticates the connection and encrypts the data; it also ensures that the data sent is the data received.


TLS protocol. (2011, March 23). Retrieved March 23, 2011, from Wikipedia.

Microsoft. (2011, March 23). What is TLS. Retrieved March 23, 2011, from Microsoft TechNet.


Analysis of Network Management (FCAPS) protocol and its role in building flexible and improved networks

Introduction :

This plan concerns the configuration of a new network management system that connects different platforms. Branches are connected over fixed IP to enhance network performance. Management is configured on all routers so that client workstations can obtain IP addresses automatically, and the NMS and fixed IP addressing are configured to hide the internal IP addresses from the ISP. In this proposal we have configured and structured all our scenarios, which are explained in detail.

Here is a brief introduction to fixed IP addressing and an analysis of what is required to build a flexible and improved network: a summary of FCAPS, management tools, SNMP and the NMS, which are the main components used in my scenario.

This assignment presents the configuration of a management system for wireless access from different places. I preferred to set a fixed IP for the network. It displays synchronization levels for the access points, and this is done over the WAN connection.

The ISP maintains a pool of modems for its dial-in customers. This is managed by some form of computer (usually a dedicated one) which controls data flow from the modem pool to a backbone or dedicated-line router. This setup may be referred to as a port server, as it ‘serves’ access to the network. Billing and usage information is usually collected here as well. Many computers connected to the Internet host part of the DNS database and the software that allows others to access it. These computers are known as DNS servers. No DNS server contains the entire database; if a DNS server does not contain the domain name requested by another computer, it redirects the requesting computer to another DNS server.

Placing a Fixed IP :

A lot of routers let you create something called a “static lease”. In essence, this binds a MAC address (the physical address of your network card, theoretically unique to your card) to a particular IP address. This has many advantages.

First, you don’t have to fiddle with any of your computer’s network settings, ever. The router will always hand out the same address to that system, so the standard DHCP setting will work fine: the computer asks for an address and always gets the same one.

It is not practical to give full details here, as every router is different, but if you use IPCop as your gateway (a very good idea), it is as simple as clicking one of the connections in the “current dynamic leases” list and pressing “create fixed lease”, in the DHCP section of the status page. A few clicks and you’re set for life. You can even reinstall your computer’s operating system, reboot, and there is your static IP again.
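The static-lease idea described above amounts to a MAC-to-IP lookup that the DHCP server consults before falling back to its dynamic pool. A minimal sketch (illustrative only; real DHCP servers keep this mapping in their configuration):

```python
# Toy DHCP address assignment with a static lease table.
static_leases = {"aa:bb:cc:dd:ee:ff": "192.168.1.50"}   # pinned MAC -> IP
pool = iter(["192.168.1.100", "192.168.1.101"])         # dynamic addresses

def offer_ip(mac: str) -> str:
    """A pinned MAC always gets its fixed IP; anyone else draws from the pool."""
    return static_leases.get(mac) or next(pool)
```

The pinned machine can reboot or be reinstalled; as long as its network card (and thus its MAC) is unchanged, it is always offered the same address.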

Major aspects of Network Management (FCAPS):

Fault Management :

A fault is an event that has negative significance. The goal of fault management is to recognize, isolate, correct and log faults that occur in the network. In addition, it uses trend analysis to predict faults so that the network is always available. Faults can be recognized by monitoring different components for abnormal behaviour.

When a fault or event occurs, a network component will often send a notification to the network operator using a proprietary or open protocol such as SNMP, or at minimum write a message to its console for a console server to catch and log or page. This notification is supposed to trigger manual or automatic activity.

For example: gathering additional data to identify the nature and severity of the problem, or bringing backup equipment on-line.

Fault logs are one input used to compile statistics that determine the service level provided by individual network elements, as well as by sub-networks or the whole network. They are also used to identify genuinely fragile network components that require further attention.
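One concrete service-level figure that fault logs feed is availability. A minimal sketch, with invented outage data (timestamps in hours over a 30-day window):

```python
# Availability from a fault log: fraction of the window the element was up.
outages = [(10.0, 10.5), (200.0, 201.0)]   # (start, end) of each logged fault
period_h = 720.0                           # observation window: 30 days

downtime_h = sum(end - start for start, end in outages)
availability = (period_h - downtime_h) / period_h
```

Computed per element and per sub-network, this is exactly the kind of statistic used to spot the fragile components mentioned above: an element whose availability trails its peers is a candidate for further attention.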

Configuration management :

Network configuration management allows you to control changes to the configuration of your network devices, such as switches and routers. Using configuration management tools you can make changes to the configuration of a router, then roll the changes back to a previous configuration if they were not successful. Compare this with the situation without a network configuration management system: you would make the change, hopefully documenting what you changed; if the change were not successful you would, at best, have to undo the changes manually from your documentation, and at worst you would be left trying to remember what was changed and why.

Networks of any size are in a constant state of flux: any of the engineers responsible for the network can alter the configuration of the switches and routers at any time. Configuration changes to live equipment can have devastating effects on the reliability of the network and the services it provides. Network configuration management is designed to allow you to take control of network changes.
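The change-and-rollback workflow described above can be sketched as a toy version store. This is illustrative only; real tools (such as RANCID or Oxidized) fetch and diff actual device configurations.

```python
# Toy configuration version store with rollback.
class ConfigStore:
    def __init__(self, initial: str):
        self.versions = [initial]         # history of known configurations

    def change(self, new_config: str) -> None:
        self.versions.append(new_config)  # record every change

    def rollback(self) -> str:
        if len(self.versions) > 1:
            self.versions.pop()           # discard the failed change
        return self.versions[-1]          # previous known-good config

store = ConfigStore("hostname r1\ninterface e0\n ip address 10.0.0.1")
store.change("hostname r1\ninterface e0\n ip address 10.0.0.99")
restored = store.rollback()               # back to the working config
```

Because every change is recorded before it is applied, "what was changed and why" never has to be reconstructed from memory.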

Performance Management:

Network and application performance issues are growing sharply due to data centre consolidation, the rise of video traffic, growing numbers of remote users, and similar trends.

The best way to appreciate the view that all performance is relative is to ask someone who uses a networked system or application: “Is a three-second response time good or bad?” The answer is: it depends. If the average response time is ten seconds, a three-second response time is very good.

The best indication of how applications are performing for the end user is to measure response time by monitoring real traffic. High utilization is only a problem if it actually impacts application performance.
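The "it depends" point above is easy to make concrete: a response time is judged against the observed baseline, not against an absolute number. The samples below are invented.

```python
# Judge a new response time against the baseline measured from real traffic.
samples_s = [9.8, 10.1, 10.4, 9.9, 3.0]     # invented response times (seconds)

baseline = sum(samples_s[:-1]) / len(samples_s[:-1])   # average of history
latest = samples_s[-1]
verdict = "good" if latest < baseline else "bad"
```

Here a three-second response is "good" only because the measured baseline is around ten seconds; against a one-second baseline the same measurement would be "bad".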

Increasing bandwidth is not a cure-all for performance problems. Make sure you understand the cause of the problem before taking corrective action such as throwing bandwidth at it.
Security management :

Networks are computer systems, both public and private, that are used every day to conduct business and communications among businesses, government agencies and individuals. Networks are composed of “nodes”, which are “client” terminals and one or more “servers” and/or “host” computers. These are linked by communication systems, some of which may be private, such as within a company, and others open to public access.

An example of a network open to public access is the Internet, but many private networks also use publicly accessible infrastructure. Today, most companies’ host computers can be accessed by their employees, whether from their offices over a private communication network, or from their homes or hotel rooms while on the road, through normal telephone lines.

Network security involves all the activities that organizations, enterprises and institutions undertake to protect the value and ongoing usability of assets, plus the integrity and continuity of operations. An effective network security plan requires identifying threats and then choosing the most effective set of tools to combat them.

Network security threats include:

Viruses: computer programs written by devious programmers and designed to replicate themselves and infect computers when triggered by a specific event.
Trojan horse programs: delivery vehicles for destructive code, which appear to be harmless or useful software programs such as games.
Vandals: software applications or applets that cause destruction.
Attacks: including reconnaissance attacks, access attacks and denial-of-service attacks (which prevent access to part or all of a computer system).
Data interception: eavesdropping on communications or altering the data packets being transmitted.
Social engineering: obtaining confidential network security information through nontechnical means, such as posing as a technical support person and asking for users’ passwords.

Network security tools:
Antivirus software packages: these packages counter most virus threats if regularly updated and correctly maintained.
Secure network infrastructure: switches and routers have hardware and software features that support secure connectivity, perimeter security, intrusion protection, identity services and security management.
Dedicated network security hardware and software: tools such as firewalls and intrusion detection systems provide protection for all areas of the network and enable secure connections.
Virtual private networks: these provide access control and data encryption between two different computers on a network, allowing remote workers to connect to the network without the risk of a hacker or thief intercepting data.
Identity services: these services help to identify users and control their activities and transactions on the network; they include passwords, digital certificates and digital authentication keys.
Encryption: encryption ensures that messages cannot be intercepted or read by anyone other than the authorized recipient.
Security management: this is the glue that holds together the other building blocks of a strong security solution.

Accounting Management:

Network management accounting and performance strategy describes the IP accounting features in Cisco IOS and enables you to differentiate the different IP accounting functions and understand the SNMP MIB details. Learn about IP accounting access control lists (ACLs) and IP accounting MAC addresses.

Simple Network Management Protocol (SNMP):

SNMP is the standard protocol for managing devices on IP networks. Devices that typically support SNMP include routers, switches, servers, workstations, printers, and modem racks. It is used mostly in network management systems to monitor network-attached devices for conditions that warrant administrative attention. SNMP is part of the Internet protocol suite as defined by the Internet Engineering Task Force (IETF). It consists of a set of standards for network management, including an application layer protocol and a database schema.

An SNMP-managed network consists of three key components:

Managed devices — the network nodes being monitored
Agent — software which runs on the managed devices
NMS (network management system) — software which runs on the manager
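The manager/agent split above can be sketched with a toy model. This is only an illustration of the roles, not the real SNMP wire protocol (a library such as pysnmp would be used for that); the OID and device names are made up for the example.

```python
# Toy model of an SNMP-managed network: agents expose a tiny MIB-like
# table of OID -> value, and the NMS polls them. Hypothetical data only.
class ToyAgent:
    """Stands in for the agent software running on a managed device."""
    def __init__(self, mib):
        self.mib = dict(mib)  # OID string -> value

    def get(self, oid):
        # Real SNMP returns a noSuchObject error for unknown OIDs.
        return self.mib.get(oid, "noSuchObject")

class ToyManager:
    """Stands in for the NMS polling its agents."""
    def __init__(self, agents):
        self.agents = agents  # device name -> ToyAgent

    def poll(self, device, oid):
        return self.agents[device].get(oid)

router = ToyAgent({"1.3.6.1.2.1.1.5.0": "core-router-01"})  # sysName.0
nms = ToyManager({"router1": router})
```

Polling `nms.poll("router1", "1.3.6.1.2.1.1.5.0")` would return the device's configured system name, mirroring an SNMP GET of sysName.0.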

My preferred network management tool is the network monitoring tool:

Network monitors serve as your eyes and ears, alerting you to any issue on your network. You can keep track of things like printer supplies, software installations, hard drive space, contract due dates — just about anything you can think of that relates to your network. Most proactive IT pros set up their network monitoring systems to alert them via email or text message whenever an issue occurs. This not only keeps them on top of potential problems so they can address them as quickly as possible, it also prevents issues from escalating. The vast majority of problems that occur in a network are directly linked to something changing. The best network monitoring solutions will not just alert you to these changes, but will also help you troubleshoot the issue by allowing you to compare your network's current state with what it looked like before the change. This means you can resolve the problem sooner.
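The compare-against-a-baseline idea can be sketched as a small snapshot diff. This is a minimal sketch, assuming the monitor stores its state as a flat dictionary of item names to values; real tools use richer data models.

```python
def diff_state(baseline: dict, current: dict):
    """Compare a saved snapshot of network state with the current one.

    Returns what was added, what was removed, and what changed value,
    so a troubleshooter can see exactly what differs from the baseline.
    """
    added = sorted(current.keys() - baseline.keys())
    removed = sorted(baseline.keys() - current.keys())
    changed = sorted(k for k in baseline.keys() & current.keys()
                     if baseline[k] != current[k])
    return added, removed, changed
```

For example, diffing a snapshot where `eth0` was up against one where it is down would report `eth0` as changed, pointing the administrator straight at what moved.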

The tool continuously monitors the response time of multiple devices and generates email alerts based on severity. These alerts are set up at three different levels and show the different statuses of the nodes.

System Details Update:

This tool comes in handy when the administrator has to update system details, such as system name, system description, and system location, on a variety of devices. The administrator can first scan the various devices to view the existing details, and then modify them as necessary and update them all at once.

Port Scanner:

This tool scans the known ports within a range of IP addresses, or for a single IP, to discover the status of the scanned ports. The status of each port can be either listening or not listening. You can associate the ports with known services, which enables you to identify unknown or unwanted services running in the environment.
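A minimal TCP port scanner along these lines can be written with the standard socket library. This is a sketch of the listening/not-listening check described above, not any particular product's implementation.

```python
import socket

def scan_port(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if host:port accepts a TCP connection (i.e. is listening)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        # connect_ex returns 0 on success instead of raising an exception.
        return s.connect_ex((host, port)) == 0

def scan_range(host: str, ports) -> dict:
    """Map each port in `ports` to 'listening' or 'not listening'."""
    return {p: ("listening" if scan_port(host, p) else "not listening")
            for p in ports}
```

Calling `scan_range("127.0.0.1", range(1, 1025))` would report the status of the well-known ports on the local machine; associating each listening port with its known service is then a table lookup.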

These three points illustrate why your IT department should strongly consider leveraging network management tools as part of their daily responsibilities:

Suitable network routing
Network management security
Keeping an inventory of your network

Monitoring software:

Monitoring software is about more than just keeping track of what is being installed and uninstalled on your computers. Proper network monitoring provides:

Keeping track of installations
Noting which licenses are about to expire
Knowing which hotfixes (patches that repair specific bugs) are installed
Staying on top of the current status of all Windows services on every machine
Making certain every one of your computers is protected by up-to-date antivirus software

Bandwidth Monitoring Tool

A bandwidth monitoring tool provides the real-time network traffic of any SNMP device. It provides bandwidth usage details both at the interface level and at the device level. It uses SNMP to get the bandwidth utilization details of a network interface. The bandwidth utilization of a device displays a comparison of the individual traffic of its interfaces.
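The interface-level utilization figure such tools report is derived from two samples of the SNMP octet counters against the interface speed. A minimal sketch of that calculation (assuming 32-bit counters that do not wrap between samples):

```python
def utilization_pct(octets_start: int, octets_end: int,
                    interval_s: float, if_speed_bps: int) -> float:
    """Percent utilization of an interface between two SNMP counter samples.

    octets_start / octets_end: readings of an octet counter such as
    ifInOctets, in bytes. if_speed_bps: interface speed (ifSpeed) in
    bits per second. Assumes the counter did not wrap in between.
    """
    bits_transferred = (octets_end - octets_start) * 8
    return 100.0 * bits_transferred / (interval_s * if_speed_bps)
```

For instance, 1,250,000 bytes moved in one second on a 100 Mbit/s interface works out to 10% utilization.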

Bandwidth Monitoring Features

Real-time bandwidth monitoring
Historical bandwidth usage trends
Threshold-based alerts
Monitors bandwidth, volume utilization, speed, and packets transferred
Exports bandwidth data to XLS format

Bandwidth Monitoring in Real Time

The bandwidth monitoring tool provides the current bandwidth usage of each of the SNMP-enabled interfaces that are configured. Graphical representation of the bandwidth usage gives clear visibility into how much bandwidth was consumed on a particular network interface.

Bandwidth Monitoring Reports

In addition to the bandwidth utilization reports, the tool provides the bandwidth usage trend on an hourly, daily, monthly, and yearly basis for the following parameters of a network interface:

Volume Utilization
Packet Transfers

Threshold Based Alerting

Administrators can set thresholds for bandwidth utilization and get alerts and emails when the rate exceeds the predetermined traffic utilization limits.
This helps in quicker isolation of the problem in the network and consequently faster resolution.
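The three-level alerting described earlier can be sketched as a simple severity mapping. The threshold percentages here are illustrative defaults, not values from any particular product.

```python
def alert_level(utilization: float, thresholds=(70.0, 85.0, 95.0)) -> str:
    """Map a bandwidth utilization percentage to one of three alert severities.

    `thresholds` are the warning/major/critical cut-offs in percent;
    the defaults are illustrative and would be set by the administrator.
    """
    warning, major, critical = thresholds
    if utilization >= critical:
        return "critical"
    if utilization >= major:
        return "major"
    if utilization >= warning:
        return "warning"
    return "ok"
```

A monitor would evaluate this on every polling cycle and send an email whenever the returned severity rises above "ok".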

Third Party Reporting

The tool monitors devices such as routers and switches to collect the bandwidth usage details and stores them in a database. This data is correlated and presented in the form of bandwidth reports. The tool also provides an option to export these records into a CSV file, which can be fed to third-party reporting engines to generate bandwidth reports as desired.
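The CSV export step is straightforward with the standard library. A minimal sketch, assuming per-interface records with hypothetical field names chosen for the example:

```python
import csv
import io

def export_csv(records) -> str:
    """Serialize per-interface bandwidth records to CSV text.

    `records` is a list of dicts keyed by the (illustrative) field
    names below; the result can be written to a .csv file and fed
    to a third-party reporting engine.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf,
                            fieldnames=["interface", "volume_mb", "packets"])
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()
```

Writing the returned string to disk yields a file any spreadsheet or reporting engine can ingest.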

TL1 Protocol:

The TL1 protocol is commonly used in network management. It was created to solve several problems, the most significant of which was the incompatibility of devices using different proprietary protocols. Other goals for the TL1 protocol included a human-readable format and responsiveness to commands.

The machine-to-machine capability isn't the only advantage of the protocol: it is also readable. Even though it is structured enough to be parsed, it is reasonably easy for humans to interpret. This allows a much higher level of understanding, with a much smaller learning curve, than was possible before. Since it was designed to be open, this protocol can be considered a predecessor to SNMP. As long as a group of devices supports it, they will be able to communicate, regardless of vendor.

Firstly, the protocol supports responses to queries. We can issue a command to a TL1 device requesting that it report all of its standing alarms.
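A TL1 input message is a colon-delimited, semicolon-terminated string of the general shape VERB-MODIFIERS:TID:AID:CTAG;. A simplified builder (omitting the optional parameter blocks a full TL1 message can carry; the target identifier and correlation tag below are hypothetical):

```python
def tl1_command(verb: str, tid: str = "", aid: str = "", ctag: str = "1") -> str:
    """Build a simplified TL1 input message: VERB:TID:AID:CTAG;

    verb: the command verb, e.g. RTRV-ALM-ALL to retrieve all alarms.
    tid:  target identifier (which network element the command addresses).
    aid:  access identifier (which entity within the element); often empty.
    ctag: correlation tag the device echoes back in its response.
    """
    return f"{verb}:{tid}:{aid}:{ctag};"
```

For example, asking a hypothetical element NODE-1 to report all of its standing alarms produces the human-readable string `RTRV-ALM-ALL:NODE-1::42;`, which is also trivially machine-parseable.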


Network Server Administration

Course number CIS 332, Network Server Administration, lists as its main topics: installing and configuring servers, network protocols, resource and end user management, security, Active Directory, and the variety of server roles which can be implemented. My experience and certification as a Microsoft Certified System Administrator (MCSA) as well as a Microsoft Certified System Engineer (MCSE) demonstrates that I have a thorough grounding in both the theory and practice of the topics covered in this course and should receive credit for it.

Installing and configuring servers was the subject of Installing, Configuring and Administering Microsoft Windows 2000 Server, which I took in 2001 in preparation for my initial Microsoft Certified Professional certification. This exam covered such topics as installing Microsoft Windows 2000 Server using both an attended installation and an unattended installation; server upgrades from Windows NT (the previous version) and troubleshooting and repairing failed installations. This exam also covered installing and configuring hardware devices and user management.

Network protocols were discussed during the training for the exam Implementing and Administering a Microsoft Windows 2000 Network Infrastructure, which I also took in 2001. This exam covered installing, configuring, troubleshooting and administering such protocols as DNS and DHCP, TCP/IP, NWLink, and IPSec. The training covered such aspects of network protocols as remote access policies and network routing. Security was one of the topics of this exam, as well. Network security using IPSec and encryption and authentication protocols was discussed along with the network implementation details.

Resource and end user management was one of the main topics of the Managing and Maintaining a Windows Server 2003 Environment exam, which also updated my knowledge of security, networking and utilities. The exam covered such topics as user creation and modification, user and group management, Terminal Services management and implementing security and software update services.

Security was covered in a number of exams, including Implementing and Administering a Microsoft Windows 2000 Network Infrastructure, Installing, Configuring and Administering Microsoft Windows 2000 Server and Designing Security for a Windows 2000 Network.

All aspects of network security were covered in the various training sessions for these exams, including topics such as analysis of network security requirements in relation to organizational realities and requirements, design and implementation of such specifics as authentication policies, public-key infrastructures and encryption techniques, physical security, and design and implementation of security audit and assurance strategies. Also included were security considerations for all auxiliary services, such as DNS, Terminal Services, SNMP, Remote Installation Services and others.

Implementation of Active Directory and knowledge of varied server roles was provided by the exam Designing a Microsoft Windows 2000 Directory Services Infrastructure. The training for this exam encompassed the design and implementation of an Active Directory forest and domain structure as well as planning a DNS strategy for client and server naming. This training also included design and implementation of a number of different server types, such as file and print servers, databases, proxy servers, Web servers, desktop management servers, applications servers and dial-in management servers.

Further knowledge of Active Directory and auxiliary services was provided in the training for Implementing and Administering a Microsoft Windows 2000 Directory Services Infrastructure. This training included such topics as installing, configuring and troubleshooting Active Directory and DNS, implementing Change and Configuration Management, and managing all the components of Active Directory, including moving, publishing and locating Active Directory Objects, controlling access, delegating administrative privileges for objects, performing backup and restore and maintaining security for the Active Directory server via Group Policy and the Security Configuration and Analysis tool.

The topics covered in CIS 332, Network Server Administration, have been completely encompassed by my previous experience, training and certification with Microsoft Windows Server 2000, as well as updated knowledge gained by training for Microsoft Windows Server 2003. I have been constantly increasing my skills and knowledge in this area for the past six years, using both training and work experience to gain certifications which prove that I have a complete grasp of all aspects of the subject matter included in this course.

Installing and configuring servers and network protocols, troubleshooting failed installations or configurations, resource and end user management, security design and management, design and implementation of Active Directory services and implementing and administering a wide variety of network server roles are all major aspects of my training and certification experience. I feel I am fully qualified for the information covered in CIS 332, and should be granted credit for this course.


Network Assignments

Kim Doe Jung is a commercial attaché in the Korean embassy. She works as an investment and financial consultant, providing useful information and data to those interested in investing in Korea. Prior to the interview we had met at a luncheon organized in our college by the Korean Embassy. The luncheon was targeting students wishing to take their postgraduate studies in a foreign country. Also invited along with students were business persons with an interest in investing in the expanding economy of Korea. Kim Doe Jung was a guest speaker, and I was able to secure an interview through the help of one of my father's friends who works in the embassy.

She is an MBA graduate from a Korean University specializing in financial matters. The mere thought of interviewing was exciting and inspiring too. She had been able to accomplish what I have always looked forward to; she has my dream career.

The interview took place inside the Korean Embassy’s expansive offices. She has a beautiful office facing the oval offices from afar. I was taken right up to her 3rd floor office by a security officer and she received me cordially which was rather flattering as I believed she had to be a very busy person.

I had a great interest in knowing what her work duties and responsibilities entail. A commercial attaché, she told me, is generally an agent of her own country, sent to a foreign land to represent her country's commercial and financial affairs in that foreign land. I was hoping for a more specific answer, and to get it I asked her to describe her typical day to me.

She arrives early in the morning, the first thing she does is to update the ambassador on any developments in her field. Then businessmen and women start coming in with all manner of issues. Some would wish to enquire on the likely trend that the inflation in Korea is taking and what the government is doing about it, how their investments are doing, any viable investment opportunities available.

Koreans also drop by just for a casual visit; others have solid reasons, like wishing the Korean government to negotiate for trading concessions and low export duties for their goods. This is her typical day. Day in, day out, she is supposed to have answers to these questions as well as be able to analyze the recommendations she receives from the public.

Her answers enabled me to have an idea of what to expect in my career dreams and was able to get from her responsibilities the enormity of the challenges a career diplomat goes through.

To her, being a diplomat is quite a challenge, and ideal candidates for the job have to exercise diligence and good work ethics. One has to have high analytical and communication skills, be a team player, have a willingness to learn new things, the physical stamina to withstand long working hours and the ability to cope and interact with persons of diverse communities. This was very helpful, and this being my dream career, I was able to know the areas I needed to improve on as well as appreciating my strengths (Zachary Bromer, n.d).

The working conditions are just marvelous as I could discern from what I could see: her office was smart and exotically furnished with expensive Korean rugs, she was also expensively dressed. She told me that her job is well paying as one has to be well compensated for accepting to work overseas away from her family.

This interview, I must say, was an eye-opener. It was my first interview with a person of such high social standing, one who represents the interests of a faraway state. Her confidence and intelligence were equally inspiring. Now I have a strong conviction to follow my intended career path, armed with the information that she gave me. I have to act with reasonable diligence and work to improve on my strengths and weaknesses to achieve my lifetime goal of a career as a diplomat.


Zachary Bromer, contributor. Dream job: diplomat. Available online.


Network Infrastructure Planning

Course number CIS 408, Network Infrastructure Planning, addresses the issue of network design in both peered-network and client/server environments. The topics emphasized in this course are network topology, routing, IP addressing, name resolution, virtual private networks (VPNs), remote access and telephony. I believe that my training and experience as a Microsoft Certified Systems Engineer (MCSE) fully encompasses the topics included in this course, and I should receive work-life credit for this course.

I gained the skills and knowledge included in this course through a number of training courses for exams leading up to my MCSE certification. The main exam in this series for network infrastructure planning was Exam 70-219, Designing a Microsoft Windows 2000 Network Infrastructure, which I took in 2001.


In addition to the associated training, the certification required work experience consisting of one or more years designing network infrastructure in an environment with more than 200 users; at least 5 physical locations; all typical network services, including file and print servers, proxy servers and/or firewalls, messaging servers, desktop clients and remote dial-in or VPN servers; and remote connectivity requirements including remote offices and individual users, as well as connection of corporate intranet services to the Internet.

Some facets of the topics covered in this course were also covered in Exam 70-296, Planning, Implementing and Maintaining a Microsoft Windows Server 2003 Environment for MCSE Certified on Windows 2000, which I took in 2005 while gaining my Microsoft Certified System Administrator (MCSA) certification. Requirements for this exam included the MCSE certification I had gained previously, as well as experience in network infrastructure planning and user support.

Network topology planning was covered in Exam 70-219. This included considerations such as physical layout of the proposed network, LAN topology requirements, physical connectivity requirements and business case analysis for the network proposal. Current hardware availability as well as planned network growth, upgrades and user growth were discussed. Network security, both software-based and physical, was taken into consideration. I learned to both design a network topology from scratch as well as to modify an existing topology for new requirements.

Routing requirements using both TCP/IP and DHCP were also covered in these training sessions. Designing TCP/IP subnetting, implementation and optimizing TCP/IP routing strategies, as well as integrating existing systems with newly designed systems were discussed and practiced.

Name resolution using such protocols as DNS and WINS was covered in detail. I learned to create a number of different DNS designs, including a basic design, a highly-available design and security-enhanced designs. I also learned how to optimize DNS designs, performance measurement for DNS and how to efficiently deploy a new DNS system. WINS was also discussed; design strategies, optimization and performance measurement, and deployment were covered exhaustively. Multi-protocol strategies for maximum interconnectivity and flexibility were also discussed.

Design of remote access, telephony and external access strategies, including WAN (wide-area network) and VPN strategies as well as Internet connectivity, were a further topic of these training sessions and the subsequent exam. WAN design was covered from the standpoint of both dial-in and VPN access.  Dial-in remote access security was emphasized, with design considerations including Routing and Remote Access protocols and authentication with RADIUS (Remote Authentication Dial-in User Service).

VPN (virtual private network) access was discussed, with Routing and Remote Access being emphasized as well as a demand-dial strategy. The training also encompassed telephony system design considerations, including traditional telephony switchboard-based services as well as Voice over IP (VoIP) services. Connectivity to external Internet was also a focus of the training; design considerations included inbound connection control, firewalling and proxy servers and other security requirements unique to the corporation.

My training and experience as a Microsoft Certified Systems Engineer has thoroughly prepared me in the subject matter offered in this course. Formal training as well as six years' experience in network infrastructure planning, including such designs as network topology, protocol configuration and monitoring, integration of telephony, remote access and outside connectivity services as well as attention to business requirements, has given me a depth of knowledge and experience in network infrastructure planning equal to or greater than the knowledge I would gain from CIS 408. I feel I am very well qualified to receive work-life credit for this course.





Network security and business

Company X is reputed to be the world's leading manufacturer and supplier of sportswear (sports shoes and vestments) and sports equipment, with its headquarters situated in the Portland, Oregon metropolitan area. The company accrued revenue worth 16 billion US dollars in 2007 alone. In the year 2008, the company is credited with having recruited 30,000 employees globally, while at the same time maintaining its status as the sole Fortune 500 title holder as far as the state of Oregon is concerned. In this essay, the vulnerabilities experienced by the company shall be looked at in respect to network security, which entails working towards the protection of information that is passed or stored through or within the computer.

The company was founded in 1964 and then later rebranded in 1978. The company is so well established that it not only sells its products under its own company name, but also does so through its subsidiaries. In addition to this, company X also owns other sports companies. In an effort to realize expansion, company X extended its services to run retail stores under its name. As a result, company X has approximately 19,000 retail departments in the US alone. In the same vein, the company sells its products in about 140 countries globally.

The company has traditionally sponsored sportsmen and women around the globe and has a very distinct logo and slogans. The slogans used by this company, unlike those of its competitors, made it to the top five slogans of the 20th century, and the company was accredited for this by the Smithsonian Institution. By 1980, company X had hit the 50% market share mark in the United States, being only 16 years old. The most recent inventions by this company involve the production of new models of cricket shoes which, in comparison to their competitors', are 30% lighter (Bernstein, 1996).

The company seeks to maintain its vibrant market and its upper hand against its competitors by producing products that appeal to the tastes of the materialistic youth. The sportswear company produces and sells assortments used in the sundry and diverse world of sports, such as basketball, athletics, golf, American football, tennis, wrestling, skating, football and skateboarding, among others.

Company X, having become a global entity, also faces many problems that come with expansionism. The troubles touch on cases of workers' rights in relation to occupational safety and health matters. These cases are more prevalent in the developing economies than in developed ones.

Conversely, there are also issues about social responsibility that border on the environmental safety in relation to the production processes and wastes produced by the company. The problem also stretches to its outsourcing vendors, who together with the company have been challenged by critics to work towards carbon neutrality.

Critics have also dismissed as lies the claim by the company that it increased the salary scale of its workers by 95%. These critics posit that the company seeks to exploit its workers, of whom 58% are young adults aged between 22 and 24 years, while 83% of the workers are women. Half of these workers are said to have gone through their high school educational programs. Because few of these people have work-related skills, critics maintain, the subsidiaries of company X are reported to be using this state of affairs to exploit their employees by paying them very minimal wages (Mc Nab, 2004).

Again, it is reported that 20% of company X's contract factories deal in the casual production of the products. These factories are always bedeviled by cases of harassment and abuse, which the company has moved to sort out by liaising with the Global Alliance to review the first twenty-one of the most notorious factories. The company also set up a prescribed code of conduct so as to inculcate social responsibility among the workers.

Spates of continual human rights abuse nevertheless persisted. In Indonesia, for example, 30.2% of the workers of company X are reported to have been victims of exploitation. 56% of these workers are said to have undergone verbal abuse. In about the same spectrum, 7.8% are reported to have been subjected to unwanted sexual comments, while 3.3% are said to have been abused physically. In Pakistan, the matter deteriorated into cases of child abuse and the conscription of child labor. For instance, in that country, the issue came to global attention when pictures were displayed portraying children stitching footballs which were then to be sold by this company.

Another matter that haunts company X is the protection of information, commonly called network security in the corporate world and in computer science and management. In recent developments, concerns over privacy have soared and became the subject of public furore and debate when security experts, after conducting research at the University of Washington, found that company X's iPod sport kit had the ability to track people. Initially, the surveillance system that works through the company's iPod sports kit had been designed to allow the user (mainly the sportsperson) of this facility to take note of the calories burned, the speed, the distance covered and the time spent when undertaking sports activities.

The kit was fitted with a transmitter designed to be stuck on the shoes, communicating with the iPod. The tracking is made possible by the fact that the transmitter relays a particular ID. Although the problem first seemed minuscule, because the information could only be accessed from a shoe up to 60 feet away, it was later found that more problems would set in, since the surveillance or tracking data could then be fed to Google Maps (Sloot, 2003).

In order to remedy this matter, comprehensive laws are being proposed so that company X and its counterparts who use these systems can be forced to beef up security in the models, a measure which these companies have been ignoring. Related speculation is also rife that the company's retailing contractors are using RFID tags for tracking their consignments and keeping track of stock. This problem is hydra-headed since, apart from the obvious fact that it may scare away potential customers, it has also exposed the company to anti-company X campaigns, which have widely been activated and managed by the Caspian organization in the US.

Customers will shy away from the products since the communication system of company X seems to have been compromised in the CIA (confidentiality, integrity and availability) of information security. Confidentiality portends that only the permitted authorities access information, while integrity ensures that information stays only within the precincts of the authorized handlers. Availability, on the other hand, demands that those who are authorized to access information are able to do so efficiently and quickly. The external leaking in and out of confidential information can be very costly and should always be avoided.

Company X is working to ameliorate this problem. On 5th March 2008, in Oregon, it held a meeting in which the departmental heads and subsidiary representatives met and analyzed the extent of the vulnerability (they had already come into the board meeting knowing the nature and the extent of the risk). As an immediate contingency, company X decided that it was going to suspend the sale of the iPod transmitters as a measure of risk avoidance.

Having also become aware that there was the danger of information systems being invaded by hackers (as was seen on 31st July 2007, when tens of its computers in Pakistan succumbed), consensus was reached that all computer systems in the organization adopt the man-in-between technique by deploying a firewall computer security system that will be able to inspect the nature of incoming information.

On another front, company X agreed that it would globally review its wireless networking: the technology that supports the connectivity of each computer to a specific network or networks. This does not portend coming up with a new system of networking, but bolstering the configurations and the security systems. New and stronger RAM (Random Access Memory) modules were bought and have already been set in place. This will ensure that the router systems within the company's areas of jurisdiction are very strong and very fast in detecting anomalies (Raquet and Saxe, 2005).

The computer technicians in company X suggested that the leaking of the company's secret information could be due to the fact that the computer connectivity in Pakistan could have been in the open mode configuration. These technicians elaborated that open mode connectivity allows anyone, even outside the building, to access information from an open mode configured computer. The situation becomes more vulnerable in the present day due to the portability of computers (laptops and palmtops).

Open mode wireless computers have a preset name that makes the computer, on being turned on, start broadcasting packets which inform all wireless devices within the precincts about the availability of connectivity (Albanese and Sonnenreich, 2003). However, should the computers be switched to the closed configuration, the beacon packets are no longer broadcast by the access point.

It was also discovered that although the headquarters were already filtered, not all of the subsidiaries were. It is against this backdrop that the computer technicians, under the aegis of the company's information technology department, recommended that Wired Equivalent Privacy (WEP) encryption be introduced to ward off even sophisticated hackers. WEP ensures that the data being relayed is not in a readable format; it becomes readable only after being decrypted on receipt. This leaves data captured in transit unreadable, since it is still encrypted. The hacker is frustrated unless in possession of the encryption key.
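The idea of a symmetric stream cipher like the one WEP uses (RC4) can be sketched in a few lines. The toy Python implementation below is for illustration only, and certainly not for real use: WEP's RC4-based design has long been broken. Encryption and decryption are the same XOR operation, so running the function twice with the same key recovers the plaintext, while a captured ciphertext stays unreadable without the key.

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """RC4 stream cipher: XOR the data with a key-derived keystream.

    Encryption and decryption are the same operation.
    """
    # Key-scheduling algorithm (KSA): permute S using the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]

    # Pseudo-random generation algorithm (PRGA): XOR the keystream with the data
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)


secret = b"quarterly sales figures"
ciphertext = rc4(b"shared-key", secret)     # unreadable in transit
recovered = rc4(b"shared-key", ciphertext)  # same key recovers the plaintext
assert recovered == secret
```

Because both ends share the key, the receiver "decodes in a backward manner" simply by applying the same keystream again.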


As a concept, network security is very important in protecting a company's secret information. Good and comprehensive network security keeps secret information from flowing outwards to unwanted parties while enabling the efficient flow of information within the enterprise. The systems of communication (the hardware, the software and the orgware) are also adequately protected.

Company X would accrue higher returns if it enhanced all of the network security systems at its disposal.


Albanese, J. and Sonnenreich, W. (2003). Illustrations on network security. US: McGraw-Hill.

Bernstein, T. (1996). Internet security designed for business. US: Wiley.

McNab, C. (2004). Assessment of network security. US: O'Reilly.

Raquet, C. and Saxe, W. (2005). Advocacy and governance of business network security. US: Cisco Press.

Sloot, P. (2003). International conference of computational science.


The Network Operating System For Habibi’s Restaurant

Log-on security is crucial in protecting a computer network. A restaurant that uses computers for faster, more efficient and less time-consuming communication must keep up with software updates to ensure the safety of its computer services. As defined in Wikipedia (2007), the system must use a network operating system (NOS), since this software controls networking: it manages message traffic and queues when many users are on the network.

The software does not only aid quick access; it also performs administrative functions and plays a special role in security. Compared with desktop operating systems such as Windows XP, a NOS runs to deliver optimum network performance. It is most commonly used in local area networks and wide area networks, but is also applicable to a broader array of networks. A NOS operates across the layers of the OSI reference model.

The restaurant could use a current NOS such as Novell NetWare, Windows NT or 2000, Sun Solaris or IBM OS/2 to achieve the best performance at the administrative level. A NOS protects and provides many important services: support for multiple processors, protocols, automatic hardware detection and multi-processing, and security measures such as authentication, authorization, logon restrictions and access control. Other features are name and directory services, back-up and replication services, internetworking or routing, and WAN ports. With these remote access facilities the administrator can log on and log off efficiently. The NOS also aids in auditing, graphical interfaces, clustering, fault tolerance and high-availability systems.

Using Windows Server 2003, Active Directory compatibility is enhanced. There is also better deployment support for transitions, for example from Windows NT 4.0 to Windows Server 2003 and Windows XP Professional. Security services are addressed by changes in the IIS web server, which was rewritten to enhance security.

The Distributed File System (DFS) has many functions, including the maintenance of multiple DFS hosts on a single server, alongside terminal server, Active Directory, print server and other services. Newer versions of Windows Server can be reached via the Remote Desktop Protocol for terminal services; this supports multiple functions, including remote graphical logins for fast performance from a distant server. IIS as used in Windows Server 2003 increases default security because of its locked-down defaults and the built-in firewall.

In March 2005, new improvements and updates were incorporated into Windows Server 2003, much as Windows XP received Service Pack 2. The following are the Service Pack 1 updates for Windows Server 2003. (1) A Security Configuration Wizard helps the administrator research and make changes to security policies. (2) Hot Patching allows DLL, driver and non-kernel patches to be applied without a reboot. (3) IIS 6.0 Metabase Auditing is responsible for tracking edits to the metabase. (4) The Windows Firewall brings many of the Windows XP Service Pack 2 improvements to Windows Server 2003; together with the Security Configuration Wizard, it lets administrators more easily manage incoming open ports, since default roles can be selected and detected automatically. (5) Wireless Provisioning Services is included, along with support for IPv6, which also builds new defenses against SYN-flood TCP attacks. (6) Post-Setup Security Updates is a default mode turned on when a Service Pack 1 server is first booted after installation; it configures the firewall to block all incoming connections and directs the user to install updates. (7) Data Execution Prevention (DEP) helps prevent buffer overflows: the No Execute (NX) bit blocks overflow-based attacks on Windows Server attack vectors.

Windows Server 2003 R2 is the newest update, an installable set of features for Windows Server 2003 that includes SP1. It offers several groups of functionality. (1) Branch Office Server centralizes administrative tools for files and printers, enhances the Distributed File System (DFS), and provides WAN data replication through Remote Differential Compression. (2) Identity and Access Management provides extranet single sign-on and identity federation, centralized administration of extranet application access, automated disabling of extranet access based on Active Directory account information, user access logging, and cross-platform web single sign-on and password synchronization using the Network Information Service (NIS). (3) Storage Management includes the File Server Resource Manager for storage utilization reporting, enhanced quota management, file screening to limit which file types are allowed, and the Storage Manager for Storage Area Networks (SAN) for storage array configuration. (4) Server Virtualization covers four virtual instances. (5) An SDK with UNIX utilities gives a full Unix development environment: Base Utilities, SVR-5 Utilities, Base SDK, GNU SDK, GNU Utilities, Perl 5, and the Visual Studio Debugger Add-in.

Windows Server 2003 also comes in a Datacenter edition, which allows 8-node clustering to improve fault tolerance. By means of clustering, the fault tolerance and availability of server installations are boosted. Clustering also supports file storage connected to a Storage Area Network (SAN), which can serve Windows as well as non-Windows operating systems connected to it. To provide data redundancy and achieve fault tolerance, Windows Storage Server 2003 uses RAID arrays. A Storage Area Network is available in Windows Storage Server 2003, where data is transferred and stored in blocks rather than files. The transferred data is therefore more granular, which yields higher performance in database and transaction processing, while permitting NAS devices to be connected to the SAN.

Windows Storage Server 2003 R2 includes Single Instance Storage (SIS) in the file server to optimize capacity. SIS scans the volumes and moves duplicate files into a common SIS store, thereby reducing storage use by up to 70%. As stated by Couch (2004), installing data protection systems such as uninterruptible power supplies (UPS), redundant arrays of independent disks (RAID), and the tape backup facilities provided by Windows Home Server will aid in the maintenance of the network.
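The space-saving idea behind Single Instance Storage can be illustrated with a short sketch. The hypothetical Python function below (not Microsoft's actual implementation; the file names are invented) hashes each file's contents: the first copy of any content is kept in the common store, and every later duplicate counts as reclaimed space, the way SIS replaces duplicates with links.

```python
import hashlib

def single_instance_savings(files: dict[str, bytes]) -> tuple[int, int]:
    """Return (bytes saved, total bytes) if duplicate contents were stored once."""
    store = {}   # content digest -> name of the single kept copy (the "SIS store")
    saved = 0
    for name, data in files.items():
        digest = hashlib.sha256(data).hexdigest()
        if digest in store:
            saved += len(data)      # duplicate: would be replaced by a link
        else:
            store[digest] = name    # first copy of this content is kept
    total = sum(len(data) for data in files.values())
    return saved, total


volume = {
    "a/report.doc": b"annual report" * 100,
    "b/report-copy.doc": b"annual report" * 100,   # duplicate content
    "c/menu.txt": b"today's specials",
}
saved, total = single_instance_savings(volume)
print(f"{saved} of {total} bytes reclaimable by single-instance storage")
```

The actual percentage saved depends entirely on how much duplication the volumes contain.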


Wikipedia (2007). Windows Server 2003. Retrieved May 10, 2007.

Wikipedia (2007). Network operating system. Retrieved May 10, 2007.

Couch, A. (2004). Network Design System Administration. Retrieved May 11, 2007.


Network installation

Choosing a network that does not meet an organization's needs leads directly to trouble. A common problem arises from choosing a peer-to-peer network when the situation calls for a server-based network. Peer-to-peer networks share responsibility for processing data among all of the connected devices. Peer-to-peer networking (also known simply as peer networking) differs from client-server networking in several respects.

According to the computer specifications, a peer-to-peer network is inadequate here. It can exhibit problems with changes at the network site, and these are more likely to be logistical or operational problems than hardware or software ones. For example, users may turn off computers that are providing resources to others on the network (Rutter, 2008). When a network's design is too limited, it cannot perform satisfactorily in some environments, and the problems vary depending on the type of network topology in effect.

The physical topology of a network is the layout or actual appearance of the cabling scheme used on a network. Multipoint topologies share a common channel; each device needs a way to identify itself and the device to which it wants to send information. The method used to identify senders and receivers is called addressing. (Mitchel, 2008)
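Addressing on a shared (multipoint) channel can be sketched in a few lines of Python. In this illustrative model (the device names and frame fields are invented for the example), every device on the channel sees every frame, but only the device whose address matches the frame's destination accepts it.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    src: str       # address identifying the sender
    dst: str       # address identifying the intended receiver
    payload: str

def deliver_on_shared_channel(devices: list[str], frame: Frame) -> list[str]:
    """Every device sees the frame; only the addressed device accepts it."""
    return [dev for dev in devices if dev == frame.dst]


devices = ["printer-1", "desk-a", "desk-b"]
accepted = deliver_on_shared_channel(
    devices, Frame(src="desk-a", dst="printer-1", payload="print job")
)
# only "printer-1" accepts the frame; the others discard it
```

This is the essence of addressing: the `src` field identifies the sender, the `dst` field identifies the receiver, and everything else on the common channel ignores the frame.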

The term topology, or more specifically network topology, refers to the arrangement or physical layout of computers, cables, and other components on the network. "Topology" is the standard term most network professionals use when they refer to a network's basic design. Other terms used to describe a network's design include physical layout, design, diagram, and map (Mitchel, 2008). A network's topology affects its capabilities.

The choice of one topology over another will have an impact on the type of equipment the network needs, the capabilities of that equipment, the growth of the network, and the way the network is managed. According to Rutter, a network topology needs planning. For example, a particular topology can determine not only the type of cable used but also how the cabling runs through floors, ceilings, and walls. Topology can also determine how computers communicate on the network: different topologies require different communication methods, and these methods have a great influence on the network.

The most popular and most suitable method of connecting the cabling in the proposed computer network is the client-server architecture of a star topology. Here each device connects to a central point via a point-to-point link. Several names are used for the central point, including hub, multipoint repeater, concentrator, and Multistation Access Unit (MAU) (Microsoft MVP, 2004).

For the recommended network, the central point ought to be an intelligent hub, which can make informed path selections and perform some network management. Intelligent hubs route traffic only to the branch of the star on which the receiving node is located. If redundant paths exist, an intelligent hub can route information around the normally used paths when cable problems occur. Routers, bridges, and switches are examples of hub devices that can route transmissions intelligently. Such hubs are advanced enough to accommodate several different types of cable, and there can be a main hub (a hybrid) with sub-hubs, especially for growth purposes.

Intelligent hubs can also incorporate diagnostic features that make it easier to troubleshoot network problems. Hub-based systems are versatile and offer several advantages over systems that do not use hubs. In the standard star topology with hubs, a break in any cable attached to the hub affects only a limited segment of the network, usually a single workstation, while the rest of the network keeps functioning. In this kind of system, wiring can be changed or expanded as needed, different ports can be used to accommodate a variety of cabling types, and monitoring of network activity and traffic can be centralized (Rutter, 2008).
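The "route only to the branch where the receiver sits" behavior can be sketched as a learning forwarding table, much like what a switch maintains. The port numbers and addresses below are invented for illustration; a real intelligent hub or switch also handles entry aging, flooding policy, and redundant paths.

```python
class IntelligentHub:
    """Toy model of a hub that learns which port each address lives on
    and forwards traffic only to that branch of the star."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.table: dict[str, int] = {}   # address -> port it was seen on

    def receive(self, in_port: int, src: str, dst: str) -> list[int]:
        """Return the list of ports the frame is forwarded out of."""
        self.table[src] = in_port          # learn the sender's branch
        if dst in self.table:
            return [self.table[dst]]       # known receiver: that branch only
        # unknown receiver: flood every branch except the incoming one
        return [p for p in range(self.num_ports) if p != in_port]


hub = IntelligentHub(num_ports=4)
hub.receive(0, src="A", dst="B")   # B unknown yet: flooded to ports 1, 2, 3
hub.receive(2, src="B", dst="A")   # A was learned on port 0: sent only there
```

A dumb hub, by contrast, would repeat every frame out of every port regardless of the destination.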

The star topology has many benefits. First, each device is isolated on its own cable, which makes it easy to isolate individual devices from the network by disconnecting them from the wiring hub. Second, all data goes through the central point, which can be equipped with diagnostic devices that make it easy to troubleshoot and manage the network.

Lastly, the hierarchical organization allows isolation of traffic on the channel. This is beneficial when several, but not all, computers place a heavy load on the network: traffic from those heavily used computers can be separated from the rest, or dispersed throughout for a more even flow of traffic. According to Rutter, this topology originated in the early days of computing, when computers were connected to a centralized mainframe.

One machine can act as a server and as a client at the same time, since the setup is not concerned with security. This machine should be the one with the highest processing speed (3 GHz), the largest random access memory (1 GB) and enough disk space (120 GB). The purpose of the server is to concentrate common peripheral devices, which then do not need to be duplicated across the network. This computer can meet the processing and storage needs of other users; it can support many more users in cases of expansion; it enables central administration of resources, giving more consistency and reliability in troubleshooting; and it provides backup for the other machines.

The server has many dedicated, specialized functions in addition to providing basic network services. First, it can be dedicated to managing network printers and print jobs, avoiding unnecessary spooling on workstations. Second, it can manage modems and other types of communication links. It can also be used to store large databases and run database applications.

Fourth, it can run applications for access across the network. It can act as a mail server, providing access to email services as well as sending and forwarding email messages to intended recipients in the network. Lastly, a server may provide a wide range of information from the network to the public Internet or private intranets. It can also be used to maintain, troubleshoot, update and fix the other computers remotely, which is far more effective than trying to explain what to do over the phone.


Topologies remain an important part of network design theory. You can probably build a home or small business network without understanding the difference between a bus design and a star design, but understanding the concepts behind them gives you a deeper understanding of important elements like hubs, broadcasts, and routes.

Work Cited:

Mitchell, Bradley; The New York Times Company (2008). Wireless Networking. Retrieved May 10, 2008.

Microsoft Most Valuable Professional (2004, 1 December). Hardware and software specifications.

Rutter, Daniel (2008, 1 April). Ethernet Networking. Retrieved May 10, 2008.


Importance of Computer Network Service Levels

This paper explains the importance of the different service levels of computer networking, such as availability, reliability, response time and throughput. It aims to educate the people managing the network about these key areas, so that they are always reminded of their duties and responsibilities in securing the network.

Network Computers

Networked computers have become a part of every business, both big and small. People invest their time, effort and money to make sure that communication and information are always available. Information technology is crucial to a business's success, making IT one of the most heavily budgeted departments of an organization. The people who take care of the network should know the importance of service to customers and co-employees with regard to availability, reliability, response time and throughput.

Technology is meant to serve people in the shortest time possible. The network group should always make sure that the network, workstations and other technological resources operated through the network are stable and in good condition. Monitoring should be part of the network group's daily routine to ensure that all devices are working properly and to avoid downtime. Risk management should be implemented and observed at all times, and the group should be knowledgeable enough to sustain the network in all circumstances and emergencies, such as earthquakes or sudden power failures. The IT group, with the help of management and customers, should also agree on the required availability of the business network and other resources, through proper endorsement and reporting, to make sure that communications and transactions are not hampered.
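A minimal building block for such a monitoring routine is a reachability probe. The sketch below is an illustration, not a full monitoring solution; the host names and ports in the example are placeholders, and a real deployment would use a proper monitoring tool and schedule. It simply checks whether a TCP service answers within a timeout.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# Hypothetical daily sweep over a few internal services:
services = [("fileserver.local", 445), ("mail.local", 25), ("intranet.local", 80)]
down = [(host, port) for host, port in services if not is_reachable(host, port)]
```

Running a sweep like this on a schedule, and alerting on anything in `down`, turns "monitoring should be part of the daily routine" into a concrete, repeatable check.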

The information and data gathered from computers and other resources are among the most important tools for decision making in any business or organization, which also makes them among the most sensitive things to monitor. It is important that customers and employees can trust the accuracy of the software and machines they use. The network group's job is to make sure that all data and information are delivered intact to customers and employees every day. LAN testing should be part of the daily routine to verify the reliability of the network.

Response Time

Fast and accurate information and output are vital in this fast-moving world. That is why the word downtime must not be in an IT group's dictionary. Network performance must always be at its best, which can be assured by testing the network regularly. The network group should also be knowledgeable enough to design the appropriate network topology, and to know the tools suited to different kinds of work environment, to ensure the performance, resilience, scalability and flexibility of the network.


Networking is at its best when devices do their work as expected. Users measure processing speed every time they work, and throughput is one measurement of whether a device is doing its job well. For example, if a printer is expected to print 100 pages per hour but only manages 65, the time wasted waiting is a significant drag on users' performance and may harm their ongoing transactions. Network groups must not dismiss this as a small problem, because it may one day become an alarming one. They must take the time to check even the smallest details if they want to avoid larger problems down the road.
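The printer example reduces to a simple throughput-utilization calculation. A small helper like the hypothetical one below makes the check concrete: 65 pages delivered against 100 expected is 65% utilization, meaning 35 pages' worth of capacity is lost every hour.

```python
def throughput_utilization(actual: float, expected: float) -> float:
    """Fraction of the expected throughput actually delivered."""
    if expected <= 0:
        raise ValueError("expected throughput must be positive")
    return actual / expected


util = throughput_utilization(actual=65, expected=100)   # pages per hour
print(f"utilization: {util:.0%}, capacity lost: {100 - 65} pages/hour")
# -> utilization: 65%, capacity lost: 35 pages/hour
```

Tracking this ratio over time is one way to catch a degrading device before users start to complain.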

Computer networking is one of the most challenging tasks that an IT or network group may face. It is the lifeblood of any transaction's success. If the group responsible for building, designing and implementing networks is knowledgeable and capable of maintaining and securing the network, then progress and a good working environment will be at hand.

It is best that the network group is knowledgeable in its field. However, all of the hard work of the IT group will not succeed without the support of their co-workers and top management.




Essays on Social Networking

SOCIAL NETWORKING SITES AS IMPORTANT TOOLS TO FOSTER RELATIONSHIPS

Main ideas: 1. Social networking sites in schools and universities play an interesting role in improving abilities. 2. An advantage of social networking is reconnecting people. 3. Social networking sites offer useful services that create a good environment among friends and family members.

Nowadays, the internet and social networking sites have become useful tools that allow people around the world to communicate and to spread interesting information.

They have been used to support politicians during presidential elections. For this reason I agree with considering social networking sites important tools to foster relationships. In the field of education, these sites are very useful, since students have the opportunity to interact with others while planning and working on school assignments. Teachers, for their part, also find the internet an interesting tool, using it in their classes in the belief that it will help improve students' skills, foster their relationships and create new ones.

On the other hand, it is important to mention that one of the advantages of these sites is that they give people the chance to reconnect with friends and family members they have not been in touch with for a long period of time, allowing them to strengthen and build good relationships. Besides that, social networking sites offer services such as free messaging, photo storage and games, which people can use to spend their free time and to share memorable events with family and friends.

This aspect is very important when looking for a good environment with family members and friends. In conclusion, it is relevant to mention that the purposes of social networking sites include giving people spaces for interaction and letting them stay informed about interesting, up-to-date topics. These two aspects build and foster their relationships with society through communication.


Essay on Social Networking Sites

Social networking sites peaked in 2007. These sites encouraged online social connections. Early sites such as SixDegrees.com and Friendster allowed people to manage a list of friends. One drawback to these sites was that they did not offer users the ability to publish content like blogs. Social networking sites begin with a group of founders sending out messages to friends to join the network. In turn the friends send out messages to their friends, and the network grows. When members join the network, they create a profile.

Depending on the site, users can customize their profile to reflect their interests. They also begin to have contact with friends, acquaintances, and strangers. Founded in 2002, Friendster used the model of friends inviting friends to join in order to grow its network. It quickly signed on millions of users. Unfortunately, as the site grew larger, technical issues surfaced. Painfully slow servers made it difficult for users to move around the site. Additionally, management enforced strict policies on fake profiles. These false profiles, or “fakesters,” as they were known, were deleted by the site.

This approach turned off users. Eventually, Friendster began to lose members in the United States. Fellow networking site SixDegrees.com closed its doors after the dot-com bust in 2000. Within a few years, these early social networking sites found their popularity declining. At the same time, a new social networking site called MySpace was beginning to take off.

THE RISE OF MYSPACE

MySpace brought together the social features of networking sites and the publishing capabilities of blogs. The combination of the two tools struck a home run with teens. Young people were looking for a more social way to blog.

MySpace provided the solution. In 2003 Tom Anderson and Chris DeWolfe launched MySpace in Santa Monica, California. As music fans, the pair designed the site as a place to promote local music acts. They also wanted to be able to connect with other fans and friends. On MySpace, users created a Web page with a personal profile. Then they invited other users to become their friends. According to DeWolfe, the bands were a great marketing tool in the beginning. He said: “All these creative people became ambassadors for MySpace by using us as their de facto promotional platform.

People like to talk about music, so the bands set up a natural environment to communicate."1 Anderson and DeWolfe were determined to keep MySpace an open site. Anyone could join the community, browse profiles, and post whatever they wanted. User control was one of their founding principles. It also made initial financing hard to find. According to Anderson: "We'd get calls from investor types who wanted to meet us. They would say 'Your site isn't professional. Why do you let users control the pages? They're so ugly!'"2 In the meantime MySpace continued to sign people up. Teens and young adults loved the site.

They flocked to create their own profiles. The ability to customize pages, load music, and share videos added to the MySpace appeal. Unlike other early social networking sites, MySpace gave users a media-rich experience. Users could express themselves on their Web page by adding music and video clips. At the same time, they could socialize with friends. MySpace made social contact easier with tools such as e-mail, comment posts, chat rooms, buddy lists, discussion boards, and instant messaging. MySpace brought together the ability to express oneself and to socialize in one place.

The timing was perfect. Over the next two years, MySpace grew at a tremendous pace. The site’s success brought attention from investors. Rupert Murdoch, famous for his media empire, wanted to buy MySpace. Murdoch had interests in television, film, newspapers, publishing, and the Internet. In 2005 Murdoch purchased MySpace for an amazing $580 million. By early 2008 MySpace had grown to a mind-blowing 110 million active users. It signed an average of thirty thousand people up every day. One in four Americans was on MySpace. The Web site had become the giant among social networking sites.

It was the most trafficked site on the Internet. MySpace's influence traveled outside of the United States. The company built a local presence in over twenty international territories. MySpace could be found in places such as the United Kingdom, Japan, Australia, and Latin America. In a few short years, MySpace had become a worldwide cultural phenomenon.

SOCIAL NETWORKING BEYOND MYSPACE

The success of MySpace in the social networking arena spurred the development and redesign of many other online social networks. Some sites appealed to a general audience.

Others, such as Black Planet, LinkedIn, and MyChurch, sought to serve a niche market. Facebook was one site that emerged as an alternative to MySpace. In February 2004 Harvard student Mark Zuckerberg launched Facebook. The site began as a closed network for college students. Closed networks only allow users to join if they meet certain criteria. In contrast, sites such as MySpace and Friendster were open social networking sites. Anyone could sign up for an account. Open and closed social networks have advantages and disadvantages. Open networks foster interaction between adults and teens.

Parents can check up on their teen’s profile and decide if they are comfortable with their child’s online image. On the other hand, open access means that profiles are completely public and can attract unwanted attention. Closed networks are generally smaller. As such, there is a greater chance a user will know other members both online and offline. But a closed network blocks parents from reading their teen or college student’s profile. Being closed also limits a social network’s ability to grow and attract new users. As a closed college network, Facebook grew by adding more colleges to its network.

By the end of 2004, Facebook had almost 1 million active users. As Facebook’s popularity grew, it expanded beyond colleges to high school and international school users. At this point, however, the site was still restricted to a limited pool of student users. In 2006 Facebook made a pivotal decision. It opened the network to the general public, expanding beyond its original student base. By May 2008 Facebook boasted over 70 million active users. At that time, it was the second-most trafficked social networking site behind MySpace and the sixth-most trafficked site on the Web.

As an alternative to MySpace, Facebook's social network gained popularity with business professionals and colleagues. Facebook's purpose was to help users connect online with people that they already knew offline. Unlike the wild-looking pages found on MySpace, Facebook promoted a clean, orderly online experience.

VIDEO- AND PHOTO-SHARING SITES

Online social networking evolved into a full multimedia experience with the arrival of video- and photo-sharing Web sites. Users could upload visual content to share with friends and other users. Photo-sharing sites such as Flickr enabled users to transfer digital photos online to share with others.

Users decided whether to share their photos publicly or limit access to private groups. Users could also use the site’s features to organize and store pictures and video. One of the most popular video-sharing Web sites was YouTube. The site, founded in 2005, used Adobe Flash technology to display clips from movies and television, music videos, and video blogs. Users could upload, share, and view video clip topics from the latest movies to funny moments captured on film. Not everyone wanted to create a profile, write a blog, or upload pictures and video.

Other social networking tools allowed these users to participate online. E-mails sent messages to a friend's electronic mailbox. Instant messaging was a real-time conversation between two people online at the same time. Comment posting allowed users to interact and talk about a friend's blog, profile, or pictures. Even online gaming was a form of social networking, allowing players to meet other people with similar interests online.

WHY IS ONLINE SOCIAL NETWORKING SO POPULAR?

The popularity of online social networking has prompted researchers to explore the similarities between online social networks and tribal societies.

According to Lance Strate, a communications professor at Fordham University, social networks appeal to people because they feel more like talking than writing. "Orality is the base of all human experience," said Strate. "We evolved with speech. We didn't evolve with writing."3 Irwin Chen, an instructor at Parsons design school, is developing a new course to explore oral culture online. He agrees with Strate. "Orality is participatory, interactive, communal and focused on the present," he says. "The Web is all of these things."4 Michael Wesch teaches cultural anthropology at Kansas State University.

He studied how people form social relationships while living with a tribe in Papua New Guinea. He compared the tribe to online social networking. "In tribal cultures, your identity is completely wrapped up in the question of how people know you," he said. "When you look at Facebook, you can see the same pattern at work: people projecting their identities by demonstrating their relationships to each other. You define yourself in terms of who your friends are."5 Despite the connections between social networks and tribal cultures, significant differences exist.

In tribal societies, relationships form through face-to-face contact. Social networks allow users to hide behind a computer screen. Tribal societies embrace formal rituals. Social networks value a casual approach to relationships. Millions of people across the world have joined online social networks. Perhaps their popularity stems from our innate desire to be part of a community. According to Strate, social networking “fulfills our need to be recognized as human beings, and as members of a community. We all want to be told: You exist.”6


Local Study About Social Networking

TOPIC: CORRELATIVE ASSESSMENT OF REALITY TELEVISION AND SECONDARY STUDENTS’ VALUES FORMATION IN STO. NINO FORMATION AND SCIENCE SCHOOL DURING S/Y 2012-2013

CHAPTER 1: THE PROBLEM AND ITS BACKGROUND

Introduction: The world today is being controlled by technology, with all the various types of new inventions and gadgets. People are slaves to the products of the intelligence of mankind. People follow the trends of the world; whatever is new, people follow. The influence of media is very destructive to humankind. The invasion of new television programs is trending, especially among teenagers.

Reality television began in 1948 with Allen Funt’s TV series Candid Camera. Reality television is television programming that presents purportedly unscripted melodramatic or humorous situations, documents actual events, and usually features ordinary people instead of professional actors. Reality television represents the life of rich, high-class individuals who thrive off drama, materialistic items and fame. Girls are being very liberated and show off their interest in men, or they make the first move instead of the guy moving first. Reality television shows serve as entertainment for all of their viewers, young or old.

Producers want viewers to think and believe that these shows are not scripted. The individuals featured are often very selfish, childish and materialistic. When people watch reality television programs, they tend to think that what they are seeing is true to life. And because of that, they believe that what they see on TV is what life really is. Viewers of reality television who are addicted to these daily programs often get deeply involved in any situation. Often, certain reality television shows are based on topics that have no thought process or concept behind them.

The audience thus gets hooked on television shows which do not really have any intelligent concept. For example, these shows often highlight constant fights or disagreements within a group, and even telecast certain moments not suitable for viewing by a family audience. However, some shows may show positive things which viewers can learn and apply in their daily lives. For example, a person can learn about teamwork, or be motivated in life to achieve their goals or even chase a dream. It is a problem because reality television programs are not exactly real life on camera.

Rather, the shows are edited and scripted into melodramatic television to make them more interesting and more exciting. The producers edit and script these shows to have more conflict, more danger, more negativity. History has shown that when a mass of people can easily be controlled by a single person or a group of people, grave harm can result. Reality television characters influence viewers, especially teenagers, very effectively, particularly through the daring segments of these television programs.

Especially considering that these are presented as reality television shows. Reality television is not really reality; unluckily, many people think that it is. These television shows draw hundreds, thousands, and even millions of viewers of all ages because they are entertaining. Reality TV has been the focus of so much criticism because of the doubtful honesty of the messages some of the shows depict. Unrealistic expectations. The late novelist Kurt Vonnegut once described media in terms that may apply to reality shows.

He explained how TV and movies have caused people to expect reality to be much more dramatic than it really is: “… because we grew up surrounded by big dramatic story arcs in books and movies, we think our lives are supposed to be filled with huge ups and downs. So people pretend there is drama where there is none.” Nothing proves Vonnegut’s theory like America’s love for reality TV. Shows such as “The Real World” and “The Hills” are filled with over-dramatic fights and intrigue. But unlike books and movies before them, reality TV claims to be representative of real life.

This helps people believe more than ever before that life should be full of dramatic ups and downs that don’t really exist. Enjoying the misfortunes of others. Waite also expresses the fear that reality shows such as “Temptation Island” bring out viewers’ attraction to mortification. “Temptation Island” revolved around trying to get monogamous couples to be unfaithful. Waite says of heavy watchers of these sorts of shows, “They expect it’s OK to humiliate and to be humiliated by others, instead of thinking there’s something wrong with this behavior.” The worst human behavior.

Psychologists George Gerbner and Larry Gross of the University of Pennsylvania developed the “cultivation theory,” which asserts that prolonged exposure to television can shape a viewer’s concept of the world. Basically, the more television someone watches, the more he will believe the world is as it is presented by the TV. I can see how this might apply to older generations who didn’t grow up in the Information Age. Take my grandpa, for example. He watches nothing but the news and he’s convinced the world is a violent and dangerous place. I’d probably think that, too, if all I watched were reports of thefts, shootings, and terrorism.

I wonder if the “cultivation theory” applies to reality TV shows. If I did nothing but watch “Big Brother” all day, would I start to believe there were cameras scattered throughout my home and my family was conspiring to vote me out of the house? Voyeuristic urges. The idea that reality TV nourishes voyeuristic behavior sounds like a great argument. Who would want to raise a society of Peeping Toms? Thankfully, this criticism has no merit. Voyeurism is, by definition, “the practice of obtaining sexual gratification by looking at sexual objects or acts, especially secretively.” The key word here is secretively.

All voyeuristic pleasure is removed if the person being watched knows she’s being watched. A threat to intelligence. Reality TV critics claim that these shows pander to the dim-witted and somehow manage to make the rest of us dumber for watching. I don’t think it’s possible to lose brain cells or cognitive functioning simply from tuning into a TV show. I think a far greater concern for critics is the sense of superiority viewers derive from watching reality TV. The truth is many people watch these shows to feel better about their own lives. What does that say about our society’s ability to promote a healthy self-image?

Entertainment. Critics of reality TV argue that television should be used to educate, inform, and enlighten viewers. I agree television is an excellent medium for teaching, disseminating information, and promoting the arts, but it is also a vehicle for entertainment. It’s a way to peer into another world for amusement and fun. Television offers viewers a needed break from the daily pressures of life; it’s a healthy occupation for the mind. These are just some of the reasons why we decided to study this topic. It is a stepping stone for us and for all the teenagers out there, especially the secondary students here in SNFSS.

This is to prevent bad things from happening, and to open up their eyes to what they know and see about reality television programs. It is not the case that whatever we enjoy seeing or doing is therefore right. Sometimes it is easier and more fun to do the bad or wrong things, especially nowadays. Technology is getting better and better, and the media invades the world; it conquers people’s minds and beliefs. The goal we want to achieve in studying this topic is to open everyone’s eyes to what reality television programs are all about.

We carry out this analysis and research because we want to know the positive and negative effects of TV programs on the values formation of the secondary students in SNFSS during SY 2012-2013, and the possible effects of and feedback on reality TV programs.

Theoretical Framework
Kohlberg’s theory of moral development is a theory based upon research and interviews with groups of young children. A series of moral dilemmas were presented to these participants, who were also interviewed to determine the reasoning behind their judgments of each scenario. Kohlberg was not interested so much in the answer to the question of whether Heinz was wrong or right, but in the reasoning behind each participant’s decision. The responses were then classified into the various stages of reasoning in his theory of moral development. Level 1, Stage 1 (Obedience and Punishment): The earliest stage of moral development is especially common in young children, but adults are also capable of expressing this type of reasoning. At this stage, children see rules as fixed and absolute. Obeying the rules is important because it is a means to avoid punishment.

Stage 2 (Individualism and Exchange): Children and adults account for individual points of view and judge actions based on how they serve individual needs. In the Heinz dilemma, participants argued that the best course of action was the choice that best served Heinz’s needs. Reciprocity is possible at this point in moral development, but only if it serves one’s own interests. Level 2, Conventional Morality; Stage 3 (Interpersonal Relationships): Often referred to as the “good boy-good girl” orientation, this stage of moral development is focused on living up to social expectations and roles.

There is an emphasis on conformity, being “nice”, and consideration of how choices influence relationships. Stage 4 (Maintaining Social Order): At this stage of moral development, people begin to consider society as a whole when making judgments. The focus is on maintaining law and order by following the rules, doing one’s duty, and respecting authority. Level 3, Post-Conventional Morality; Stage 5 (Social Contract and Individual Rights): At this stage, people begin to account for the differing values, opinions and beliefs of other people. Rules of law are important for maintaining a society, but members of the society should agree upon these standards. Stage 6 (Universal Principles): Kohlberg’s final level of moral reasoning is based upon universal ethical principles and abstract reasoning. At this stage, people follow these internalized principles of justice, even if they conflict with laws and rules.

Conceptual Framework
Input. Profile variables: 1. Surveys about the given problems from Grade 7 to 4th year high school. 2. Comparison between watching and not watching reality television programs.
Throughput/Process. Observation of the ongoing survey; surveys of 10 to 15 people each from Grade 7 and 3rd year HS and from 2nd year and 4th year HS; comparison of the differences between the answers of the Grade 7 and 3rd year students and those of the 2nd year and 4th year high school students.
Output. Results of the survey from the secondary students.
Figure 2: Conceptual framework of the study depicting the profile variables and the results of the survey from the secondary students.

The first box in our figure is the input. We have our profile variables: the first is the surveys about the given problems from Grade 7 to 4th year high school.

The second is the comparison between watching and not watching reality TV shows. These are the required or available data to be used in our surveys with the secondary students of SNFSS. The second box in our figure is the throughput, or the process. Our process will be like this: we will conduct a survey of 10 to 15 people from the Grade 7 and 3rd year high school students, and we will observe and compare it with the results of the survey of the students from the 2nd year and 4th year high school. We will compare the differences between the two batches of students.

The third box in our figure is the output, and in it are the results of the surveys.

Hypothesis (Null Hypothesis)
There is no significant effect of watching reality TV programs on the values formation of the secondary students of SNFSS during SY 2012-2013.

Statement of the Problem
1. What is the reality television show most commonly watched by the secondary students of SNFSS (SY 2012-2013)?
2. How does it help you in your daily living at school or at home?
3. What are the advantages of watching different reality TV programs?
4. What are the disadvantages of watching different TV programs?

Scope and Delimitation
This study is conducted among the secondary students of the Sto. Nino Formation and Science School during the school year 2012-2013 to determine whether there is a significant effect of watching reality TV programs on the values formation of the secondary students. Pinoy Big Brother is one of the most famous reality TV shows here in the Philippines, followed by Survivor Philippines, among others. Our main focus in this study is to know the advantages and disadvantages of reality TV programs in the values formation of the secondary students here in SNFSS.


Network Programming Project Report

Project Report
1155028688 Wang Cong

Overview
This project is a practice of Windows socket programming. In this program we need to complete the following things:
1. Establish a TCP connection to the server.
2. Create a TCP socket listening on a port for the ROBOT program.
3. Create a UDP socket for receiving packets.
4. Send and receive messages via TCP sockets.
5. Send and receive messages via UDP sockets.
6. Compare the throughputs with different buffer sizes.
The program is written in C++.

Program Design
I defined the following functions to complete the tasks:

* void OnError()
This function prints an error message and calls WSACleanup(). It is designed to simplify the clean-up steps when an error occurs.

* SOCKADDR_IN *CreateSocket(SOCKET &s, u_short port, int type = IPPROTO_TCP, bool isServer = false)
This function creates a socket for a TCP/UDP connection.
Parameters:
s: reference to the socket
port: the port on which to create the connection
type: the connection type (TCP/UDP)
isServer: set to true to establish a TCP server socket listening on the port, or false to establish a TCP client socket
Return: a pointer to a SOCKADDR_IN object, or NULL if any error occurred (in which case OnError() is called).

* int sendTCP(SOCKET &s, const char *m)
This function sends a TCP message over the stream socket.
Parameters:
s: reference to the socket
m: the message to be sent
Return: SOCKET_ERROR if any error occurred; otherwise the number of bytes sent.

* int recvTCP(SOCKET &s, char *buffer, int len)
This function receives a TCP message from a stream socket.
Parameters:
s: reference to the socket
buffer: the buffer to receive the message
len: the buffer length
Return: SOCKET_ERROR if any error occurred or the buffer is a NULL pointer; otherwise the number of bytes received.

These functions are designed to make the program clearer. More detail is given in the program itself.

Bonus Part: Testing the Relationship Between Throughput and Buffer Size
Figure 1: Testing on localhost
Figure 2: Testing on a 100M LAN
Figure 3: Testing on an 802.11b wireless network

We can draw the following conclusions from the figures above:
1. When testing on localhost, the throughput increases as the buffer size increases; after a particular buffer size, the throughput decreases.
2. When testing on the 100M LAN, the throughput increases and approaches the maximum speed (100 Mbps).
3. When testing on the wireless network, the throughput is relatively stable, because the speed of the wireless network is relatively slow. The throughput still does not reach the highest speed; I think this is because of higher delay than in the LAN and localhost environments.
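The report does not reproduce the measurement code itself, so as an illustration, here is a minimal sketch of how the throughput for each buffer size might be computed. The function names (`throughputMbps`, `measure`) are hypothetical, not taken from the project; `sendFn` stands in for a call such as sendTCP() so the timing logic can be shown without a live socket.

```cpp
#include <algorithm>
#include <cassert>
#include <chrono>
#include <cmath>
#include <cstddef>

// Hypothetical helper: throughput in megabits per second from a byte
// count and an elapsed duration.
double throughputMbps(std::size_t bytes, std::chrono::duration<double> elapsed) {
    return (bytes * 8.0) / (elapsed.count() * 1e6);
}

// Sketch of the measurement loop the report implies: send a fixed total
// amount of data in chunks of `bufferSize` bytes and time the transfer.
template <typename SendFn>
double measure(std::size_t totalBytes, std::size_t bufferSize, SendFn sendFn) {
    auto start = std::chrono::steady_clock::now();
    std::size_t sent = 0;
    while (sent < totalBytes) {
        std::size_t chunk = std::min(bufferSize, totalBytes - sent);
        sendFn(chunk);  // in the real program: a sendTCP(s, buffer) call
        sent += chunk;
    }
    std::chrono::duration<double> elapsed = std::chrono::steady_clock::now() - start;
    return throughputMbps(sent, elapsed);
}
```

Repeating `measure` over a range of buffer sizes would reproduce the curves in the figures above.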


The Use of Social Networking Sites

By Ogechi Ebere
Their Advantages, Abuses and Dangers

Introduction: Human beings by and large are social. They feel an inherent need to connect and expand their connections. There is a deep-rooted need among humans to share. In the past, due to geographical distances and economic concerns, connections between people were limited. A social network is made up of individuals that are connected to one another by a particular type of interdependency. It could be ideas, values, trade, anything.

Social networks operate on many levels. Initially, social networking happened at family functions where all relatives and friends would congregate under one roof. Social networking has always been prevalent; it is just that in these times the face of social networking has changed. Where earlier the process was long drawn out, involving a chain movement in which one person led to another through a web of social contacts, today the process is highly specialized. I’ll introduce you to the most important advantages, disadvantages and dangers of social media, so you’ll be aware of how to use it in the safest and most valuable possible ways!

Advantages of Social Networking Sites:
1) Low Cost Communication (essentially free). If you go on to social networking sites such as Facebook, Bebo or MySpace, you can send messages back and forth to multiple friends at once, absolutely free of charge (apart from the cost of actually running the internet, computer, etc.).
2) Making New Friends. You are given the opportunity to make new friends via such sites, whether that be ‘suggested friends’ (a friend of a friend) or online relationships that can be formed due to a shared interest or hobby.
3) The Ability to Upload Videos and Images.

Most social networking platforms have the capability to allow you to upload particular media, so a wide selected audience can view your pictures and videos. This saves you having to send images and videos directly to each person you want to see them; instead they can simply pop over to your account profile and view them.
4) The Ease of Setting Up Events. Facebook allows you to create events: an online organised meeting to do something (in the real world) with a set time and place. Via Facebook you can offer invites to the event and make announcements, etc.

Again this saves time, as you aren’t having to go around communicating with everyone individually.
5) Sharing Knowledge. Social networking sites give you the capability to share information with ease, and by doing this Julia Porter states that people are able to “increase both their learning and their flexibility in ways that would not be possible within a self-contained hierarchical organisation”. This particular statement was in regard to passing information around scientists; however, it can also be applied to other organisations.
6) Finding Old Friends. Social networking is a great tool to reunite with friends, with social networking sites such as “Friends Reunited”, where a simple sign-up and filling in a few details allows you to be found by old friends.
7) Tools for Teaching. As students are using a wide range of social networking sites already, teachers have taken advantage of this. Teachers have started to set up online academic discussions (through threads and chat rooms) for their students to participate in.

Social network platforms also provide teachers with the ability to help students out with homework and communicate with parents.
8) Pursuing Jobs and Work Experience. Twitter in particular is a great tool for this; tweeting that you are interested in a particular job or internship could be a great step to actually securing one. Your followers may not actually have an opportunity for you, but they may know people who do, so you don’t only tap into your network, you tap into the networks of the people who are following you (i.e. the friend-of-a-friend’s network).

Disadvantages of Social Networking Sites:
1) The Invasion of Privacy. It has been addressed on many occasions in the news and in the press that we are giving away too much ‘personal information’ about ourselves, and that this is leading us to become vulnerable to the likes of identity theft, etc.
2) Reducing Worker Productivity. There is evidence to suggest social networking sites are harming businesses: their employees waste time throughout the day by participating in social networking sites rather than actually working.

It has been stated that Facebook alone is accountable for wasting more than £130 million a day in the UK.
3) How Much Do Social Networking Sites Know? Perhaps social networking sites have learned a bit too much information for comfort. Facebook, via their program ‘Facebook Beacon’, analyses our natural online behaviour: how long we are on the internet, how often we visit certain websites, etc. They monitor your activity even when you aren’t actually logged in to Facebook!
4) Potential to Cause Harm.

There have been many reported cases where fake accounts have been made that led to horrific tragedies, such as in October 2006, when a fake MySpace account created under the name of Josh Evans was closely linked to the suicide of Megan Meier.
5) The Case of Cyber-Bullying. As many young teenagers are using social networking sites as a form of communication, this just provides bullies with another opportunity to traumatise their victims. With few limitations from social networking sites on what people can actually post, bullies have the ability to publish offensive images and comments.
6) “Trolling”. These are rather common occurrences where individuals will post within a social network to either annoy or spark a reaction through a post or general comment; these people are often referred to as trolls. This is not really bullying, more being a nuisance on the social network.
7) Causing a Lack of Personal Communication. There is a concern over people becoming so reliant upon the convenience of social networking sites that they aren’t actually using ‘real-life’ verbal skills, and they are losing out on social intimacy with other people.
8) Psychological Issues. Studies have been conducted whose results suggest that people are becoming addicted to social networking sites, e.g. a case of a fourteen-year-old spending over eight hours a day on Facebook. There is also evidence to suggest that these sites can cause a person to feel ‘lonely’.

The Dangers of Social Networking Sites: There are many inherent dangers of social networking sites because of the way the websites work. One of the biggest dangers is fraud, sometimes having to do with identity theft.

Because these sites are based on friends and the passing along of bits of personal information, thieves realized the potential instantly. There are endless social networking scams that crooks can try to pull off with this medium, and we have only seen the “tip of the iceberg” so far. The newest mainstream social network is Twitter. It is based on people following others and getting to read their tweets of 140 characters or less. This is one of the dangers of social networking sites, because many people want as many followers as possible and they aren’t shy about what they say in their tweets.

This highlights the trouble with Twitter and many other social networking sites. Many people’s goal on these sites is to have as many friends as possible, and they just don’t think before they message or add friends. Unfortunately, this sets them up to be victimized by one scam or the next. The biggest social network in the world is Facebook. Started in 2004 by a Harvard student, this site has had a meteoric rise. Facebook has become a huge software platform that houses every application imaginable and millions of games, groups and users. This brings us to another one of the dangers of social networking sites.

With the goal of becoming bigger than big, can these sites really protect the average user? Yes, much of this should be up to the individual user, but certain things cannot be controlled by the user, and when the site has 200 million users (100 million log in every day!), how much resource can be used for protecting clients of the site? With so many people logged in every day who contribute personal information constantly, the crooks have followed, committing “Facebook identity theft” to get what they need. There are truly endless scams they have tried and will try to pull off on the social networks.

Teen Social Networking by the Numbers:
* 51: percentage of teens who check their sites more than once a day.
* 22: percentage who check their sites more than 10 times a day.
* 39: percentage who have posted something they later regretted.
* 37: percentage who have used the sites to make fun of other students.
* 25: percentage who have created a profile with a false identity.
* 24: percentage who have hacked into someone else’s social networking account.
* 13: percentage who have posted nude or seminude pictures or videos of themselves or others online.
Facebook identity theft, like any other online identity theft, can be extremely dangerous for teenagers. With their brains still developing, it is very easy to take advantage of them by pretending to be someone else (usually a crush). There have been many cases like this, and it devastates the teen when the criminal reveals that it was a hoax. Many teens have fallen into deep depression or even lost their lives: an enormous tragedy and one of the biggest dangers of social networking sites. Be careful! There are thousands and thousands of Facebook impostors out there looking to make an easy buck or harass people they know.


An Analysis of Project Networks as Resource Planning Tools

Usage and availability of resources are essential considerations when establishing project networks in resource planning. This analysis focuses on some of the risks of certain actions used to offset resource constraints, the advantages and disadvantages of reducing project scope, and options for reducing project duration along with their advantages and disadvantages. If implemented correctly, careful consideration of the outlined risks will make managing a project a little less painful.

Following is an analysis of project networks as resource planning tools.

The analysis will be segmented into three topical areas:
* Risks associated with leveling resources, compressing or crashing projects, and imposed durations or “catch-up” as the project is being implemented;
* Advantages and disadvantages of reducing project scope to accelerate a project, and what can be done to reduce the disadvantages;
* Three options for reducing project duration, and the advantages and disadvantages of these options.

Risks Associated with Leveling Resources, Compressing, or Crashing Projects, and Imposed Durations or “Catch-Up”
The text (Gray and Larson, 2008) gives good definitions of the risks associated with certain actions used to offset resource constraints. The act or process of evening out “resource demand by delaying noncritical activities (using slack) to lower peak demand” (Gray and Larson, 2008) is called leveling resources.

This action ultimately increases resource utilization, which is more than likely the desired result. Even though one may get the desired result resource-wise, leveling resources often pushes out the end date of a project; in most cases that is the extreme outcome. Another risk that rears its head when slack is reduced is loss of flexibility, which equates to an increase in critical activities. Without slack anywhere in a project network, ALL activities become critical. This means that everything has to fall perfectly into place in order to stay on the prescribed timeline. Compressing a schedule means conducting project activities in parallel. Compressing is not applicable to all project activities.

A good example can be seen if you have activities labeled “Hire Workers” and “Dig Foundation”. You can’t implement “Hire Workers” and “Dig Foundation” in parallel, because to dig a foundation you need someone to do the digging (brighthub.com/office/project-management/articles/51684.aspx#ixzz0ongX7ECF, 20 May 2010). Risks of compressing include:
* increased risk of rework;
* increased communication challenges; and
* the possible need for more resources.
Crashing a schedule involves allocating more resources so that an activity can be completed on time or ahead of time, on the assumption that by deploying more resources the activity can be completed earlier.

One good aspect of crashing a schedule (just like compressing) is that you do not need to crash all activities. The activities that impact the schedule are those with no slack, so they are the only ones affected. Risks associated with this action are as follows: “Budget: since you allocated more resources, you will not deliver the project on-budget. Demoralization: existing resources may get demoralized by the increase in people to complete activities that were originally assigned to them. Coordination: more resources translate to an increase in communication challenges” (brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj, 20 May 2010).
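To make the crashing rule concrete, here is a minimal sketch (not from Gray and Larson or the cited articles; the activity names, costs, and function name are made up for illustration) of the standard selection step: crash only activities on the critical path, starting with the one whose crash cost per day is lowest.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical model: each activity carries the extra direct cost of
// shortening it by one day and a flag for whether it has zero slack.
struct Activity {
    std::string name;
    double crashCostPerDay;  // extra cost to cut one day from this activity
    bool critical;           // true if the activity lies on the critical path
};

// Return the index of the cheapest critical activity to crash, or -1 if
// no activity is critical. Crashing non-critical work saves no time,
// since the critical path alone sets the completion date.
int cheapestToCrash(const std::vector<Activity>& acts) {
    int best = -1;
    for (int i = 0; i < static_cast<int>(acts.size()); ++i) {
        if (!acts[i].critical) continue;  // off the critical path: skip
        if (best == -1 || acts[i].crashCostPerDay < acts[best].crashCostPerDay)
            best = i;
    }
    return best;
}
```

After each day is cut, the critical path must be recomputed, since crashing can shift it onto other activities, as noted below.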

These risks, combined or by themselves, can ultimately reduce the effectiveness of the existing resources.

Advantages and Disadvantages of Reducing Project Scope to Accelerate a Project, and What Can Be Done to Reduce the Disadvantages
Reducing the scope of the project can lead to big savings in both time and cost. It typically means the elimination of certain tasks. At the same time, scaling down the scope may reduce the value of the project such that it is no longer worthwhile or fails to meet critical success factors. An advantage of reducing project scope is that the project is more likely to stay on schedule and on budget. It also allows more focus to be applied to the remaining deliverables in the project scope.

A disadvantage that may arise is a loss of quality in the work, due to key quality deliverables being cut in order to balance the timeline of the project. The key to offsetting the disadvantages is “reassessing the project requirements to determine which are essential and which are optional. This requires the active involvement of all key stakeholders. More intense re-examination of requirements may actually improve the value of the project by getting it done more quickly and for a lower cost.” (justanswer.com, 21 May 2010)

Three Options for Reducing Project Duration and the Advantages and Disadvantages of These Options
Reducing the duration of a project is usually achieved by reducing the duration of an activity or activities, which almost always results in higher direct cost.

When the duration of a critical activity is reduced, the project’s critical path can change to run through other activities, and that new path will determine the new project completion date. Following are three options for reducing project duration. Adding Resources: this is a popular method to reduce project time, by assigning additional staff and equipment to activities, if it is assessed appropriately. The activities at hand need to be researched accordingly, with a proper determination of how much time will be saved, rather than just throwing bodies at the problem. The first thing that comes to mind when you add resources is “double the resources, cut the length of the project in half.”

The unforeseen disadvantage that arises is the increase in the amount of time that existing team members must spend explaining what has been done already and what is planned. This increases the overall communication time spent by the team, which ends up consuming valuable time.

Outsourcing Project Work: A common method for shortening the project time is to subcontract an activity. The subcontractor may have access to superior technology or expertise that will accelerate the completion of the activity (Gray and Larson, 2008). Additionally, significant cost reductions and flexibility can be gained when a company outsources (Gray and Larson, 2008).

Disadvantages that may be experienced are conflict due to contrasting interpersonal interactions, and internal morale issues if the work has normally been done in-house (Gray and Larson, 2008).

Scheduling Overtime: The easiest way to add more labor to a project is not to add more people, but to schedule overtime. The businesslink.gov website outlines potential advantages of overtime working:

* a more flexible workforce
* the ability to deal with bottlenecks, busy periods, cover for absences and staff shortages without the need to recruit extra staff
* increased earnings for employees
* avoidance of disruption to jobs where the workload is more difficult to share, e.g. transport and driving
* the ability to carry out repair and maintenance which has to be done outside normal working hours

However, disadvantages may include:

* the expense of premium overtime rates
* inefficiency if employees slacken their pace of work in order to qualify for overtime
* regular long working hours, which can adversely affect employees' work, health and home lives
* fatigue, which may increase absence levels and lead to unsafe working practices
* employee expectations of overtime, leading to resentment and inflexibility if you try to withdraw it (businesslink.gov, 22 May 2010)

Conclusion

Usage and availability of resources are essential considerations when establishing project networks in resource planning.

This analysis has focused on some of the risks of certain actions used to offset resource constraints, the advantages and disadvantages of reducing project scope, and options for reducing project duration together with their advantages and disadvantages. If implemented correctly, careful consideration of the outlined risks will make managing a project a little less painful.

References

Brighthub.com. Difference Between Schedule Crashing and Compressing. Retrieved 20 May 2010, from http://www.brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj

Brighthub.com. When to Crash or Compress a Schedule. Retrieved 20 May 2010, from http://www.brighthub.com/office/project-management/articles/51684.aspx#ixzz0onfuKUmj



Network Design Project

Situation in which the Project Exists: This project is for a residential data communication network. The proposed network is designed to connect 2 workstations and 1 printer. It will provide internet access as well as multiple email addresses. The client has approved an initial investment of $5,000 to implement the networking project. The two workstations will be HP Pavilion laptops with AMD Turion II Dual-Core Mobile processors, at $529.99 each. The printer will be a PIXMA wireless multifunction printer/copier/scanner for $99.
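As a quick sanity check on the figures above, the quoted hardware prices can be tallied against the approved $5,000 budget (a simple illustrative calculation; tax, shipping, and the monthly service fee are left out):

```python
# Prices quoted in the proposal.
LAPTOP_PRICE = 529.99   # HP Pavilion laptop, each
PRINTER_PRICE = 99.00   # PIXMA wireless multifunction printer
BUDGET = 5000.00        # client's approved initial investment

hardware_total = 2 * LAPTOP_PRICE + PRINTER_PRICE
remaining = BUDGET - hardware_total

print(round(hardware_total, 2))  # 1158.98
print(round(remaining, 2))       # 3841.02
```

The hardware alone leaves ample headroom in the budget for the monthly service and any incidentals.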

The client desires a mobile network with the ability to work virtually anywhere in the house. The laptops both come with internal wireless adapters, 500 GB of hard drive space, and two processing cores running at 2.2 GHz each. For multitasking power the laptops come with 4 GB of DDR2 DIMM memory, expandable to 8 GB. The wireless printer prints up to 26 ppm in black and up to 17 ppm in color, and it prints, copies, and scans for convenience. The printer also has built-in memory card slots that support cards of various capacities and sizes.

The network will be designed to accommodate the client's mobility needs and business/operational objectives. We have decided to implement a wireless LAN architecture to provide the customer with maximum mobility, using the Verizon FiOS service, which comes with a wireless router and offers download speeds of up to 50 Mbps and upload speeds of up to 20 Mbps for $139.95 per month. All Verizon High Speed Internet packages include one account with eight additional sub-accounts, totaling nine accounts.

A wireless router is a wireless access point with several other useful functions added. The router converts the signals coming across the Internet connection into a wireless broadcast and steers data in an intelligent way, eliminating a lot of the sluggishness found in typical peer-to-peer networks. (Networks that don't have servers are peer-to-peer networks, because each computer has equal ranking.) Like wired broadband routers, wireless routers also support Internet connection sharing and include firewall technology for improved network security.

A key benefit of wireless routers is scalability. Their strong built-in transceivers are designed to spread a wireless signal throughout the home. A general rule of thumb in home networking says that 802.11b and 802.11g WAPs and routers support a range of up to 300 feet, but obstructions in a home such as brick walls and metal frames can reduce the range of a Wi-Fi LAN by 25% or more. The router will be placed in an optimal location away from microwave ovens, 2.4 GHz cordless phones, and garage door openers, all of which can cause signal interference.
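The rule of thumb above is easy to quantify; a tiny illustrative calculation of the usable range after a 25% obstruction penalty:

```python
def effective_range(nominal_ft: float, reduction_pct: float) -> float:
    """Usable Wi-Fi range after obstructions cut the nominal figure."""
    return nominal_ft * (1 - reduction_pct / 100)

# 802.11b/g nominal range of 300 ft, reduced 25% by brick walls
# and metal framing, leaves roughly 225 ft of usable coverage.
print(effective_range(300, 25))  # 225.0
```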

In densely populated areas, wireless signals from neighboring homes can sometimes cause interference. This happens when two households are set to conflicting communication channels. When configuring an 802.11b or 802.11g router, you can change the channel number used. The default administrator username and password for the router will be changed immediately. All Wi-Fi equipment supports some form of encryption, and we will be using 128-bit WEP encryption by assigning a WEP passkey. The passkey should be unique and long.

For extra security we will be changing the default SSID, or network name, which identifies the network. This should also be unique. Most wireless routers can filter devices based on their MAC address. Enabling MAC address filtering allows the router to keep track of the MAC addresses of all devices that connect to it and to accept connections only from those devices. The MAC address is a unique identifier for networking hardware such as wireless network adapters.
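The filtering rule itself is just an allow-list check. A hypothetical sketch of the logic a router applies when MAC address filtering is enabled (the MAC addresses below are invented; real routers expose this feature through their admin interface rather than code):

```python
# Invented example addresses for the two laptops and the printer.
ALLOWED_MACS = {
    "a4:5e:60:11:22:33",  # laptop 1
    "a4:5e:60:44:55:66",  # laptop 2
    "00:1e:8f:77:88:99",  # wireless printer
}

def admit(mac: str) -> bool:
    """Accept a connection only from a registered device (case-insensitive)."""
    return mac.lower() in ALLOWED_MACS

print(admit("A4:5E:60:11:22:33"))  # True  - registered laptop
print(admit("de:ad:be:ef:00:01"))  # False - unknown device rejected
```

Note that MAC filtering raises the bar for casual intruders but is not strong security on its own, since addresses can be observed and spoofed; it complements, rather than replaces, the encryption settings above.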

The SSID broadcast feature will be disabled as well. Many wireless routers routinely transmit the Wi-Fi network name (SSID) into the open air. This roaming feature is unnecessary, and it increases the likelihood that someone will try to log in to the network. The two laptops and the wireless printer will all be assigned static IP addresses, and DHCP will be turned off on the router to prevent network attackers from easily obtaining a valid IP address. The Verizon network router comes with built-in firewall capability.

Firewall programs can be very effective at keeping intruders out of the network and out of your computer. We will ensure that the router’s firewall is turned on, and for extra protection we will install and run personal firewall software on each computer connected to the router. Next the printer software will be installed on each computer and connectivity will be ensured. Implementation is complete once all nodes are connected to the router and functioning correctly.



Social Networking and College Athletes

Freedom of Speech in College Athletics

Brent Schrotenboer argues that the reputation of colleges is more important than the views and opinions of the student-athletes who attend them. Student-athletes on the women's soccer team at San Diego State University were suspended for posting inappropriate pictures and statuses on a social networking site. They were warned by their coach that a punishment would be issued if they continued posting such statuses about consuming alcoholic beverages and criticisms of the soccer program.

The students did not heed their coach's warning and were thus penalized for it. The student-athletes felt that the punishment violated their fundamental right of freedom of speech outlined in the Constitution. College administrators are desperately searching for a solution to this ongoing problem, since anyone can access the postings of college students and athletes alike. Some colleges leave it to the discretion of coaches to regulate their players' social networking activities, and others set regulations for all sports programs.

The total prevention of the use of social media by college athletes should not be implemented by college administrators, for three reasons: alternative solutions exist, such as programs that help coaches monitor students' social activity; social media is a valuable tool for student-athletes to connect with their fans and the world; and criticism is a fundamental right of any citizen of the United States.

As the issue of social networking in the college environment grows more difficult, solutions to this debate have been researched, and one potential aid to coaches is the development of applications that help monitor student-athletes' social media postings. Medcalf explains that Varsity Monitor is a firm that provides a computer application that allows schools to filter and identify problematic social media activity ("Policing"). Applications such as Varsity Monitor can greatly increase the power of coaches to regulate what their athletes post without encroaching on the right of freedom of speech.

These applications do not prevent the athletes from posting inappropriate statuses, yet they allow the coaches to filter the statuses and delete them if warranted. This does not take away the freedom of speech, because once the posts are up anyone can see them, so the act of free speech is upheld. If the coaches do not want the statuses to continue to be seen, however, they have the ability to delete them at their own discretion.

The coaches should clearly include that the applications are being used in their code of conduct if one exists at the university or college so as to prevent discrepancies among players and coaches when the coaches use their application to delete a post. Social media is a very effective way for fans and peers of college athletes to connect with each other. It is also used to quickly convey news about the team or college from the players to the fans which is considered vitally important to the recipients of the news because they want to support their favorite team.

Bruce Feldman interviewed Matt Barkley, USC’s starting quarterback who frequently uses twitter, and he stated “It’s my own words, my own thoughts that are coming directly from me, they (the media) can’t twist your words, because that’s exactly what you wrote” (“Social-media”). The social networks allow the athletes to voice their own opinion that is not altered by the media because what they post is exactly in their own words and it is not relayed to the public by a separate news writer or analyst.

This is a valuable aspect of social networking to college athletes because it solidifies their right of freedom of speech, and it allows their true opinion to be relayed directly to their fans. This also means that student-athletes must take responsibility for their own posts, and be aware that a negative response from their fans and the public is a possible outcome in reaction to their posts. Criticism is an important factor included in the freedom of speech, and at times it can be very controversial.

College athletes must be mindful of what they post and must recognize that posting criticism can be risky. College coaches around the nation agree that student-athletes can be immature, and it is the coaches' responsibility to guide their players in what they say and do when in the public light. Zain Motani acknowledges that athletic departments and universities need to protect their brand, but asks at what point this monitoring becomes Big Brother-like and oversteps the boundaries of what is and is not okay ("The Use of Social Media").

Coaches should guide their players in what they say instead of over-regulating their social networking policies, in order to uphold the First Amendment, which includes the freedom of speech. Many colleges and universities agree that their reputations cannot be tainted under any circumstances and will take any degree of action to prevent a scandal associated with their respective college. Many administrators hold the opinion that the easiest way to prevent a scandal is to ban all social networking activity by student-athletes.

Another policy being enforced at some universities requires players to give their passwords to their coaches. These policies violate the freedom of speech because they completely prevent players from expressing their own opinions. In this regard, college athletes are just like any citizens of the United States, and preventing them from using social networking sites takes away their constitutional right. The ongoing debate between coaches and their student-athletes seems monumentally difficult to resolve.

Finding a solution that pleases both sides of the argument is a delicate procedure. New technologies should be researched that allow coaches and administrators to exercise their power of regulating what their athletes post without angering them. An application like Varsity Monitor can be implemented with improvements that give coaches the ability to monitor and regulate what their athletes post before they are submitted for the public to see unlike the present programs that only allow the deletion of already posted statuses and pictures.

However, the athletes must be made aware of the use of these applications, and it must be explained in detail, in order to prevent misunderstanding between the two parties. Coaches can state which applications they are using and how they are using them in the original code of conduct signed by both coach and athlete. This can prevent the posting of inappropriate statuses and pictures by student-athletes for good.


Utilizing Online Social Networking Sites Paper

Utilizing Online Social Networking Sites Paper

Class: BSHS/352

Technology is constantly expanding and making it easier and more convenient to communicate and network with individuals and various organizations that we may not otherwise have had the opportunity to connect with. One area of technology that is growing at a fast rate, and that offers individuals and businesses, whether in their professional or personal lives, the opportunity to make lasting connections, is social networking sites. Social networking has become an excellent tool for businesses and individuals to connect and share information that can prove vital to their business.

Sites like Facebook and LinkedIn are becoming popular and are an effective way to grow your business, whether through networking with similar organizations and getting beneficial information from them or expanding your clientele by reaching out to those who may need or want your services. LinkedIn has become a vital tool for working professionals, assisting them in making connections and linking up with other working professionals to share what works and what doesn't, as well as connecting them with local or online support groups or networking groups.

Members of LinkedIn are able to create a profile that gives a detailed list of their educational background as well as their work experience. Users are able to browse the social networking site to view the profiles of other individuals, organizations, or companies within their field and follow the organization of their choice and its postings. My ultimate dream is to create a nonprofit organization geared toward targeting at-risk youth and their families.

The whole concept is to help the whole family: not just to focus attention on the youth who may be having emotional or behavioral issues, but to offer mental and emotional support for the entire family, implementing various programs and workshops that will assist the entire family in growing, working, and playing together. Networking sites like LinkedIn can prove vital as I take the steps necessary to make this dream a reality. As I was browsing through the site I came across a few groups in my local area that meet monthly for lunch to discuss the ideas and challenges of those looking to start a nonprofit.

I also took the time to search for companies or organizations that were geared toward working with and advocating for children. I was really quite excited to be able to look at their profile, view their web pages and doing so helped me to get some ideas and get my juices flowing. I have considered making connections with the various organizations I have seen on LinkedIn in hope that they could link me to information, people, and training opportunities that could possibly put me one step closer to my dream.

I am also interested in going to the next luncheon for nonprofit communicators in Raleigh just to get feedback on my idea; you never know, someone at one of these luncheons could either help me get closer to making my dream a reality or link me to an individual or organization who can. I have found that sites such as LinkedIn can be extremely beneficial in making lasting connections within the business community. They give business owners the opportunity to link up with other businesses to get feedback and advice, and possibly to connect with someone who can help take an organization or company to the next level.

This site also enables professionals to come together with the common ground of helping and motivating each other. Within the human service field this site can connect you to so many resources that can only assist in providing your clients with the ultimate experience. Having a site where human service workers from all fields and from all areas can come together online and share their experience, advice, and resources can prove to be helpful to the community as a whole.

LinkedIn not only connects like-minded people but also offers an opportunity to share information regarding training and workshops that could assist organizations in staying up to date with the latest software and/or regulations. Such training and workshops can keep your organization competitive and allow you to offer your clients the best possible service. The best way to keep any business or organization growing is to continue to gain knowledge in your particular field.

Always be willing and open to learning and growing; that is what the training and workshops are there for: to assist businesses and organizations in improving their techniques and staying relevant and competitive. Sites such as LinkedIn can offer you the ability to gain knowledge and training from some very successful people. Human service workers who use online social networking sites such as LinkedIn can find that affiliation with professional groups and connections offers more than just shared experiences, advice, training, and connections to resources.

Another benefit of being part of an online community such as LinkedIn is the ability to request referrals from the connections you have made online. Users can also request sponsorships or recommendations from other users. Human service workers who are affiliated with sites such as LinkedIn may also be able to connect with local churches that could help link them to the communities that need their assistance the most. Employers often look at profiles on these online social networking sites to help them find employees as well.

Although social networking sites such as Facebook and LinkedIn are excellent tools to stay connected to various resources there are other technical tools that can be used to expand and maintain your connections. Smartphones have proven to be a vital tool to use as well with various applications directed at making the life of professionals easier. The goal of a human service worker is to effectively and efficiently assist the client in improving their lives and often times this requires connecting them to other resources.

Social networking sites such as LinkedIn can assist human service workers in making numerous connections, all at the touch of a mouse, to various resources and training opportunities. Having online support that provides advice, training, and encouragement can assist human service workers in helping their clients meet their goals.

Reference: LinkedIn.com (2012). Retrieved from http://www.linkedin.com/home?trk_tab_home_top


Social Network Essay

Social Network Essay

Social networking can be a useful tool for keeping in touch with friends and family, but when it is used as a substitute for actual face-to-face contact it can be a dangerous thing. There is no denying that social networking is a very large part of our lives: in September 2011 Facebook registered 800 million users. Social networking can have good effects on people and help them out, but it can also be used inappropriately and have disastrous effects on people's lives.

Social networking may have some cons, but if used properly it can be a very useful device. When used properly it can help you stay in touch with people you wouldn't normally be able to, like friends or family overseas. Instead of having to call or write a letter you can just talk over the internet. It may not only be family or friends that you want to talk to; there might be someone that you like but don't have the confidence to talk to. You can build up your confidence over the internet and not worry about stumbling over your words.

One of the arguments made by people who are against social networking is that it can reduce face-to-face contact, but if you use it well it can actually increase it. You can organise things very easily compared to other ways, like over the phone. Things like Facebook can be very helpful if used right, but that can be the problem. People may feel they are being social, but online interaction is no substitute for face-to-face contact. "Facebook is a tool. I compare it to a car: you can drive to isolate yourself from others or you can drive to meet people.

If you use Facebook to increase face-to-face contact, it increases social capital." It can help people, but only if you do the right thing. This points to a really bad side of social networking: it can promote loneliness. People will feel as though they are being really social, but in reality they are becoming lonelier. People will feel as though no one really knows who they are and what they are really like. It can make people feel even worse when they see a new photo album or a post saying "best day eva" tagged with some friends. That can make people feel left out and not part of a group.

It can make people jealous of others and wonder why they didn't get invited. It isn't always accidental when people get hurt, though. Social networking can be good if it is used correctly, but the problem is that a lot of people don't use it that way. People don't always realise that posting something when you are ten or fifteen can come back to hurt you when you are twenty-five. You could do something, or have photos of yourself on Facebook from when you were younger, and if you are going for a job and the employer finds them, you could end up missing out on the job just because of that. It can also give kids a much easier way to bully their peers.

In the schoolyard you have teachers around to stop it, but over the internet there isn't someone there to stop it. You do have to be careful about who you are talking to, because they might not always be who you think. You shouldn't add people you don't know, because you don't know who they are or what kind of person they are. Social networking is a good thing, but it must be used correctly or else it becomes a very dangerous place for everyone. It can be a very useful and important device, but it may be us who end up destroying it.


Network management and the changing milieu

A ‘network’ can be described as “a system used to link two or more computers.” [1]

There are network connections that are used in the process: (1) the physical connections, which pertain to the medium that are used in sharing files, programs, etc.; and (2) the logical connections, which pertain to the protocols used in sharing files, programs, etc.[2]  However, in order to share and open files, messages, programs, and/or devices, a network needs proper management for its three layers of the application software, network software, and network hardware to work accurately and efficiently.  This paper will revolve around network management, its importance to the society, the state of network management nowadays, and how information systems like networks can be managed more effectively in the future.

Network management is “the activities, methods, procedures, and tools that pertain to the operation, administration, maintenance, and provisioning of networked systems.”[3]  There are certain significant functions that are used in managing a particular network, and these should include each of the following: (1) controlling, (2) planning, (3) allocating, (4) deploying, (5) coordinating, and (6) monitoring.[4]

There can also be the use of certain access methods (e.g., SNMP, CLIs, XML) as well as schemes (e.g., WBEM, CIM), which support the mechanisms used in network management.  By the term 'mechanism' we refer to the management of agents, synthetic monitoring, activity logs, and real user monitoring.[5]  Yet Cisco Systems, Inc. has defined network management more specifically as "a service that employs a variety of tools, applications, and devices to assist human network managers in monitoring and maintaining networks."[6]

Despite the reliability of connected computer applications and programs nowadays, the functioning of these devices is also influenced by the characteristics of other protocols, other connections, and other devices, which may not always be perfect.  There are crucial elements involved in the processing of networking which may hinder or delay the progression of an activity or service.  For this reason, it is very important that network management be strictly and sufficiently organized, maintained, planned, and monitored, especially since networks are not always perfectly controlled, and both reliable and unreliable networks influence the transmission of data in a given environment.

Companies in the 21st century usually aim for 99.9% availability when it comes to network management.[7]  As stated in the Encarta Encyclopedia, "Networks are subject to hacking, or illegal access, so shared files and resources must be protected."[8]  Certain techniques may include data encryption and authentication schemes, especially when dealing with issues of privacy and protection of rights.  Others focus more on autopolling network devices or generating network topology views that support improvement.

It is said that the three most important properties of networks are "the lowest latency, highest capacity, and maximum reliability despite intermittent features and limited bandwidth."[9]  While data is reorganized and transformed into smaller frames, packets, and segments, there are certain significant factors that affect the transmission of the data: first is latency, or the time span of delivery; second is packet loss inside the intermediate devices; third is retransmission, which leads to delays; fourth and final is throughput, or the amount of traffic within a network.[10]  For these reasons, network management appears to be the critical key to making sure that the network functions well despite the failures, attacks, and inconsistencies that are crucial in any type of society or network.
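As a rough illustration of how these factors combine, the sketch below models usable throughput and transfer time to first order (the figures and formulas are simplified assumptions for illustration, not a precise network model):

```python
def goodput_mbps(link_mbps: float, loss_rate: float) -> float:
    """First-order estimate: lost packets must be retransmitted, so usable
    throughput scales with the fraction delivered on the first attempt."""
    return link_mbps * (1 - loss_rate)

def transfer_seconds(size_mb: float, goodput: float, latency_s: float) -> float:
    """Transfer time = delivery latency plus serialization at the goodput rate
    (size converted from megabytes to megabits)."""
    return latency_s + (size_mb * 8) / goodput

# Example: 100 Mbps link, 2% packet loss, 50 ms latency, 10 MB file.
g = goodput_mbps(100, 0.02)
t = transfer_seconds(10, g, 0.05)
print(round(g, 1))  # 98.0 Mbps of usable throughput
```

Even this crude model shows why latency, loss, retransmission, and throughput must be monitored together: each one lengthens the effective delivery time.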

Nowadays, network management is set more on the use of certain protocols, such as the 'Simple Network Management Protocol' or SNMP, or the 'Common Management Information Protocol' or CMIP.[11]  Since the 1980s, when there was "tremendous expansion in the area of network deployment,"[12] and companies went with the trend of building and expanding their networks from different types of network technologies, organizations have seen the need for automated network management that could function in diverse situations and environments, inside and outside the country.

The basic structure then adopted was usually composed of a set of relationships following a specific paradigm: end stations, or managed devices, run specific software that alerts staff (through computers) whenever problems, inconsistencies, or emergencies arise.[13]  It may also include the polling of end stations to check specific variables, through automatic or user-initiated polling, in which certain 'agents' (managed devices) respond and store data that the management staff of a network system accesses through protocols.  Network management today revolves around an architecture that links the computers through a management entity, which communicates with the agents, sometimes via a proxy server, using the management database of each device.
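The polling loop described above can be sketched in miniature. The device names, variable names, and alert threshold below are invented for illustration; a real manager would speak SNMP to actual agents rather than to Python objects:

```python
# Toy model of the manager/agent paradigm: agents hold MIB-like variables,
# and the manager polls them and flags values that cross a threshold.
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.mib = {"sysUpTime": 0, "ifInErrors": 0}  # invented variables

    def respond(self, oid: str):
        """Return the stored value for a polled variable, if known."""
        return self.mib.get(oid)

def poll(agents, oid: str, threshold: int):
    """Poll every managed device and flag those exceeding the threshold."""
    alerts = []
    for agent in agents:
        value = agent.respond(oid)
        if value is not None and value > threshold:
            alerts.append(agent.name)
    return alerts

agents = [Agent("router-1"), Agent("switch-2")]
agents[1].mib["ifInErrors"] = 120  # simulate a failing interface
print(poll(agents, "ifInErrors", 100))  # ['switch-2']
```

The design point is exactly the one the paradigm describes: the intelligence (thresholds, alerting) lives in the management entity, while agents only store and report values.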

With all this, James McKeen insisted in his book 'Making IT Happen: Critical Issues in Managing Information Technology' that the IT milieu is changing rapidly under two forces: relentless business pressures and a rapidly evolving technology landscape,[14] which together bring greater risks within a changing technology environment around the globe.  Thus, information systems such as networks can be managed more effectively in the future by producing better, faster, more agile architectures and functions that can break through beyond these two forces of change.


“Chapter 6: Network Management Basics.” Internetworking Technology Handbook, no.1-58765-001-3 (2006). Database on-line. Available from Cisco Systems, Inc.

McKeen, James D. Making IT Happen: Critical Issues in Managing Information Technology. England: John Wiley & Sons Ltd, 2003.

“Network (computer systems).” Encarta Encyclopedia (2007): 1-2. Database on-line. Available from MSN Encarta.

“Network Management.” Wikipedia Online Encyclopedia (2008). Database on-line. Available from the Wikimedia Foundation, Inc. database.

 “Network Performance Management.” Wikipedia Online Encyclopedia (2008). Database on-line. Available from the Wikimedia Foundation, Inc. database.

[1] “Network (computer science),” Encarta Encyclopedia (2007) [database on-line]; available from MSN Encarta, p. 1of 2.
[2] Ibid.
[3] “Network Management,” Wikipedia Online Encyclopedia (2008) [database on-line]; available from the Wikimedia Foundation, Incorporated database.
[4] Ibid.
[5] Ibid.
[6] “Chapter 6: Network Management Basics,” Internetworking Technology Handbook (2006) [database on-line]; available from Cisco Systems, Inc, accession number 1-58765-001-3, p. 1 of 6.
[7] Internetworking Technology Handbook, 1.
[8] Encarta Encyclopedia, 2.
[9] “Network Performance Management,” Wikipedia Online Encyclopedia (2008) [database on-line]; available from the Wikimedia Foundation, Incorporated database.
[10] Ibid.
[11] Ibid, 2.
[12] Internetworking Technology Handbook, 1.
[13] Ibid.
[14] James D. McKeen, Making IT Happen: Critical Issues in Managing Information Technology (England: John Wiley & Sons Ltd, 2003), 1.


Wired and Wireless Network

Wireless vs Wired Networks

There are two kinds of network technologies:
* Wireless – communicates through radio waves
* Wired – communicates through data cables (most commonly Ethernet-based)

Why choose a wireless network?

Wireless networks don’t use cables for connections, but rather they use radio waves, like cordless phones. The advantage of a wireless network is the mobility and freedom from the restriction of wires or a fixed connection.

The benefits of having a wireless network include:
* Mobility and freedom – work anywhere
* No restriction of wires or a fixed connection
* Quick, effortless installation
* No cables to buy
* Save cabling time and hassle
* Easy to expand

Also known as Wi-Fi, or Wireless Fidelity, wireless networks allow you to use your network devices anywhere in an office or home, even out on the patio. You can check your e-mail or surf the Internet on your laptop anywhere in your house. There is no need to drill holes in the wall and install Ethernet cables.

You can network anywhere – without wires. Outside your home, wireless networking is available in public “hotspots,” such as coffee shops, businesses, hotel rooms, and airports. This is perfect for those of you who do a lot of traveling. Linksys wireless routers are also equipped for wired connections – giving you the best of both worlds – connect wirelessly when you’d like to roam around your house, and connect wired when the utmost speed is important to you. For convenience and ease of use, wireless networking is the answer.

Why choose a wired network?

Wired networks have been around for decades. The wired networking technology found today is known as Ethernet. The data cables, known as Ethernet network cables or wired (CAT5) cables, connect the computers and other devices that make up the networks. Wired networks are best when you need to move large amounts of data at high speeds, such as professional-quality multimedia.

The benefits of having a wired network include:
* Relatively low cost
* The highest performance possible
* Fast speed – standard Ethernet cable up to 100 Mbps; Gigabit Ethernet cable up to 1000 Mbps

Computer networks for the home and small business can be built using either wired or wireless technology. Wired Ethernet has been the traditional choice in homes, but Wi-Fi wireless technologies are gaining ground fast. Both wired and wireless can claim advantages over the other; both represent viable options for home and other local area networks (LANs). Below we compare wired and wireless networking in five key areas:
* ease of installation
* total cost
* reliability
* performance
* security

About Wired LANs

Wired LANs use Ethernet cables and network adapters. Although two computers can be directly wired to each other using an Ethernet crossover cable, wired LANs generally also require central devices like hubs, switches, or routers to accommodate more computers. For dial-up connections to the Internet, the computer hosting the modem must run Internet Connection Sharing or similar software to share the connection with all other computers on the LAN. Broadband routers allow easier sharing of cable modem or DSL Internet connections, plus they often include built-in firewall support.

Installation

Ethernet cables must be run from each computer to another computer or to the central device. It can be time-consuming and difficult to run cables under the floor or through walls, especially when computers sit in different rooms. Some newer homes are pre-wired with CAT5 cable, greatly simplifying the cabling process and minimizing unsightly cable runs. The correct cabling configuration for a wired LAN varies depending on the mix of devices, the type of Internet connection, and whether internal or external modems are used.

However, none of these options pose any more difficulty than, for example, wiring a home theater system. After hardware installation, the remaining steps in configuring either wired or wireless LANs do not differ much. Both rely on standard Internet Protocol and network operating system configuration options. Laptops and other portable devices often enjoy greater mobility in wireless home network installations (at least for as long as their batteries allow).

Cost

Ethernet cables, hubs and switches are very inexpensive.

Some connection sharing software packages, like ICS, are free; some cost a nominal fee. Broadband routers cost more, but these are optional components of a wired LAN, and their higher cost is offset by the benefit of easier installation and built-in security features.

Reliability

Ethernet cables, hubs and switches are extremely reliable, mainly because manufacturers have been continually improving Ethernet technology over several decades. Loose cables likely remain the single most common and annoying source of failure in a wired network.

When installing a wired LAN or moving any of the components later, be sure to carefully check the cable connections. Broadband routers have also suffered from some reliability problems in the past. Unlike other Ethernet gear, these products are relatively new, multi-function devices. Broadband routers have matured over the past several years and their reliability has improved greatly.

Performance

Wired LANs offer superior performance. Traditional Ethernet connections offer only 10 Mbps bandwidth, but 100 Mbps Fast Ethernet technology costs little more and is readily available.

Although 100 Mbps represents a theoretical maximum performance never really achieved in practice, Fast Ethernet should be sufficient for home file sharing, gaming, and high-speed Internet access for many years into the future. Wired LANs utilizing hubs can suffer performance slowdown if computers heavily utilize the network simultaneously. Use Ethernet switches instead of hubs to avoid this problem; a switch costs little more than a hub.

Security

For any wired LAN connected to the Internet, firewalls are the primary security consideration.

Wired Ethernet hubs and switches do not support firewalls. However, firewall software products like ZoneAlarm can be installed on the computers themselves. Broadband routers offer equivalent firewall capability built into the device, configurable through its own software.

About Wireless LANs

Popular WLAN technologies all follow one of the three main Wi-Fi communication standards. The benefits of wireless networking depend on the standard employed:
* 802.11b was the first standard to be widely used in WLANs.
* The 802.11a standard is faster but more expensive than 802.11b; 802.11a is more commonly found in business networks.
* The newest standard, 802.11g, attempts to combine the best of both 802.11a and 802.11b, though it too is a more expensive home networking option.

Installation

Wi-Fi networks can be configured in two different ways:
* “Ad hoc” mode allows wireless devices to communicate in peer-to-peer mode with each other.
* “Infrastructure” mode allows wireless devices to communicate with a central node that in turn can communicate with wired nodes on that LAN.

Most LANs require infrastructure mode to access the Internet, a local printer, or other wired services, whereas ad hoc mode supports only basic file sharing between wireless devices. Both Wi-Fi modes require wireless network adapters, sometimes called WLAN cards. Infrastructure mode WLANs additionally require a central device called the access point. The access point must be installed in a central location where wireless radio signals can reach it with minimal interference. Although Wi-Fi signals typically reach 100 feet (30 m) or more, obstructions like walls can greatly reduce their range.

Cost

Wireless gear costs somewhat more than the equivalent wired Ethernet products. At full retail prices, wireless adapters and access points may cost three or four times as much as Ethernet cable adapters and hubs/switches, respectively. 802.11b products have dropped in price considerably with the release of 802.11g, and bargain sales can be found if shoppers are persistent.

Reliability

Wireless LANs suffer a few more reliability problems than wired LANs, though perhaps not enough to be a significant concern. 802.11b and 802.11g wireless signals are subject to interference from other home appliances, including microwave ovens, cordless telephones, and garage door openers. With careful installation, the likelihood of interference can be minimized. Wireless networking products, particularly those that implement 802.11g, are comparatively new. As with any new technology, expect it will take time for these products to mature.

Performance

Wireless LANs using 802.11b support a maximum theoretical bandwidth of 11 Mbps, roughly the same as that of old, traditional Ethernet. 802.11a and 802.11g WLANs support 54 Mbps, approximately one-half the bandwidth of Fast Ethernet. Furthermore, Wi-Fi performance is distance sensitive, meaning that maximum performance will degrade on computers farther away from the access point or other communication endpoint. As more wireless devices utilize the WLAN more heavily, performance degrades even further. Overall, the performance of 802.11a and 802.11g is sufficient for home Internet connection sharing and file sharing, but generally not sufficient for home LAN gaming.
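To put these nominal bandwidth figures in perspective, a quick back-of-the-envelope calculation shows how long a large file transfer would take at each link speed. This is an illustrative sketch only: the speeds are the theoretical maxima quoted above, the file size is an assumed example, and real-world throughput (especially over Wi-Fi) is considerably lower.

```python
# Rough transfer-time comparison at the nominal link rates discussed above.
# Theoretical maxima only; protocol overhead and interference reduce real throughput.

def transfer_seconds(file_megabytes, link_mbps):
    """Seconds to move a file at a given nominal link rate (8 bits per byte)."""
    return file_megabytes * 8 / link_mbps

links = {
    "802.11b (11 Mbps)": 11,
    "802.11a/g (54 Mbps)": 54,
    "Fast Ethernet (100 Mbps)": 100,
    "Gigabit Ethernet (1000 Mbps)": 1000,
}

file_mb = 700  # assumed example: a CD-sized video file
for name, mbps in links.items():
    print(f"{name}: {transfer_seconds(file_mb, mbps):.0f} s")
```

Even under these ideal assumptions, the gap between 802.11b and Fast Ethernet is nearly an order of magnitude, which is why the wired option is recommended for moving large amounts of data.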

The greater mobility of wireless LANs helps offset the performance disadvantage. Mobile computers do not need to be tied to an Ethernet cable and can roam freely within the WLAN range. However, many home computers are larger desktop models, and even mobile computers must sometimes be tied to an electrical cord and outlet for power. This undermines the mobility advantage of WLANs in many homes. Security In theory, wireless LANs are less secure than wired LANs, because wireless communication signals travel through the air and can easily be intercepted.

To prove their point, some engineers have promoted the practice of wardriving, which involves traveling through a residential area with Wi-Fi equipment scanning the airwaves for unprotected WLANs. On balance, though, the weaknesses of wireless security are more theoretical than practical. WLANs protect their data through the Wired Equivalent Privacy (WEP) encryption standard, which makes wireless communications reasonably as safe as wired ones in homes. No computer network is completely secure, and homeowners should research this topic to ensure they are aware of and comfortable with the risks.

Important security considerations for homeowners tend not to be related to whether the network is wired or wireless but rather to ensuring that:
* the home’s Internet firewall is properly configured
* the family is familiar with the danger of Internet “spoof emails” and how to recognize them
* the family is familiar with the concept of “spyware” and how to avoid it
* babysitters, housekeepers and other visitors do not have unwanted access to the network

Wired vs Wireless
| | Wired | Wireless |
| Installation | moderate difficulty | easier, but beware interference |
| Cost | less | more |
| Reliability | high | reasonably high |
| Performance | very good | good |
| Security | reasonably good | reasonably good |
| Mobility | limited | outstanding |

There are two ways to connect a computer to a network: wired or wireless. Sometimes this will determine the kind of router you purchase, but fortunately today most offer both options. A wired connection requires that an Ethernet cable be run between the router and your computer.

In a wireless connection, you use hardware in your computer to communicate with the router without that cable. Both have advantages and disadvantages so to help you pick the right one for you, here are 5 things to consider when deciding on a network connection. 1. Ease of Set-Up Wired connections are easier to set up. With most modern computers you can simply plug in the cable and get on the Net. Wireless requires configuring the router and at least one extra step on the computer’s side: searching for the correct network to connect to.

If you live in an apartment building in the city and go to connect to your network, you’ll probably see a dozen or more different possibilities. 2. Reliability Everybody who has used both wired and cordless home telephones knows how much more likely the cordless varieties are to pick up interference and experience quality problems. The same can be true for wireless Internet. While hardware has improved over the years, other electrical devices can still potentially interfere with your Internet, in some cases causing disconnections and delays.

And like cordless phones, problems increase as you get farther away from the router. There are devices to fix such problems, but they can be costly and may require some trial and error. 3. Speed Wired is almost always faster than wireless, and never slower. This is due to the reliability issues mentioned above and to the technology itself, which simply hasn’t caught up to Ethernet-level quality. 4. Convenience Clearly wireless is more convenient on a day-to-day basis. Once it’s been set up, you can access the Internet from any computer in the vicinity of the router.

If you can run Ethernet cables throughout your house you can achieve a similar level of convenience while keeping the reliability and speed, but it’s a huge undertaking and may not even be possible if, for example, you rent an apartment. 5. Security This is arguably the most important of these points and the one too few give much thought. A wired network is fully contained. In order to connect to it, you must have physical access to the router. On the other hand, a wireless network is not contained. Your neighbors, people on the street, or those in the restaurant next door can all potentially find your network on their computers.

There are two reasons this should concern you. First, you don’t want people you don’t know using your Internet connection. It’ll be slower for you, and any questionable actions they take online will be traced back to you, not to them. Second, it’s not difficult for a hacker to intercept data sent through an unsecured network. All of the banking, purchasing, and communication you do online could potentially be maliciously saved to a computer. You can imagine the possibilities for identity theft, credit card fraud, and so on.


The Positive Part Social Networking Web Sites.

THE POSITIVE PART

Social networking Web sites are helping businesses advertise; thus social networking Web sites are benefiting businesses economically. Social networking Web sites are helping education by allowing teachers and coaches to post club meeting times, school projects, and even homework on these sites. Social networking Web sites are enabling advancements in science and medicine. Other positives include:
* Job hunting
* Staying in touch with friends
* Positive causes/awareness

THE NEGATIVE PART

The very nature of such sites encourages users to provide a certain amount of personal information. But when deciding how much information to reveal, people may not exercise the same amount of caution on a Web site as they would when meeting someone in person. This happens because:
* the Internet provides a sense of anonymity;
* the lack of physical interaction provides a false sense of security;
* they tailor the information for their friends to read, forgetting that others may see it.

Sharing too much information on social networking sites can be problematic in two ways: first, it can reveal something about you that you’d rather your current or future employer or school administrator not know, and second, it can put your personal safety at risk. Another potential downside of social networking sites is that they allow others to know a person’s contact information, interests, habits, and whereabouts.

Consequences of sharing this information can range from the relatively harmless but annoying, such as an increase in spam, to the potentially deadly, such as stalking. Another great issue of concern with social networking Web sites is that of child safety. Research has shown that almost three out of every four teenagers who use social networking Web sites are at risk due to their lack of online safety practices (Joly, 2007). A lot of the Web sites do have an age requirement, but it is easily bypassed by lying about one’s age. Even for those who don’t lie about their age, the average age requirement is around fifteen years old. Predators may target children, teens, and other unsuspecting persons online, sometimes posing as someone else, and then slowly “groom” them, forming relationships with them and eventually convincing them to meet in person.


Multiprotocol Label Switching Networks

IP networks were initially designed with network survivability in a decentralized setting as the central goal. Thus the Internet's infrastructure and protocols were intended from the very beginning for this purpose. As the Internet is evolving into a general-purpose communications network, the new realities require the development of new Internet infrastructure to support real-time-sensitive and multimedia applications such as voice over IP and video conference calls (Smith & Collins, 2001).

Back in the mid to late 1990s, when most routers were predominantly based on software forwarding rather than hardware forwarding, a number of vendors devised proprietary mechanisms to switch packets far more efficiently than was possible with forwarding based entirely on hop-by-hop longest match IP address lookups. Various aspects of these proprietary mechanisms were effectively merged and developed by the MPLS working groups at the IETF and produced what we know today as MPLS (Edwards, Syngress, McCullough, & Lawson, 2000).

MPLS is a key component of the new Internet infrastructure and represents a fundamental extension to the original IP-based Internet with changes to the existing infrastructure (Wang, 2002).

Multiprotocol Label Switching (MPLS)

MPLS introduces connection orientation and packet switching in IP networks. IP datagrams are forwarded by MPLS routers along pre-established paths, based on a short label. This reduces the amount of routing computation, which is carried out only when new paths are set up. MPLS also allows traffic engineering techniques that apply to connection-oriented networks to be applied to MPLS networks. One of these techniques is dynamic routing.

Another important application for MPLS networks is the configuration of Virtual Private Networks (VPNs) over a public IP network. The benefit of MPLS for this application is that private IP addresses, which may be not unique, are separated from the world-wide valid public IP addresses used in the public IP network. The separation of addresses is realized by building MPLS tunnels through the public IP network. The MPLS protocol can also be run on ATM networks and frame relay networks. This simplifies the interworking between these networks and IP networks (Smith & Collins, 2001).

MPLS connections are well suited to the fast-forwarding (also called switching) of any type of network layer protocol (not just IP), hence the word multiprotocol in the name. It will be widely used for two main types of application:

First, it adds controllability to IP networks. As already noted, an IP network is much like a “free-for-all” highway without traffic control, to use the analogy of a highway system. All the traffic can be crammed onto the highway at once, and each router along the way tries its best to get the traffic through without any guarantee of succeeding. MPLS marks ‘lanes’ with labels for the IP highway, and each packet flow has to follow a predefined lane or path. Once the ‘lanes’ are marked, a set of traffic parameters can be associated with each lane to guarantee service delivery. It reduces randomness and adds controllability to the IP network (Edwards et al., 2000).

Second, MPLS adds switching capability to the routing-based IP network. The traditional Internet structure has every router along the way examine the destination address inside a packet and determine the next hop. In a switched network, each switch routes the traffic from the input port to a predetermined output port without examining the contents of each packet. This is also called route once and switch many times, since the packet contents are examined only at the entry of the MPLS network to determine a proper ‘lane’ for the packet. The benefits of this change include speedup of network traffic and network scalability (Smith & Collins, 2001).
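The "route once and switch many" behaviour can be sketched in a few lines of Python. This is an illustrative sketch only: the router names, labels, ports, and table entries below are invented for the example, and real label switch routers implement these tables in hardware rather than as dictionaries.

```python
# Minimal sketch of "route once, switch many": the ingress router classifies a
# packet once to choose a label; every core router then forwards by exact
# label lookup and swap, never re-examining the IP header.
# All names, labels, and table contents here are hypothetical.

# Per-router label forwarding table: in_label -> (out_port, out_label)
lfib = {
    "R2": {17: ("port3", 22)},
    "R3": {22: ("port1", None)},  # egress: pops the label
}

def ingress_classify(dst_ip):
    """Route once: map the destination to a label (the 'lane')."""
    # A real ingress router matches dst_ip against each forwarding
    # equivalence class (FEC) prefix; here a single FEC is assumed.
    fec_table = {"10.0.0.0/8": 17}
    return fec_table["10.0.0.0/8"]

def forward(router, label):
    """Switch many: exact-match lookup, swap the label, pick the output port."""
    out_port, out_label = lfib[router][label]
    return out_port, out_label

label = ingress_classify("10.1.2.3")  # the only IP-level decision on the path
for router in ("R2", "R3"):
    port, label = forward(router, label)
    print(router, "->", port, "new label:", label)
```

The packet's destination address is consulted only at the ingress; the core routers R2 and R3 act purely on the label, which is the controllability and speed benefit the paragraph above describes.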

Summary and Conclusion

Label switching is something that has attracted significant interest from the Internet community, and significant effort has been made to define a protocol called Multiprotocol Label Switching (MPLS).

MPLS involves the attachment of a short label to a packet in front of the IP header. This effectively is like inserting a new layer between the IP layer and the underlying link layer of the OSI model. The label contains all the information that a router needs to forward a packet. The value of a label may be used to look up the next hop in the path and forward to the next router. The difference between this and standard IP routing is that the match is an exact one and is not a case of looking for the longest match (that is, the match with the longest subnet mask). This enables faster routing decisions within routers (Wang, 2002).
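The contrast between the two lookups can be illustrated with a short sketch. The prefixes, labels, and next-hop names below are hypothetical, and real routers use specialised data structures (tries, TCAMs) rather than Python dictionaries; the point is only the difference between scanning for the longest matching prefix and a single exact-match index.

```python
# Standard IP forwarding (longest-prefix match over subnet masks) versus
# MPLS forwarding (exact match on a fixed-length label).
# Tables and addresses are made up for illustration.
import ipaddress

ip_routes = {
    ipaddress.ip_network("10.0.0.0/8"): "next-hop-A",
    ipaddress.ip_network("10.1.0.0/16"): "next-hop-B",  # more specific prefix
}

def ip_lookup(dst):
    """Longest-prefix match: find all matching prefixes, keep the longest."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in ip_routes if addr in net]
    return ip_routes[max(matches, key=lambda net: net.prefixlen)]

label_routes = {17: "next-hop-A", 22: "next-hop-B"}

def label_lookup(label):
    """Exact match: a single constant-time table index, no mask comparison."""
    return label_routes[label]

print(ip_lookup("10.1.2.3"))   # both prefixes match; the /16 wins
print(label_lookup(22))        # one direct lookup
```

An address such as 10.1.2.3 matches both prefixes, so the IP lookup must compare mask lengths before choosing, whereas the label lookup resolves in a single step; this is the speedup the paragraph above attributes to MPLS.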

Growth rates for Internet Protocol (IP) traffic and users continue to be very remarkable. What was once a technology used principally within academia and for leisure is now being used around the world for mainstream commercial applications, such as e-commerce and Web-based business, as service providers around the world concentrate on optimization and cost efficiency in developing their carrier systems (Edwards et al., 2000).

In many ways, MPLS is as much of a traffic engineering protocol as it is a Quality of Service (QoS) protocol. It is somewhat analogous to the establishment of virtual circuits in ATM and can lead to similar QoS benefits. It helps to provide QoS by helping to better manage traffic. Whether it should be called a traffic engineering protocol or a QoS protocol hardly matters if the end result is better QoS (Wang, 2002).


Edwards, M. J., Syngress, R. F., McCullough, A., & Lawson, W. (2000). Building Cisco Remote Access Networks. Rockland, MA: Syngress.

Smith, C., & Collins, D. (2001). 3G Wireless Networks. New York: McGraw-Hill Professional.

Wang, H. H. (2002). Packet Broadband Network Handbook. New York: McGraw-Hill Professional.


Cisco Networking 1 Chapter 6.1.2 Ws

IT Essentials: PC Hardware and Software v4.1 Chapter 6 Worksheet/Student 6.1.2 Worksheet: Research Laptops, Smartphones, and PDAs. Print and complete this worksheet. In this worksheet, you will use the Internet, a newspaper, or a local store to gather information, and then enter the specifications for a laptop, smartphone, and PDA onto this worksheet. What type of equipment do you want? What features are important to you? For example, you may want a laptop that has an 80 GB hard drive and plays DVDs or has built-in wireless capability. You may need a smartphone with Internet access or a PDA that takes pictures.

Shop around, and in the table below list the features and cost for a laptop, smartphone, and PDA.

Laptop Computer: MacBook Pro – $2199.00
Features:
* 2.6GHz quad-core Intel Core i7
* Turbo Boost up to 3.6GHz
* 8GB 1600MHz memory
* 512GB flash storage
* Intel HD Graphics 4000
* NVIDIA GeForce GT 650M with 1GB of GDDR5 memory
* Built-in battery (7 hours)

Smartphone: Galaxy S III – $549.99
Features:
* 2100 mAh lithium-ion battery
* Dimensions 5.4” x 2.8” x 0.3”
* 4.8” (1280x720) HD Super AMOLED touchscreen
* 4.7 oz
* 1.5 GHz dual-core processor
* 16GB or 32GB ROM / 2GB RAM; supports up to 64GB MicroSD card
* Bluetooth profiles: A2DP, AVRCP, GAVDP, HFP 1.5, PBAP, HSP, HID, GOEP, SDAP/SDP, OPP, SPP, PAN, Stereo Streaming, MAP, AVDTP, OBEX (CR)
* Android Market
* 4G LTE Internet
* GPS navigation
* Wi-Fi
* 8.0 megapixel camera

PDA: HP iPAQ 110 Classic PDA – $323.70
* Manufacturer: Hewlett-Packard (www.hp.com); part number FA980AA
* Product line: iPAQ; product series 100; product model 110
* Battery: lithium-ion (Li-Ion) 1200 mAh standard battery
* Display: 3.5” QVGA transflective TFT touchscreen, 240 x 320
* Ports: 1 x USB 2.0 mini-USB; 1 x 4-pin mini-phone headphone
* Memory: 64 MB SDRAM standard; 256 MB flash memory
* Network: IEEE 802.11b/g Wi-Fi; Bluetooth 2.0
* Weight: 3.68 oz (approximate); dimensions 4.6” x 2.7” x 0.5”
* Processor: Marvell PXA310 624 MHz
* Operating system: Windows Mobile 6 Classic
* Limited warranty: 1 year
* Package contents: iPAQ 110 Classic PDA, mini-USB synchronization/charge cable, documentation, companion CD-ROM, standard battery, AC adapter, power cord, slip case, stylus


Social Network and Dangerous New Form

Instagram can be a dangerous new form of social networking. Smart phone users now have an option to download an app called Instagram. Although it is the new, trendy thing to do, it can be an issue. The basics of Instagram are to post only pictures. Users can put a small bio about themselves, but it is nothing like the other social networks where users will post all sorts of pointless information. When a picture is posted, users can put a caption for it and the caption is often followed by things called ‘hashtags’.

Examples of these include #pretty, #somuchfun, #beach, or whatever pertains to the photo. In the search section, users can search for words or phrases that have been hashtagged. There are choices to “follow” other Instagram users, but the main issue is that if a user doesn’t want to be followed by somebody, they have no choice. On other social networks, there is an option to accept or decline followers, but on Instagram there is not. Instagram is generally used by people between the ages of 14 and 25, which makes it a lot worse that there is no way to keep away potentially dangerous users.

When somebody searches for a hashtag, every use of that hashtag by every Instagram user pops up. There is no need to be following a person to look and “like” their pictures. Although there is an option to set your Instagram profile as private, only a minority of people actually do it. I have and use my Instagram every day. I love it. I think it is great to be able to only post pictures and to only be able to see pictures that others have posted.

My main stream of pictures doesn’t get all crowded up with people posting pointless statuses about their lives and annoying political references. I do have negative thoughts about it, though. I hate it when strangers like or comment on pictures that I post. I am being followed by people I have never met and know nothing about, and I cannot do anything about it. I am careful to only post pictures that don’t show my location or include captions about where I live. I would hate to see horrendous things happen to users of Instagram because of ignorant mistakes like that.


Formal Speech( Social Networks and Cyber Crime)

S P E E C H #03 Thank you, Chair. As we know, in today’s world networks and communication have become a new dimension of our society. Almost everyone in our world is aware of the internet and the social networking sites on it. Nowadays any rupture in the internet can collapse world economies and administrations. A large part of our population is rather prone to cyber crime; yes, the lovable internet is also a place for easy piracy, fraud, hacking and other criminal activities.

Cyber-threats are without doubt a new security challenge. Like most countries, Finland is increasingly dependent on a secure and functioning cyber-space and therefore increasingly vulnerable to unexpected and rapidly-emerging cyber-attacks. That is why we aim to become a global forerunner in cyber-security. While this will be the first such national strategy of its kind, the overall approach builds on decades of co-operation and co-ordination in crisis preparation and management.

The guidelines for the new cyber-strategy were laid down in 2010 in the government’s broader Security Strategy for Society and the European Union’s Convention to counter cyber threats. At the moment, however, responsibility for cyber-security remains scattered between many different organisations and stakeholders, reflecting their specialist areas of expertise. This has slowed the creation of common objectives, with key decision-makers acting in relative isolation. Procedures and responsibilities during a nation-wide cyber-crisis have also yet to be defined with sufficient clarity.

One of the main tasks of the current process, therefore, is to assess the need for a new authority to co-ordinate the strategy at a political level, as well as organising responsibilities at the operational level. Many of the risks of cyber-attacks are shared between the governments and the private sectors. And since most of the critical infrastructure is owned by the private sectors, the job of identifying and managing cyber-risks must be done in partnership. The forthcoming strategy will respond to all of these challenges by comprehensively analysing cyber-threats and deciding on the best way forward.


The Bad Side of Social Network

The bad side of social networks Social networking has lately become very popular in society. Because of this, all users want to be aware of what other people are posting. Social networking can be a bad influence for many people because windows sometimes appear that you don’t want to see. Social networking has changed the way people interact. In many ways, it has led to positive changes in the way people communicate and share information; however, it has a bad side as well. Social networking can sometimes result in negative outcomes, some with long-term consequences.

It’s a waste of time because you don’t take advantage of your free time on pages like games or Facebook, MySpace, Hi5, etc., while you could be reading a book or cleaning your room or whatever. You are on display to all people; for example, on Facebook you upload a photo of the place you are and everybody sees where you are. Many social networking sites regularly make changes that require you to update your settings in order to maintain your privacy, and frequently it is difficult to discover how to enable the settings for your appropriate level of privacy.


Because of this, many users do not realize how much private information they are allowing to become public by not re-evaluating settings every time the network makes a change. Tagging can also serve as an invasion of privacy. When social networking sites have a “tagging” option, unless you disable it, friends or acquaintances may be able to tag you in posts or photographs that reveal sensitive data. In another way, it can be good to have Facebook or another social network, but just for fun and to reconnect with old friends, like the friend from primary school whom you never saw again.

But most of the time social networks are bad because they are a waste of time, can cause addiction, and may cause a lot of problems. In conclusion, while social networking has clearly demonstrable negative impacts, it is most likely here to stay. Deciding whether you or your children will use social networking is an individual choice. By using it responsibly and encouraging your children to do the same, you can harness the benefits of social networking while avoiding the drawbacks.