Computer Networks Essay




Internet and layered protocol architecture: Q1. (5 points) In the layered protocol architecture, the transport layer functionality includes congestion control and error recovery (e.g., retransmission). One suggested that this functionality must be done strictly at the end points (i.e., on the hosts) without aid from the network. Do you agree? Why?

Elaborate, showing the design trade-offs. Answer: (5 points) Generally, error recovery (e.g., re-transmission) can be specific to application requirements. Some applications require 100% packet recovery, even at the cost of delay and jitter (such as TCP-based applications: HTTP, FTP and Telnet traffic).

Other applications can be tolerant of loss but less tolerant of delay and jitter, such as voice applications. Re-transmissions and packet recovery increase jitter and delay and hence may not be desired for real-time or voice applications. Therefore it is not a good idea, in general, to implement error recovery at the network layer (which is unaware of application needs); it is better to implement such functionality at the transport layer, end-to-end.

In cases of lossy channels in the network (such as X.25 in the early networking days, or wireless links) it may be desirable to reduce the high error rates on those links by including error recovery between the end points of those links. [In general, most links nowadays have very low BER, and for wireless links the MAC layer (such as IEEE 802.11) provides Ack'ed delivery]. For congestion control, a similar argument may be given. That is, the congestion response may be application specific and is better performed end-to-end.

Congestion notification, on the other hand, may provide useful information to the end points so they can react appropriately. As losses inside the network can be due to congestion or other factors, a signal from the network to the end point may help distinguish congestion errors from other errors. Only congestion errors should trigger back-off or rate reduction at the end points. So, network assistance in congestion notification may help in some scenarios. [extra: In other scenarios network assistance prevents synchronization effects of congestion control, e.g., RED, or may prevent/isolate misbehavior, e.g., WFQ.]. Q2. (5 points) What advantage does a circuit-switched network have over a packet-switched network?

How can it achieve such an advantage? Answer: A circuit-switched network can guarantee a certain end-to-end bandwidth for the duration of a call. Most packet-switched networks today (including the Internet) cannot make any end-to-end guarantees for bandwidth.

Circuit-switched networks use admission control and reserve a circuit (in TDM this is done in the form of a time slot assigned per source that no other source can use). The allocated resources are never exceeded. Q3. (10 points) What are the advantages and disadvantages of having a layered protocol architecture for the Internet? (mention at least 3 advantages and 2 disadvantages) Is it true that a change in any of the layers does not affect the other layers? (support the answer/arguments with examples) Answer: Advantages: Provides an explicit structure to identify relationships between the different pieces of the complex Internet architecture, offering a reference model for discussion.

Provides a modular design that facilitates maintenance and updating/upgrading of protocols and implementations (by various vendors) at the various layers of the stack. Supports a flexible framework for future developments and inventions (such as mobile or sensor networks). Disadvantages: overhead of headers; redundancy of functions (sometimes not needed) [such as reliability at both the transport layer and the link layer, or routing at both the network layer and some link layer protocols (such as ATM)]. It is true in many cases that a change in one layer does not affect the other layers, but not always.

Examples of a change that did not affect the other layers: the change from FDDI, to token ring, to Ethernet at the MAC layer. Examples of a change that affected other layers: wireless vs. wired (performance of TCP and routing degraded drastically). The introduction of 802.11 for wireless and ad hoc networks (a change in the physical and MAC layers) does affect, in a major way, routing at the network layer and the transport layer. In that case, many of the protocols needed re-design.

Q4. (10 total points) Design parameters: In order to evaluate performance of the Internet protocols a researcher needs to model some parameters, such as the number of nodes in the network, in addition to many other parameters. a. Discuss 4 different key parameters you might need to model in order to evaluate the performance of Internet protocols. [Elaborate on the definition of these parameters and their dynamics] b. Discuss 2 more parameters for mobile wireless networks [these two parameters are generally not needed for the wired Internet] Answer: a. Traffic model, temporal and spatial (packet arrival processes, session/flow arrival processes, spatial distribution of traffic (src-dst pair distribution over the topology)); topology/connectivity model; node failure model; membership dynamics (for multicast), spatio-temporal models. [Any reasonable 4 parameters are ok, with 1.5 points per parameter] b. For mobile wireless networks there is a need to model mobility (spatio-temporal), and wireless channel dynamics/loss/bandwidth, since these change with time far more drastically than in the wired Internet (in which virtually the max bandwidth of a channel/link is static) [Any 2 reasonable parameters are ok, with 2 points per parameter] 2. Statistical multiplexing and queuing theory. Note: You may want to use the following equations, where Ts is service time and ρ is link utilization. M/D/1 queuing delay: Tq = Ts.ρ/(2(1-ρ)); M/D/1 average queue length or buffer occupancy: Q = ρ²/(2(1-ρ)); M/M/1 queuing delay: Tq = Ts/(1-ρ); M/M/1 buffer occupancy: Q = ρ/(1-ρ). Q5. (8 points) Consider two queuing systems, serving packets whose lengths have an exponential distribution, where the packet arrival process is Poisson. The first queuing system (system I) has a single queue and a single server; the packet arrival rate is X, and the server speed is Y.
The second queuing system (system II) has two queues and two servers; hence the packet arrival rate to each queue is X/2, and each server's speed is Y/2.

Derive a relation between the delays in each of these systems. What conclusion can you make? Answer: (8 points) We use the M/M/1 queue (because the question states Poisson arrivals and exponentially distributed service time). For the first system (I): Tq = Ts/(1-ρ) = 1/(Y(1-X/Y)). For the second system (II): Tq = 2/(Y(1-X/Y)) = 2.Tq (of system I). That is, using one queuing system performs better than using two queues, each with half of the arrival rate and half of the output link capacity. Q6. (5 points) In an Internet experiment it was noted that the queuing performance in the switches/routers was worse than expected.
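The Q5 relation can be checked numerically. This is a minimal sketch assuming the M/M/1 delay Tq = Ts/(1-ρ) and hypothetical rates X = 60 and Y = 100 packets per second:

```python
def mm1_delay(arrival_rate, service_rate):
    """Total M/M/1 delay Tq = Ts / (1 - rho), with Ts = 1/service_rate."""
    rho = arrival_rate / service_rate
    assert rho < 1, "queue would be unstable"
    return (1.0 / service_rate) / (1.0 - rho)

# Hypothetical rates: X = 60 packets/s arriving, server speed Y = 100 packets/s.
X, Y = 60.0, 100.0
system_1 = mm1_delay(X, Y)          # one queue at full rate and capacity
system_2 = mm1_delay(X / 2, Y / 2)  # one of the two half-rate, half-speed queues
print(system_2 / system_1)          # -> 2.0: system II is twice as slow
```

The ratio is 2 for any stable choice of X and Y, because halving the server speed doubles Ts while ρ = X/Y stays the same.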

One designer suggested increasing the buffer size in the routers drastically, to withstand any possible burst of data. Argue for or against this suggestion, and justify your position. A6. Increasing the buffer size allows switches to store more packets (which may reduce loss).

However, it will not alleviate the congestion. If this was the only cure proposed, then we expect the queues to grow, increasing the buffer occupancy and increasing the delays. If the overload persists (due to lack of congestion control, for example) the queues shall incur losses and long delays. Delays may lead re-transmission timers to expire (for reliable protocols, such as TCP), leading to re-transmissions.
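The build-up argument can be illustrated with a minimal discrete-time sketch (the rates are hypothetical): when arrivals persistently exceed service capacity, a bigger buffer only means a longer backlog and a longer wait for each new arrival.

```python
# Discrete-time sketch: 12 packets arrive per tick, but only 10 can be served
# per tick. A larger buffer simply lets the backlog (and the waiting time of a
# newly arriving packet) keep growing.
arrivals_per_tick, service_per_tick = 12, 10
backlog = 0
for tick in range(100):
    backlog = max(0, backlog + arrivals_per_tick - service_per_tick)

print(backlog)                     # -> 200 packets queued after 100 ticks
print(backlog / service_per_tick)  # -> 20.0 ticks of queuing delay for a new arrival
```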

Also, the TTL value in the header of each packet is decremented based on time (and hop count). Therefore, many of the TTLs may expire, leading to the discard of packets. So, in general, merely increasing the buffer sizes does not improve the queuing performance. Q7. (7 points) Describe the network design trade-off introduced by using statistical multiplexing, and define and describe a metric that captures this trade-off. A7. (7 points: 3.5 for the link between stat muxing and congestion and 3.5 for the trade-off metric (network power) and its description).

Statistical multiplexing allows the network to admit flows with aggregate capacity exceeding the network capacity (even if momentarily). This leads to the need for buffering and the 'store and forward' model. Consequently, queuing delays and queue build-up can be experienced as the load on the network is increased. Two major design goals of the network are to provide maximum throughput (or goodput) with least (or min) delay.

However, these two goals are conflicting. As we increase the throughput, the congestion increases and so does the delay. In order to reduce the queuing delays we should reduce the load on the network, and hence the goodput of the flows would decrease.

This is the throughput-delay trade-off in network design. One metric that captures the two measures is the network power = Tput/Delay: as the Tput increases, so does the network power, and when the delay decreases the network power increases. Q8. (8 points) Flows in the Internet differ widely in their characteristics. Someone suggested that, in order to be fair to the various heterogeneous flows, we need the different flows to experience the same delay at the different queues. Argue for or against this suggestion.
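The network power metric from A7 can be explored with a short sketch. Assuming an M/M/1 link of capacity mu, where delay = 1/(mu - lam), power = Tput/Delay = lam.(mu - lam), which captures the trade-off: pushing the load toward capacity raises throughput but the delay grows faster, so power peaks at an intermediate load (here, half the capacity).

```python
# Sweep the offered load on a hypothetical M/M/1 link of capacity mu = 100
# packets/s and compute network power = throughput / delay.
mu = 100.0
best_load, best_power = None, -1.0
for lam in [10.0 * i for i in range(1, 10)]:  # offered loads 10 .. 90 packets/s
    delay = 1.0 / (mu - lam)                  # M/M/1 total delay
    power = lam / delay                       # = lam * (mu - lam)
    if power > best_power:
        best_load, best_power = lam, power

print(best_load)  # -> 50.0: network power peaks at half the link capacity
```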

A8. (8 points: 4 points for the ratio and the link to the fluid flow model, 4 points for the unfairness/greed description) In order to provide the same delay for the different flows we should keep the rate/capacity ratio constant (this is based on the fluid flow model we introduced in class). Hence, if the different flows arrive at different rates, then the capacity allocation should reflect such variance. The allocation leading to equal delays would favor (i.e., allocate more capacity to) flows with higher rates at the expense of flows with low rates.

This strategy promotes greed in the network and cannot achieve fairness: the existence of high-rate (large) flows in the network would adversely affect low-rate (small) flows by increasing the overall delay experienced by all the flows. Q9. (12 total points) Consider a network that uses statistical multiplexing. The network has 'N' ON/OFF sources, each sending at a rate of R packets per second when ON. All of the sources are multiplexed through a single output link. The capacity of the output link is 'M'.

A. (3 points) What is the condition on N, R and M in order to support this network? When the number of sources to be supported is increased from N to 10N, there were two suggestions to change the network: Suggestion I is to replicate the above system 10 times. That is, create 10 links, each with capacity 'M' handling 'N' sources.

Suggestion II is to replace the link with another link of capacity '10M'. B. (9 points) Which suggestion do you support and why? [Argue giving expressions for the delay/buffer performance of each system. Give both the pros and cons of each case] Answer: A. (3 points) The condition for a stable network is N.R.ρ < M, where ρ is the fraction of the time the sources are ON (on average); the peak rate N.R may exceed M thanks to statistical multiplexing. If N.R.ρ > M, then this leads to continuous build-up of the queue with no chance of recovering from congestion (and draining the queue), which leads to an unstable system.
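The part-A stability condition can be sketched as a quick check (all numbers hypothetical):

```python
def stable(n_sources, rate, on_fraction, capacity):
    """Average aggregate load N*R*rho must stay below the link capacity M."""
    return n_sources * rate * on_fraction < capacity

# Hypothetical numbers: 100 sources at 10 packets/s, each ON 30% of the time.
print(stable(100, 10.0, 0.3, 500.0))  # -> True  (average load 300 < M = 500)
print(stable(100, 10.0, 0.3, 250.0))  # -> False (average load 300 > M = 250)
```

Note that in the first case the peak rate N.R = 1000 exceeds M = 500, yet the system is stable on average; that is the gain from statistical multiplexing.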

B. (9 points) Using the queuing equations from the note above (M/D/1 queuing delay, with Ts the service time and ρ the link utilization; M/D/1 average queue size or buffer occupancy; M/M/1 queuing delay and buffer occupancy), the buffer occupancy is determined by ρ only. The utilization is ρ = ρ_ON.N.R/M for each link of suggestion I, and ρ_ON.10N.R/(10M), i.e., the same value, for suggestion II. If ρ is the same (i.e., the load on the queue server is the same) then the buffer occupancy is the same. Increasing the bandwidth of the link to 10M thus gives the same average buffer occupancy in the two systems.
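A minimal sketch of the part-B comparison, assuming M/M/1 delays (the exam note also allows M/D/1; the conclusion is the same) and hypothetical values for N, R, ρ_ON and M:

```python
def utilization(n, r, on_frac, capacity):
    """Link utilization rho = average aggregate load / capacity."""
    return n * r * on_frac / capacity

def mm1_delay(load, capacity):
    """M/M/1 total delay Tq = Ts / (1 - rho), with Ts = 1/capacity."""
    rho = load / capacity
    return (1.0 / capacity) / (1.0 - rho)

# Hypothetical numbers: N = 50 sources, R = 4 packets/s, ON half the time, M = 200.
N, R, on_frac, M = 50, 4.0, 0.5, 200.0
rho_1 = utilization(N, R, on_frac, M)            # one of the 10 'M' links
rho_2 = utilization(10 * N, R, on_frac, 10 * M)  # the single '10M' link
print(rho_1 == rho_2)  # -> True: same utilization, same average buffer occupancy

d1 = mm1_delay(N * R * on_frac, M)
d2 = mm1_delay(10 * N * R * on_frac, 10 * M)
print(round(d1 / d2))  # -> 10: the shared 10M link cuts the delay by a factor of 10
```

The delay drops by 10 because Ts shrinks by 10 while ρ, and hence the factor 1/(1-ρ), is unchanged.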

In system I we would need 10 times the total buffer size as in system II, hence system II is beneficial in that sense (more sharing and statistical multiplexing). In addition, the queuing delay will be decreased significantly (by a factor of 10), since Tq = Ts.f(ρ) and Ts is 10 times smaller. (6 points for the above argument) (3 points) However, the standard deviation/fluctuation around the average queue size will be higher, since the queue is shared by a larger number of flows, and hence the jitter will be relatively higher. 3. Application layer and related issues. Q10. (5 points) (Stateful vs. Stateless) Discuss one advantage and one disadvantage of using a 'stateful' protocol for applications.

Advantage: The protocol can now maintain state about (i.e., remember) user choices (e.g., shopping preferences, as in browser cookies). Disadvantage: when a failure occurs the state needs to be reconciled (more complexity and overhead than stateless) [other correct and reasonable answers are accepted]. Q11. (5 points) (Web Caching) Describe how Web caching may reduce the delay in getting a requested object. Does Web caching reduce the delay for all objects requested by a user or for only some of the objects? Why?

Ans. Web caching can bring the desired content closer to the user, possibly to the same LAN to which the user's host is connected. Web caching may reduce the delay for all objects, even objects that are not cached, since caching reduces the traffic on links.
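As a rough sketch of the first effect (the hit ratio and delays below are hypothetical; a nearby LAN proxy cache is assumed):

```python
# Hypothetical numbers: hit ratio 0.4, 10 ms to a LAN cache, 150 ms to origin.
hit_ratio, d_cache, d_origin = 0.4, 10.0, 150.0

avg_no_cache = d_origin
avg_with_cache = hit_ratio * d_cache + (1 - hit_ratio) * d_origin
print(avg_with_cache)  # -> 94.0 ms average, down from 150 ms
```

This simple average only captures the benefit for cache hits; the second effect described above (misses also getting faster because the access link carries less traffic, so its queuing delay drops) comes on top of it.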

Q12. (10 points) Discuss three different architectures of peer-to-peer applications. Give examples of actual applications for each architecture and discuss the advantages and disadvantages of each architecture. Ans. 1. Centralized directory of resources/files, as in Napster.

The advantage is that search for resources is simple, with minimum overhead (just ask the central server). The disadvantages are: single point of failure, performance bottleneck, and target of lawsuit. 2. Fully distributed, non-centralized architecture, as in Gnutella, in which all peers and edges form a 'flat' overlay (without hierarchy). Advantages: resilience to failure, no performance bottleneck and no target for lawsuit. The disadvantage is that search is more involved and incurs high overhead with query flooding. 3.

Hierarchical overlay, with some nodes acting as super nodes (or cluster heads), or nodes forming loose communities (sometimes called a loose structure, as in BitTorrent). Advantages: robust (no single point of failure), avoids flooding to search for resources during queries. Disadvantages: needs to keep track of at least some nodes using the 'Tracker' server. In general, this architecture attempts to combine the best of the two other architectures. Q13. (7.5 points) Push vs. Pull: A. Give examples of a push protocol and a pull protocol. B. Mention three factors one should consider when designing pull/push protocols; discuss how these factors might affect your decision as a protocol designer (give example scenarios to illustrate).

Answer: A. An example of a push protocol is SMTP (the sender pushes mail toward the receiver). An example of a pull protocol is HTTP (the client pulls objects from the server). B. The factors affecting the performance of a pull/push protocol include (but are not limited to): 1. access pattern: how often is this object cached and how frequently is it accessed (example: a push mechanism for a very popular video, pushed closer to a large population that is going to watch it frequently, would be much better than a pull mechanism), 2. delay: what is the delay to obtain the object, and 3. object dynamics: how often/soon does the data in the object expire (example: in a sensor network where the sensed information is constantly changing but is queried only every now and then, it would be better 'not' to push it, but to pull it only when needed).

Q14. (7.5 points) We refer to the problem of getting users to learn about one another, whether it is peers in a p2p network or senders and receivers in a multicast group, as the rendezvous problem. What are possible solutions to the rendezvous problem in p2p networks? (discuss three different solutions and compare/contrast them). Answer: The possible solutions for the rendezvous problem include: 1. Using a central server: advantages: simple to search, little communication overhead.

Disadvantages: single point of failure (not robust), bottleneck, doesn't scale well. 2. Using a search technique for discovery, perhaps using a variant of a flood (or scoped flood) or an expanding-ring search mechanism. Advantages: avoids single points of failure and bottlenecks. Disadvantages: may be complex, incurs high communication overhead and may incur delays during the search. 3. Hybrid (or hierarchy): where some information (e.g., hints to potential bootstrap neighbors, or hints to some resources) is stored at a centralized (or replicated) server or super-nodes, and then the actual communication is peer-to-peer.

Advantage: if designed carefully it can avoid single points of failure and bottlenecks, and achieve reasonable overhead and delay. Disadvantage: the hierarchy must be built and maintained (which can trigger expensive re-configuration control overhead in the case of highly dynamic networks and unstable super-nodes).
