Friday, November 29, 2019

Bisman's Social Work Values: The Moral Core of the Profession

Hypothesis: In modern industrialized countries, knowledge acquired for professional purposes has displaced the core values and mission of social work. The belief that society has a moral obligation to meet the social needs of poor people has weakened considerably.

Goal: To establish the reasons behind the shift in priorities from social objectives to professional social work.

Recommendations: Professionalizing social work should be accompanied by a will to improve the conditions of poor people in society. The need to eradicate extreme poverty should not be individualized or treated as philanthropy; instead, it should be regarded as a call for social reform, and the reforms must involve people of all classes of society. Social workers need to spend more time analyzing moral ambiguities. Social benefits should be distributed evenly across society while skills and knowledge are improved through scientific means. In her article, Bisman says, "if we wish to improve the conditions of the poor we must adopt scientific measures" (Bisman, 2004, p. 113). This will solve some of the social problems. Clear objectives ought to be developed to guide social science professionals in maintaining basic moral values. This can be done through both individual and collective moral responsibilities and basic values, and other professionals in law and philosophy should be involved in building the moral base.

Consequences: The consequences may include men and women losing their human dignity. Professionalizing social work increases the reliance on expertise and applied formal knowledge in the service of social interests. It may also lead to a loss of moral values through the development of skills and knowledge without regard for the resulting moral degradation. "Those who are financially able may buy their way out of collective responsibility, paying less into the pool of public funds yet benefiting more from public services" (Bisman, 2004, p. 117). This may lead to social unrest.

Monday, November 25, 2019

To play fairly means that we must always abide by the rules of the game, as written in the rule book

Fair play is "the adherence to criteria of fairness implied by the idea of mutual quest for excellence" (Keating, J., Sportsmanship as a Moral Category, Ethics, Vol. 75 (1964), pp. 25-35). This report is intended to critically discuss the title statement, with the objective of persuading the reader that a competitive game can be played fairly even when the rules are not strictly abided by. The following report contains sufficient academic referencing to underpin the arguments, which are clearly stated. Athletes in modern times are professionals, "paid for one's skill" (English Dictionary, Geddes et al., 1998, p. 170). This means that, like any other job, they are paid to complete a task: in football it is to score more goals, in cricket the aim is to score more runs, and in rugby a higher points score is the objective. In modern times sport has moved away from the inherited value that it is not winning or losing but how you played the game, because in a professional environment this is clearly a total fallacy. Using the same standards, it could be argued that one may say of a surgeon, "it matters not whether the patient lives or dies but only how he makes the cut" (Forest Evashevski, Sports Illustrated, 1957, p. 119). The rules, which aim to standardise games, are often open to interpretation. For instance, in cricket the umpire must decide in a matter of seconds whether a ball travelling at 100 miles per hour is definitely going to hit the stumps, depending upon where the ball pitches, its deviation and its distance from the stumps; this rule is often not applied correctly and the incorrect decision is given frequently. These decisions can determine whether a team that is clearly superior to the opposition loses the game. It would not be appropriate to class the winning team as a team that played unfairly to win, because the result was due to poor umpiring decisions. It would also be totally unprofessional for a batsman to walk from the crease if he had hit the ball and been caught by the wicket keeper. The player is entitled to wait for the umpire's decision, and invariably it is a very tough decision, as the deviation of the ball from the bat is minimal. This was illustrated recently in the Ashes test series between Australia and England, when Michael Vaughan did not walk when Justin Langer caught him out. During an average F.A. Premier League football match the referee will make numerous incorrect decisions, for example over deliberate and accidental handball. Is the team that has been awarded a penalty acting unfairly by accepting it, even if they agree that the handball was accidental? A competitive game is governed by sets of rules which cannot be broken, i.e. if the whole ball crosses the goal line into the net a goal is awarded. This is a rule which is hard to referee; human error is part of the game yet not stated in the rule book. If a team were to score a legitimate goal which the referee rules out, with video replays later showing the ball clearly crossing the line, is the winning team instructed to replay the match because of an unfair advantage? Any situation which is officiated by humans will incur inconsistent ruling as a result of human error.

Players will always take advantage of this, as seen in the quarter finals of the World Cup. Diego Maradona scored the winning goal, but replays show that the ball was deliberately handled in projecting it over the England goalkeeper. Argentina went on to win the World Cup and will be remembered as the champions of '86. The only nation to remember the incident is England. Ole Gunnar Solskjaer (Manchester United) was chasing Robert Lee (Newcastle United) in the last minute of a game which Manchester United were leading by a single goal. Realising that he was not going to get the ball fairly, Solskjaer committed a deliberate foul. Newcastle were awarded a free kick and Solskjaer was sent off. In breaking the rules Manchester United won the game. Yet in the same game many other fouls were committed by both teams, so the same rule had been broken a number of times, upholding the theory that if everyone is breaking the rules then the game is fair; winning is the sole purpose of a football match. That breaking rules is always unfair cannot be argued convincingly, as sometimes no rules are broken and a team is still deemed not to have acted in a fair manner, as in Sheffield United vs Arsenal in the F.A. Cup. In this instance the ball was played over the touchline for an Arsenal player to receive treatment. The unspoken rule of sporting behaviour states that a player throws the ball back to the opposition. Arsenal disregarded this, and from the resulting throw Arsenal scored. Even though no rule was broken, Arsenal were still seen to be acting unfairly; therefore fairness cannot solely be associated with rules. Australia have the best facilities for coaching cricketers; cricket is well funded at junior level and the coaching aids are more advanced than in most other cricketing countries. This, together with a favourable climate, coalesces to ensure the Australians are superior to any other cricketing nation. It could be argued that this gives them an unfair advantage over other cricketing nations who are less affluent and therefore unable to supply the same level of facilities. It would be unfeasible ever to suggest that a game between two teams in a competitive situation was perfectly fair. This suggests that games begin unfairly, so why should rules not be broken to even the contest? In 1978 F.I.F.A., the international football governing body, introduced an award known as the Fair Play award. This was intended to encourage teams to play the game in a manner which displays respect, sportsmanship and adherence to the rules. In the seven tournaments which have occurred since, only twice has the Fair Play award been won by the winning team. This statistic supports the old adage that nice guys finish last. The World Cup is a tournament played by professional sportsmen whose aim is to win the tournament. The question a coach must ask before the tournament begins is one of priority. Which award brings a nation more respect, winning the World Cup or achieving second place but winning the F.I.F.A. Fair Play Award? The question is rhetorical. The theory can be supported by asking anyone who won the Fair Play award at the previous World Cup; chances are the answer will be "don't know". In contrast, when asked which nation are the current World Champions, most people are likely to answer Brazil. Therefore it is tolerated that the best teams do not always abide by the rules.

Conclusion: The fundamental motive that drives professional athletes to participate in activities outside the laws of the game is money. The second most influential factor associated with foul play is that high-level competitive sport is intensely competitive and the standard is much higher; this drives athletes to discover ways in which they can gain an advantage.

Thursday, November 21, 2019

Developing an interactive timeline Literature review

Developing an interactive timeline - Literature review Example: Arthur Conan Doyle collection that would help in attracting more visitors to their website and thereby increase the profitability of the organization. The youth and adults of the present generation prefer to stay away from books and academic materials, as involvement in various social networking activities has become common among them (Palla & et. al., 2013). It proves difficult to attract these sections of the population to reading books, which is why it has become essential for such organizations to implement an effective interactive timeline on their website that would help attract them and therefore keep them on the website for longer. Interactive media in the timeline may take the form of attractive text, graphics, video, animation or even audio that would attract people to visit that particular website and spend more time on it (Grigoreanu & et. al., 2009). It is a common trend among all people that they always prefer something that is entertaining and attractive in nature, rather than static content on the website that creates a dull interface. According to Liu & et. al. (2002), it is inherently necessary for organizations and website designers to develop an interactive timeline that allows people of every generation, irrespective of their age, to access the particular website without any difficulty. There are various people with impairments such as colour blindness, as well as older people, for whom the letters and font size should be kept clear and large.

Wednesday, November 20, 2019

Separation of Church and State Essay

Separation of Church and State - Essay Example: This strategy is applied by those who are as eager to separate church and state as by those who seek to integrate them more tightly. One of the other primary issues raised in this debate is the rather practical one of whether or not church and state are really separated at all. It is suggested that the notions of political liberalism, democracy, and the founding principles of modern states are based implicitly on moral codes and mores derived from religious institutions. Thus, religion and government are not separable a priori. The second type of argument given in this vein holds that the increase in the number and percentage of religious practices which exist here in the United States mandates a level of management, if not express establishment, from Federal, State and local governments. The number of individuals who claim a religious affiliation that is neither Christian, Jewish, nor non-affiliated has risen from 7% to 20% in the past 30 years (Walker 1). While it might be the case that such diversity is to be lauded, the legal intricacies that must be navigated to ensure that these various religious practices have the "free exercise" guaranteed to them by the Constitution, while simultaneously maintaining supposed "neutrality" on the relative merits of any individual religion (or non-religion for that matter), have become fraught with inconsistencies and difficulties. In this paper I will briefly highlight and discuss some of these difficulties, ideological and practical, philosophical and historical, that have made this issue such an integral part of the national debate for decades. Thomas Jefferson, a founding father and the author of the Statute of Virginia for Religious Freedom, was indeed so partial to this document that its drafting, along with his drafting of the Declaration of Independence and the founding of the University of Virginia, were the only three accomplishments he wished to have listed on his epitaph (Owen 496). The document itself is divided into three sections; the first section lays out the incoherence and troubles that compulsory adherence to, or support of, a religion would create. While Jefferson and other founding fathers were perhaps committed to disestablishment and free exercise, very few of them were "neutral" on the topic of religion altogether. Even from the text of this legal statute, religiosity, if not explicitly religion, is evident in the nature and language of the text, as can be seen from the beginning of the statute: "Whereas Almighty God hath created the mind free; that all attempts to influence it by temporal punishments or burthens [sic], or by civil incapacitations, tend only to beget habits of hypocrisy and meanness, and are a departure from the plan of the Holy author of our religion" (Nancy 13). Thomas Jefferson was undeniably "a believer," with all of the connotations and implications that that phrase implies. Thus, when we consider what modern or contemporary concepts are part and parcel of the phrase "separation of church and state", our language today differs in a much more secular direction than Jefferson's "wall" might initially have entailed. Another formative document that reveals the early history and potential mindset of some of the founding framers' view of the Church and its role in the state derives from an early Treaty signed

Monday, November 18, 2019

Climate Change in the Context of Kuehne + Nagel Inc Coursework

Climate Change in the Context of Kuehne + Nagel Inc - Coursework Example: From this paper it is clear that the trends in climatic change are worsening with the increase in the occurrence of unpredictable extreme events. Hence, the activities of Kuehne + Nagel Inc. are severely affected by this negative change. Alongside the adverse impacts, trails of opportunity exist to market the firm on the strength of its stability, and this increases the client base. With the practical implementation of the recommendations, Kuehne + Nagel Inc. will overcome the inevitable catastrophes presented by the weather conditions. This essay discusses how the current climate trends depict a long-term increasing inclination of the average air temperature. Precipitation is also in a dynamic pattern; however, it varies in a complicated manner. Climatologists predict that the trends will significantly pick up the pace in the future. A severely damaging concern caused by the elevation of temperatures is the continually rising level of the sea. Since the year 1860, the level has increased by 0.2 meters, as affirmed by satellite information. Scientists project that the temperature increase by the end of the 21st century will range between 1.0 and 3.7 degrees Celsius. Additionally, the alteration in climatic conditions may result in changes in the duration, intensity, frequency, timing and spatial coverage of climate and weather extremes. These in turn can modify future climatic situations.

Saturday, November 16, 2019

E Commerce And The Importance Of Encryption Computer Science Essay

Web commerce has grown into one of the fastest-growing areas of industry in the past two years. Billions of dollars have passed hands in the process and each entrepreneur wants a slice of the dough. To make this possible, data encryption plays a very central role in assuring customers that paying for anything online is secure. E-commerce relies on encryption to secure data transmission by controlling data access and protecting information on the internet, and in the end it improves consumer confidence. Encryption is the encoding of data using an algorithm such that it is incomprehensible to anyone in the event that the data transmission is intercepted, unless the key needed to decrypt the data is known. By implementing encryption, integrity is maintained while digital authentication is enforced, thus allowing both customers and sellers to verify the identity of the other party, a concept fundamental to secure online credit card transactions. The reliability of an e-commerce website may be negatively impacted if theft of customer information occurs, an especially serious risk since 90% of all online payments are made by credit card.

4. Importance of Encryption

Cryptography is a method of mathematical encoding used to transform messages into an unreadable format in an effort to maintain the confidentiality of data. Cryptography comprises a family of technologies that include the following: encryption transforms data into some unreadable form to ensure privacy; decryption is the reverse of encryption, transforming encrypted data back into its original, intelligible form; authentication identifies an entity such as an individual, a machine on the network or an organization; digital signatures bind a document to the possessor of a particular key and are the digital equivalent of paper signatures; signature verification is the inverse of a digital signature, verifying that a particular signature is valid.

Application

In order to enable secure online transactions, data encryption performs four important functions. Digital authentication allows both the customer and the merchant to be sure that they are dealing with whom the other party claims to be. This is absolutely necessary before sending credit card details to the seller, and it also allows sellers to verify that the customer is the real owner of the credit card being used. Integrity ensures that the messages received are not changed during transmission by any third party. Non-repudiation prevents customers or merchants from denying that they ever received or sent a particular message or order. In the event that information is intercepted, encryption ensures privacy, preventing third parties from reading or using the information to their own advantage. Two methods of encrypting network traffic on the web are SSL and S-HTTP. Secure Sockets Layer (SSL) and its successor Transport Layer Security (TLS) enable client and server computers to manage encryption and decryption activities as they communicate with each other during a secure web session. Secure Hypertext Transfer Protocol (S-HTTP) is another protocol used for encrypting data flowing over the internet, but it is limited to individual messages, whereas SSL and TLS are designed to establish a secure connection between two computers. The capability to generate secure sessions is built into Internet client browser software and servers, and occurs automatically with little user intervention.
The client and the server negotiate what key and what level of security to use. Once a secure session is established between the client and the server, all messages in that session are encrypted. There are two alternative methods of encryption: symmetric key encryption and public key encryption. In symmetric key encryption, the sender and the receiver establish a secure Internet session by creating a single encryption key and sending it to the receiver, so both the sender and the receiver share the same key. The strength of the encryption key is measured by its bit length. Today a typical key will be 128 bits long (a string of 128 binary digits). The problem with all symmetric encryption schemes is that the key itself must be shared somehow among the senders and receivers, which exposes the key to outsiders who might be able to intercept and decrypt it. A more secure form of encryption, called public key encryption, uses two keys: one shared (or public) and one totally private, as shown in Figure. The keys are mathematically related so that data encrypted with one key can be decrypted using only the other key. To send and receive messages, communicators first create separate pairs of private and public keys. The public key is kept in a directory and the private key must be kept secret. The sender encrypts a message with the recipient's public key. On receiving the message, the recipient uses his or her private key to decrypt it. Digital signatures and digital certificates further help with authentication.
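To make the public-key idea concrete, consider a deliberately tiny worked example of RSA-style arithmetic; the numbers here are illustrative only, and real systems use keys of hundreds of digits together with padding schemes. Choose the primes p = 3 and q = 11, so n = p x q = 33 and (p - 1)(q - 1) = 20. Pick a public exponent e = 3, which shares no factors with 20, and a private exponent d = 7, since e x d = 21 leaves a remainder of 1 when divided by 20. The public key is (3, 33) and the private key is (7, 33). To encrypt the message m = 4, anyone holding the public key computes c = 4^3 mod 33 = 64 mod 33 = 31; only the holder of the private key can reverse this, since 31^7 mod 33 = 4 recovers the original message. Used in the opposite direction, with the private key applied first and the public key used to check the result, the same key pair underlies the digital signatures mentioned above.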
Benefits/Advantages

Most corporations implement multiple forms of security by using hardware solutions such as routers and firewalls. These devices protect essential data by keeping external threats out of the network. Unfortunately, attackers will employ numerous attacks specifically targeted at your information. When attackers find a way past your first line of defense, data encryption steps up and helps to ensure that your secrets can't be viewed. Encryption has changed significantly over the years, going from a military solution to widespread public use. Whether it is hardware- or software-based, this method is fast, easy to use and, most importantly, secure. Here are some of the key benefits this solution offers. Power: the best data encryption is based on global standards and able to mitigate potential corruption without flaw. Many solutions are large enough to ensure that an entire organization is in full compliance with security policies. Data encryption allows a corporation to achieve military-level security with easy and affordable solutions. Flexibility: data encryption can protect your sensitive information whether it is stored on a desktop or laptop computer, a PDA, removable storage media, an email server or even the corporate network. This allows you to securely access important data from the office, on the road or at home. If the device is lost or stolen, the information will be protected by the data encryption mechanism. Transparency: it would not be a good idea to employ any security measure that negatively impacts your business. An efficient data encryption solution enables your business to flow at a normal pace, silently securing crucial data in the background. Some of the best options are those running effectively without the user even being aware. There are many benefits to data encryption, as this solution provides solid protection in the event of a security breach. Not only does it offer peace of mind, it also frees up resources normally used by your perimeter defenses. Every security measure you set in place is important yet inefficient if the confidential data itself is not protected.

Limitations

Encryption is often oversold as the solution to all security problems or to threats that it does not address. Unfortunately, encryption offers no such protection. Encryption does nothing to protect against many common methods of attack, including those that exploit bad default settings or vulnerabilities in network protocols or software, even encryption software. In general, methods other than encryption are needed to keep out intruders. Secure Computing Corporation's Sidewinder system defuses the forty-two bombs (security vulnerabilities) in Cheswick and Bellovin's book, Firewalls and Network Security (Addison Wesley, 1994), without making use of any encryption.

Conclusion

Wednesday, November 13, 2019

A Clean Well-Lighted Place

A Clean Well-Lighted Place

Today in class we talked about plot in relation to "A & P" by John Updike. I had always thought of plot as just being the sequence of events, but after our reading assignment I realize that there is much more to it. I'd never thought of looking for plot in things like patterns. My reaction to "A & P" is mixed because I disagree with the main character being a hero (as Updike intended). While reading the story I thought that the girls who came into the store were merely looking for attention. I feel this way because the girls were prancing around in their bathing suits, which was probably a big deal in 1961, and the fact that 'Queenie' kept her money between her breasts shows that she was obviously trying to provoke a reaction. Other than the fact that one of the girls blushed when asked to leave, I don't think they were embarrassed, and I don't think the main character was trying to be particularly heroic. I gathered from all the sexual description that he was only interested in the girls physically. I also think that he just hated his job at the A & P because it was boring, since he always refers to the customers as a type of farm animal, and was just looking for an excuse to quit. What better excuse to quit than one that might make him look good to some cute girls? He would be through with his boring job and might score a date. We also talked about point of view in relation to "Why I Live at the P.O." by Eudora Welty today. I've never read anything where I really didn't trust the narrator like in this story. I thought the story was confusing because I could never figure out who was telling the truth. Sister seemed to have a very slanted view on things and thought that everyone was 'out to get her'. Since the story was told from her (an unreliable narrator) point of view, it gave me a feeling of turmoil like I have when I experience an argument in my own home. In that way Welty achieved her goal of making the reader feel involved in the story. I guess that Welty explained why Sister lives at the P.O., but I don't understand why she thought anyone would care.

Monday, November 11, 2019

George W. Bush and Darth Vader Essay

More often than not, when comparing two characters, whether one is real and the other fictional, we come up with both similarities and differences between them. Seldom, however, do these similarities and differences between a fictional character and a person from the real world become so glaring that the comparison consequently provides a substantial realization. One such seldom-seen case is the comparison of George W. Bush and Darth Vader. This paper will venture to compare both characters and at the end provide a realization from that comparison. The two personalities mentioned need no introduction; hearing these names will immediately give us an image of them in our heads. George W. Bush is the current president of the United States. Darth Vader, on the other hand, is the popular character in the Star Wars movies. At first it may seem that a comparison between these two personalities is absurd. This paper, however, will show that such a comparison is not without realization. It is submitted that on rare occasions seemingly trivial things, such as a comparison of Bush and Vader, can in fact provide us with a realization better than any other concept we have. As an outline, this paper will proceed by first stressing the similarities of both characters, then their differences, and eventually state the realizations made through the conclusion.

Their Similarities

Leaders. The most obvious similarity between Vader and Bush is that they are both leaders, each in his own respect: Bush leads a nation and Vader leads an army. More worth noting in this similarity is not only the fact that both are leaders but the fact that both lead in the same way. Vader and Bush exercise an attitude of strength in leadership: they do not think of casualties; they have only the end in mind. Take the war in Iraq started by George W. Bush: he was fearless in proceeding with the war even though he knew what it would cost. His popularity ratings went down and the economy of his nation went down with them. Whenever one goes to war, lives are inevitably put on the line. Bush was not hesitant in putting young soldiers' lives on the line in order to achieve his goal and win his war in Iraq. The same ideology is shared by Darth Vader. Darth Vader always has a goal or a mission to accomplish. The ruthless use of his army to achieve his directives is clearly gleaned from his personality. In this type of leading we find the similarity between George W. Bush and Darth Vader. They both seem to share the ideology that making sacrifices in order to achieve a goal is necessary. They also have utter disregard for the cost and the consequences as long as they achieve this goal; this is their way of leading. This is the kind of leading we see in the Star Wars movies from Darth Vader and the kind of leading we see in America from George W. Bush. It is therefore submitted that because they have the same ideology of leadership, the results of their leadership are hypothetically the same. The end of Darth Vader has already been seen: this type of leadership resulted in failure. The present administration of Bush has not yet ended, but the seeming similarity of his leadership ideology with Darth Vader's allows a reasonable conclusion that the administration of Bush will end in failure.

The Saga Journal, as proof of Vader's leadership, provides: "George Lucas has succeeded in creating one of the greatest cautionary tales for the aspiring leader in his portrayal of Darth Vader's devastating reign of terror. Darth Vader embodies traits that make most contemporary leadership scholars cringe." (Cited in: Michelle Drum, The Saga Journal)

Perceived as Villains. Another similarity between these two personalities is that many of us perceive them as villains, each in his own right; the majority of people see both George W. Bush and Darth Vader as villains. Darth Vader, very much like George W. Bush, was not always perceived as a villain. They were both first considered heroes. George W. Bush could not have become president if the people did not think of him as a hero; he won the election two times. This is proof that before George W. Bush was perceived as a villain he was considered a hero. The same is true for Darth Vader. Darth Vader was not immediately Darth Vader before his transition to that character. He was the young and promising Anakin Skywalker; his views were moral and he had a good sense of justice. This, however, changed because of the circumstances shown in the movie. This only means that the famous quote in the Batman movie might sometimes be true: we see ourselves as heroes long enough to become the villain. This is the similar circumstance of both Bush and Vader. One of the articles of the National News conveyed the perception of Bush as a villain when it said: "According to an Associated Press-AOL News poll, President Bush is both the number one villain and the number one hero of 2006." (Cited in: Two Sided Coin for Bush: Villain and Hero by Cathy Gill)

Powerful. The final similarity this paper will provide from all the similarities of these two characters is their power. There is no doubt that both George W. Bush and Darth Vader are very powerful in every sense of the word, George W. Bush being the President of the United States of America and Darth Vader being the leader of the imperial army. Vader is perceived as powerful in an article in USA Today, which said, "Not only is Vader powerful, he's sexy, says David Prowse, who appears as Vader in the first three films and has made thousands of appearances as Vader in costume." (Cited in: Breathing Life into Vader by Mike Snider)

Their Differences

Their Rise to Power. Though it is granted that both George Bush and Darth Vader are powerful, they differ in the manner of their rise to power. Darth Vader used pure brute force in order to become the leader of the imperial army; he had to turn to the dark side in order to achieve this goal. George Bush, on the other hand, rose to power through the mandate of the American people. Bush rose to power because of his will and the will of the people. Darth Vader, on the other hand, had to seek and conquer this power by himself. This difference in their rise to power gives us an insight into how they held this power. Darth Vader could hold his power until he wished to abandon it. Bush, on the other hand, is bound by the limits of the mandate given to him; he can only hold the power of chief executive for the period of time given to him. This gives us an insight into why Bush tries his best to stay in power while Vader needlessly and calmly enjoys his power: Vader's power is not bound by any limits.

Vader Quick on His Feet, Bush Not So Much. Darth Vader, even as a youngster known as Anakin Skywalker, has always been quick on his feet. His reflexes in his time were comparable to no one's. Bush, on the other hand, has admitted that he might not be as quick on his feet as most people. This is reflected in the way Darth Vader and President Bush make decisions. Darth Vader is ruthless and quick in making decisions. President Bush apparently needs a considerable amount of thinking time before he can make a decision. This spells a very different effect on their leadership. Darth Vader, being quick on his feet, can easily command his subordinates, and they follow him without question. The delay in decision making on the part of President Bush spells a different story, because his subordinates may not follow him right away and might even question his decisions.

Bush Democracy, Vader Dictatorship. The most important difference between these two personalities, however, lies in the fact that Bush leads through a democratic structure while Vader, in every sense of the word, is a dictator. This brings us to the question of which kind of structure is more effective: the democratic structure or the dictatorship? Dictatorship brings obedience, order and unquestionable authority. Democracy, however, gives freedom to every individual. It is submitted that both structures have their advantages and disadvantages. It is further submitted, however, that ruling in a democratic structure, as is the task of President Bush, is much more difficult than ruling in a dictatorship like Darth Vader's. In an article in the LA Times, President Bush advocated this democracy, as the article said: "President Bush made good Thursday on his inaugural vow to push for democracy around the world." (Cited in: Bush Democracy Vows May Take Time to Bear Fruit by Sonni Efron)

Conclusion. The similarities and the differences between the two personalities having been discussed, we now proceed to the realizations this paper has to offer. George Bush and Darth Vader are two very different personalities; in fact, one lives in the real world while the other exists in a fictional movie. They are both leaders, they are both powerful and they are both perceived by many as villains. On the other hand, they differ in their rise to power, their quickness in decisions and the structure in which they lead. The most important thing we have to realize from this comparison is the simple fact that Darth Vader's story has already been told while George Bush's story is still unravelling. This means that we can learn from what happened to Darth Vader and apply it to the unravelling of the story of George Bush. This will give us a reasonable conclusion as to how George Bush's story will end. The relevance of knowing how George Bush's story will end is that if we are aware of that end, we can prepare for the end we foresee.

Works Cited
• Michelle Drum, The Saga Journal
• Two Sided Coin for Bush: Villain and Hero by Cathy Gill, December 29, 2006
• Breathing Life into Vader by Mike Snider, April 22, 2005
• Bush Democracy Vows May Take Time to Bear Fruit by Sonni Efron, February 25, 2005

Saturday, November 9, 2019

Examine how modern parallel computers are subject to multiple instructions with multiple data types

Introduction

Parallel computing is known to be the act of concurrently using several computational resources, such as CPUs, to resolve IT problems (Knowledge Base, 2010; Reschke, 2004). These problems are broken into separate entities/instructions to be executed and solved simultaneously by multiple CPUs (Barney, 2010). Modern parallel computers are subject to multiple instructions with multiple data types. They engage in the act of decomposing the domain as a manner of dividing the workload. "Master nodes implicitly synchronize the computation and communication among processes and high level languages are used" (Karniadakis et al, 2003, p.61). However, modern parallel computer architecture is becoming increasingly complicated, and users are considering a transformation from general-purpose CPUs to more specialist processors of a heterogeneous architectural nature (Brito Alves et al, 2009). In this piece of writing, I will critically engage with graphics processing units (GPUs), which are highly efficient at the manipulation of computer graphics and are used to process significant amounts of data simultaneously. They play their roles so effectively that they are even more efficient than all-purpose CPUs for solving suitable algorithmic problems. GPUs give a whole new meaning to parallel computing today, due to their dedicated functions. I will focus on this case through critical analysis, argumentation and engagement, to give a comprehensive understanding of modern parallel computing.

Graphics Processing Units

A Graphics Processing Unit is a multi-core processor that was introduced to the community of scientific computing on the 31st of August 1999 (Brito Alves et al, 2009; Nvidia Corporation, 2011). The 'processor' is the very basic component of processing, which executes instructions aimed at various devices. A grouping of processors is known as a 'multiprocessor' (Paolini, 2009). An individual GPU distinctly contains hundreds of these core processors, giving systems access to several cores at the same time (Brito Alves et al, 2009). Modern GPUs have evolved from machines which simply rendered graphics into immensely parallel all-purpose processors. "Recently, they exceeded 1 TeraFLOPS, outrunning the computational power of commodity CPUs by two orders of magnitude" (Diamantaras et al, 2010, p.83). GPUs nowadays support concurrent floating-point computations across their shaders and programming pipelines; therefore the functionality of GPUs has become more broadly applicable than it was before (Offerman, 2010). On a normal processor, it is the control flow which gains the position of prominence; this denotes how the algorithms process data and variables. A modern GPU, on the other hand, provides a stream processing model which helps execute the concurrent calculations traditionally utilized in "High Performance Computing (HPC), industrial, finance, engineering programs and in High performance technical computing (HPTC)" (Offerman, 2010, p.32-33). GPU manufacturers, however, had in fact failed to detect the opportunity until consumers began to exploit those new capabilities.
The manufacturers began to extend their existing product lines with GPGPU (general-purpose computing on graphics processing units) solutions only after HPC programs were run on game consoles and graphics adapters (Offerman, 2010). For instance, according to Offerman (2010), "some users deployed a stack of PlayStation 3 systems to do their parallel calculations. Today, IBM offers the Cell processor that is specifically designed for this game console as a parallel computing blade" (p.33). The processing models of GPGPUs, also known as general-purpose GPUs, are massively parallel, but they rely heavily on "off-chip video memory" (Halfhill, 2008, p.3) in order to operate on big sets of data. Distinct threads need to interact with one another via this off-chip memory. As the frequency of memory accesses increases, performance tends to become limited (Halfhill, 2008). Those who manufactured graphics processors were slow at adopting the GPGPU trend, judging by the sales made on high-end systems for HPC and HPTC (Offerman, 2010). Offerman (2010) states, "The double precision floating-point operations have been introduced over the last years, but performance in that area is still lacking. The same goes for access to memory." (p.34). Regardless of these disadvantages, nVidia and ATI presently provide product lines which are targeted at GPGPU programmes (Offerman, 2010). For ATI the product portfolio consists of Stream products, whereas for nVidia it consists of Tesla cards "based on their GeForce 8 GPUs" (Offerman, 2010, p.34), which can be programmed by making use of the Compute Unified Device Architecture (CUDA), discussed further on in the essay. Computing on a GPGPU revolves around significant data structures and matrices "where super-fast and in parallel relatively small computations are performed on the individual elements" (Offerman, 2010, p.33). This is the reason why a graphics processor has an even greater local memory compared to a traditional CPU, and it makes a GPU particularly suitable for significantly parallel applications today (Offerman, 2010). A GPU uses the important 'Single Program, Multiple Data' (SPMD) architecture for the purpose of specialising in intensely parallel calculations. A heavy share of the transistors is devoted to data processing instead of to caching data (Alerstam et al, 2008). GPUs today are highly data-parallel processors which are utilized to give substantially high "floating point arithmetic throughput" (Alerstam et al, 2008, p. 060504-1) for problems meant to be resolved using the SPMD model. "On a GPU, the SPMD model works by launching thousands of threads running the same program called the kernel working on different data. The ability of the GPU to rapidly switch between threads in combination with the high number of threads ensures the hardware is busy at all times" (Alerstam et al, 2008, p. 060504-1). This capability efficiently conceals memory latency, and the performance of GPUs is further improved when combined with the multiple levels of high-bandwidth memory available in the latest GPUs (Alerstam et al, 2008). "Nvidia revolutionized the GPGPU and accelerated the computing world in 2006-2007 by introducing its new massively parallel "CUDA" architecture. The CUDA architecture consists of 100s of processor cores that operate together to crunch through the data set in the application" (Nvidia Corporation, 2011, p.1).
The CUDA GPU programming framework from Nvidia enables the development of parallel applications through an extension of C, which is known as "C for CUDA" (Diamantaras et al, 2010). Nvidia's CUDA is a software platform for a massive degree of high-performance parallel computing on the firm's powerful GPUs (Halfhill, 2008). CUDA is a model which has the ability to scale parallel programming. "The CUDA programming model has the SPMD software style, in which a programmer writes a program for one thread that is instanced and executed by many threads in parallel on the multiple processors of the GPU" (Patterson et al, 2009, p.A-5). The CUDA model regards graphics devices as discrete co-processors to the CPU. CUDA programs, as mentioned before, "are based on the C programming language with certain extensions to utilize the parallelism of the GPU. These extensions also provide very fast implementations of standard mathematical functions such as trigonometric functions, floating point divisions, logarithms, etc." (Alerstam et al, 2008, p.060504-2). Kernel functions, which are basically C functions carried out by N parallel threads, initiate the calculations on GPUs. Semantically, the threads are organized into one-, two- or three-dimensional sets of up to 512 threads, known as 'blocks'. Every block is scheduled to run independently of the others on a multiprocessor. These blocks are executed simultaneously or sequentially, in any order, based on the resources of the system. However, this scalable notion comes at the cost of limiting communication amongst the threads (Diamantaras et al, 2010). For multiple threads to run simultaneously, a type of architecture known as 'Single Instruction, Multiple Threads' (SIMT) is employed by the multiprocessors. The SIMT units in the multiprocessors schedule and carry out sets of 32 parallel threads in regular succession. Efficiency is maximised if the identical instruction path is executed by every thread. Memory accesses from multiple threads can be joined together into one memory transaction, given that successive threads obtain data from the very same segment of memory (Diamantaras et al, 2010). Diamantaras, Duch and Iliadis (2010) argue further, "Following such specific access patterns can dramatically improve the memory utilization and is essential for optimizing the performance of an application." (p.83). Even so, the CUDA framework is in fact meant for applications with a high ratio of arithmetic operations to memory accesses (Diamantaras et al, 2010). The CUDA model, like the GPGPU model, is massively parallel. However, it separates the sets of data into relatively small compact chunks which are stored in on-chip memories; several thread processors are then allowed to share every chunk. The local storage of data cuts down the necessity of accessing off-chip memory, which improves performance. From time to time threads do need to access off-chip memory, for instance to load into local memory the off-chip data that is needed. Off-chip memory accesses in CUDA normally do not stall the thread processors; rather, the delayed threads go into an inactive queue and are substituted by another thread which is available for execution. As soon as the delayed threads' data becomes obtainable, those threads enter other queues which signal that they are ready to go. Bands of threads alternately execute in this round-robin style, making sure that every thread gets executed on time without stalling the other threads (Halfhill, 2008).
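As a minimal sketch of the execution model just described (the kernel and variable names here are hypothetical and the launch configuration is chosen only for illustration), a "C for CUDA" program marks a kernel with __global__, lets each thread compute its global index from its block and thread coordinates, and launches enough blocks of threads to cover the data:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// SPMD-style kernel: every thread executes this same function on a different element.
__global__ void scaleKernel(const float *in, float *out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global index of this thread
    if (i < n)                                      // guard: the last block may overshoot
        out[i] = in[i] * factor;
}

int main()
{
    const int n = 1 << 20;                          // about a million elements
    const size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;                            // buffers in global device memory
    cudaMalloc((void **)&d_in, bytes);
    cudaMalloc((void **)&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch one thread per element: blocks of 256 threads, enough blocks to cover n.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scaleKernel<<<blocks, threadsPerBlock>>>(d_in, d_out, 2.0f, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[10] = %f\n", h_out[10]);            // expect 20.0

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
```

Because successive threads here touch successive array elements, the accesses from each set of 32 threads fall into the same memory segment and can be coalesced into a single transaction, which is exactly the access pattern recommended above.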
A prominent characteristic of the modern CUDA model is that programmers do not write explicitly threaded code; the hardware thread manager handles the threading almost automatically (Halfhill, 2008). "Automatic thread management is vital when multithreading scales to thousands of threads - as with Nvidia's GeForce 8 GPUs, which can manage as many as 12,288 concurrent threads" (Halfhill, 2008, p.3). Even though the threads are lightweight, meaning that every thread runs on a relatively small piece of data, these threads are in fact fully developed: all threads have their own stacks, register files, program counters and local memories. Every thread processor has 1,024 registers, each 32 bits wide, implemented in static random access memory (SRAM) rather than latches. The GPU preserves the threads which are not active and reactivates them once they become active again. As Halfhill (2008) states, "Instructions from multiple threads can share a thread processor's instruction pipeline at the same time, and the processors can switch their attention among these threads in a single clock cycle. All this run-time thread management is transparent to the programmer" (p.3). By taking away the load of managing the threads in an explicit manner, Nvidia simplifies the programming model and removes an entire class of likely bugs. Theoretically, the CUDA model eliminates the possibility of deadlocks amongst the threads, a deadlock being a blockage among many threads contending for data which prevents any of them from proceeding, so that no single thread may be allowed to continue. The risk of deadlocks is that they can lie in wait, undetected, in well-behaved code for decades (Halfhill, 2008). CUDA can eliminate such deadlocks regardless of the number of threads running. An application programming interface (API) function named 'syncthreads' supplies explicit barrier synchronization. "Calling syncthreads at the outer level of a section of code invokes a compiler-intrinsic function that translates into a single instruction for the GPU" (Halfhill, 2008, p.4). This instruction prevents threads from operating on data which other threads are still making use of.
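A brief sketch of how this barrier is typically used follows; it is illustrative only, the kernel name and tile size are assumptions, and the kernel must be launched with a block size equal to TILE. Threads in a block stage data in on-chip shared memory, call __syncthreads() so that no thread reads a slot before its owner has written it, and then cooperatively reduce the tile to a single partial sum:

```cuda
#define TILE 256   // threads per block; the kernel assumes blockDim.x == TILE

// Each block sums one tile of the input and writes a single partial sum.
__global__ void tileSum(const float *in, float *blockSums, int n)
{
    __shared__ float tile[TILE];                 // on-chip memory shared by the whole block

    int i = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;  // stage one element per thread

    __syncthreads();                             // barrier: all writes visible to all threads

    // Tree reduction inside the block, with a barrier after every halving step.
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        blockSums[blockIdx.x] = tile[0];         // one partial result per block
}
```

Note that every thread in the block reaches each __syncthreads() call; only the addition is guarded by the condition, not the barrier itself, and this discipline is what keeps the block free of the deadlocks discussed above.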
Where graphics processing intersects with parallel computing there has emerged a modern paradigm for graphics called 'visual computing'. It replaces broad segments of the "traditional sequential hardware graphics pipeline model" (Patterson et al, 2009, p.A-4) with programmable geometry, pixel and vertex components. Patterson and Hennessy (2009) argue, "Visual computing in a modern GPU combines graphics processing and parallel computing in novel ways that permit new graphic algorithms to be implemented, and open the doors to entirely new parallel processing applications on pervasive high-performance GPUs." (Patterson et al, 2009, p.A-4). Even though GPUs are considered to be the most parallel and the most potent processors in an average computer, they are arguably not the only processors in existence. CPUs have become multicore and in the near future will become manycore. They are considered to be the primary sequential processor companions and tend to complement the hugely parallel manycore GPUs. These two types of processors together would constitute heterogeneous multiprocessors (Patterson et al, 2009). GPUs have evolved into scalable parallel processors. Modern GPUs have developed, function-wise, from Video Graphics Array (VGA) controllers of constrained capabilities into program-centric parallel processors. The evolution has continued by changing API-based graphics pipelines into integrated program-centric components and by devising more programmable and less specialised hardware pipeline stages. In the end, it made sense to unify the distinct programmable pipelines into a merged array of many programmable processors. In the "GeForce 8-series generation of GPUs" (Patterson et al, 2009, p.A-5), pixel, vertex and geometry processing operate on the very same processor type. This unification enables impressive scalability: the entire system is enlarged with more processor cores, as processing functions can make use of the whole processor array (Patterson et al, 2009). At the other end of the spectrum, the processor array can now be made with fewer processors, since every function can be run on the very same processors (Patterson et al, 2009, p.A5). However, a lesson that can be learnt from GPUs and graphics processing software is that an API need not disclose concurrency to programmers in a direct manner (Asanovic et al, 2006). "OpenGL, for instance, allows the programmer to describe a set of "vertex shader" operations in Cg (a specialized language for describing such operations) that are applied to every polygon in the scene without having to consider how many hardware fragment processors or vertex processors are available in the hardware implementation of the GPU" (Asanovic et al, 2006, p.13). The uniformity and scalability of the arrays bring modern programming models to GPUs. Solving non-graphics problems is made possible by the significant amount of floating-point power embedded in the processor arrays of the GPUs (Patterson et al, 2009). As Patterson and Hennessy (2009) say, "Given the large degree of parallelism and the range of scalability of the processor array for graphics applications, the programming model for more general computing must express the massive parallelism directly but allow for scalable execution" (p.A-5). The vanguard of GPU hardware provides several floating-point units running simultaneously on Single Instruction, Multiple Data (SIMD) vectors. They also run on scalar data types, so a GPU can carry out scalar functions concurrently as well, providing heterogeneous parallelism (Fritz, 2009). "Various generations of Intel Pentiums and Power PCs only feature up to three 4-way SIMD vector processing units." (Fritz, 2009, p.2). What this signifies is that a GPU can provide SIMD parallelism both within a single component and across a battalion of components, whereas SIMD CPUs exploit only the element-wise parallelism (Fritz, 2009). Intel believes that future manycore processors will support tens to thousands of threads. Following a series of tests with the hyper-threading and dual-core technologies, those who manufacture CPUs have now undeniably entered the multicore era. In the not so distant future, all-purpose processors will contain not just tens and hundreds but thousands of cores (Offerman, 2010). However, according to nVidia, such processors already exist in modern times: graphics processors contain tens and hundreds of cores and support thousands of threads.
GPUs are currently available as separate chips on graphics cards and motherboards, and they are increasingly being used by application programmers (Offerman, 2010). "For specific problems they have found mappings onto these graphics engines that result in speedups by two orders of magnitude" (Offerman, 2010, p.1). Graphics processor manufacturers have become fully aware of this opportunity and are positioning their products as more than just graphics processors (Offerman, 2010). From the point of view of a programmer, a CPU offers multithreaded models that tolerate many control-flow instructions, whereas a GPU offers stream processing models which put a "large performance penalty on control flow changes... Currently, new programming languages and extensions to current languages are developed, supporting both explicit and implicit parallelism" (Offerman, 2010, p.1). General-purpose processing on a GPU is commonly known as "stream computing", which stresses parallel computation to achieve high performance. As Paolini (2009) puts it, "Beyond simple multithreaded programming, stream computing represents a logical extreme, where a massive number of threads work concurrently toward a common goal" (p.49). However, even though a GPU consists of many processor cores, these are mostly not general-purpose CPU cores, so the cores are limited in their applicability. For instance, a modern Nvidia GPU consists of several multiprocessors, each containing several SIMD stream processors. The memory structure of the architecture, however, is complex: the multiple memory types in a GPU are categorised according to their scope. "Registers serve individual processors; shared memory, constant cache, and texture cache serve multiprocessors; and global device memory serves all cores" (Brito Alves et al, 2009, p.785; Richardson and Gray, 2008). Memory at the multiprocessor level offers reduced latency, allowing the threads in the same block to communicate with one another, whereas the global device memory, which can also be reached from the CPU, has a much higher latency. "The problem must be highly parallel so that the program can break it into enough threads to keep the individual processors busy" (Wolfe, 2008, p.785). GPU architecture today has great potential in scientific computing and is able to provide effective parallel solutions for linear systems (Brito Alves et al, 2009).
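
To make the scope-based memory hierarchy described above concrete, here is a small sketch in CUDA C, with illustrative names and sizes that are not taken from Brito Alves et al: each block stages its slice of the input in the low-latency shared memory serving its multiprocessor, reduces it there, and touches the slower global device memory only once on the way in and once on the way out.

// Sketch: per-block partial sums. Assumes the kernel is launched with
// 256 threads per block (a power of two), matching the shared buffer size.
__global__ void block_sum(const float *in, float *partial, int n)
{
    __shared__ float buf[256];                     // multiprocessor-scope memory
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;

    buf[tid] = (i < n) ? in[i] : 0.0f;             // one read from global memory
    __syncthreads();

    for (int s = blockDim.x / 2; s > 0; s >>= 1) { // tree reduction in shared memory
        if (tid < s)
            buf[tid] += buf[tid + s];
        __syncthreads();
    }

    if (tid == 0)
        partial[blockIdx.x] = buf[0];              // one write to global memory per block
}

The per-block partial sums would still need to be combined, either by a second, much smaller launch or on the CPU; the point of the sketch is only the division of labour between registers, shared memory and global device memory described in the quotation above.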
Conclusion

Based on the facts and arguments above, it can be concluded that GPUs are indeed highly parallel structures. They make use of specialised architectures to execute extremely parallel calculations. Great emphasis was placed on CUDA because it plays a vital role in the functioning of GPUs: CUDA, as a parallel computing architecture, is the computing engine in a GPU. Professor Dongarra (2011), Director of the Innovative Computing Laboratory at the University of Tennessee, states that "GPUs have evolved to the point where many real-world applications are easily implemented on them and run significantly faster than on multi-core systems. Future computing architectures will be hybrid systems with parallel-core GPUs working in tandem with multi-core CPUs" (p.1). "Graphics processors deliver their highest performance at successive, relatively simple, massively parallel operations on large matrixes, using as few as possible control flow instructions" (Offerman, 2010, p.34). Modern GPU architectures emphasise the execution of many concurrent threads. This GPGPU approach to solving computational problems gives GPUs the edge in carrying out parallel computing effectively and comprehensively, making the GPU one of the most parallel structures in computing today.

Bibliography

Alerstam, E., Svensson, T., Andersson-Engels, S. (2008) Parallel computing with graphics processing units for high-speed Monte Carlo simulation of photon migration. ONLINE: atomic.physics.lu.se/fileadmin/atomfysik/Biophotonics/Publications/Alerstam2008_JBOLetters.pdf [Accessed 28 March 2011]
Asanovic, K., Bodik, R., Catanzaro, C.B., Gebis, J.J., Husbands, P., Keutzer, K., Patterson, D.A., Plishker, W.L., Shalf, J., Williams, S.W., Yelick, K.A. (2006) The Landscape of Parallel Computing Research: A View from Berkeley. Electrical Engineering and Computer Sciences, University of California at Berkeley. ONLINE: scribd.com/doc/52168004/7/Computer-Graphics-and-Games [Accessed 31 March 2011]
Barney, B. (2010) Introduction to Parallel Computing. Lawrence Livermore National Laboratory. ONLINE: https://computing.llnl.gov/tutorials/parallel_comp/ [Accessed 30 March 2011]
Brito Alves, R.M., Oller Nascimento, C.A., Biscaia Jr., E.C. (2009) 10th International Symposium on Process Systems Engineering, Computer-Aided Chemical Engineering, 27. ONLINE: http://books.google.co.uk/books?id=o5WVnm6RjosCpg=PA783dq=modern+parallel+computinghl=enei=WXyQTfftOoPJhAe1qIC8Dgsa=Xoi=book_resultct=resultresnum=1ved=0CDoQ6AEwAA#v=snippetq=Modern%20computing%20architectures%20are%20becoming%20more%20complex%20and%20the%20parallel%20computing%20f=false [Accessed 31 March 2011]
Diamantaras, K., Duch, W., Lliadis, L.S. (2010) Artificial Neural Networks - ICANN 2010, Part III. Springer-Verlag Berlin Heidelberg. ONLINE: http://books.google.co.uk/books?id=MZZwacoqtywCpg=PA83dq=graphics+processing+modern+parallel+computinghl=enei=S6-RTb6yBsXLhAekn9yRDwsa=Xoi=book_resultct=resultresnum=7ved=0CFEQ6AEwBg#v=onepageq=TeraFLOPSf=false [Accessed 29 March 2011]
Dongarra, J. (2011) Director of the Innovative Computing Laboratory, The University of Tennessee. In: Nvidia Corporation (2011). ONLINE: nvidia.com/object/GPU_Computing.html [Accessed 4 April 2011]
Fritz, N. (2009) SIMD Code Generation in Data-Parallel Programming. ONLINE: http://books.google.co.uk/books?id=hfrHzojrT-wCpg=PA1dq=graphics+processing+modern+parallel+computinghl=enei=S6-RTb6yBsXLhAekn9yRDwsa=Xoi=book_resultct=resultresnum=4ved=0CEQQ6AEwAw#v=onepageq=graphics%20processing%20modern%20parallel%20computingf=true [Accessed 1 April 2011]
Halfhill, T.R. (2008) Parallel Processing with CUDA: Nvidia's High-Performance Computing Platform Uses Massive Multithreading. Microprocessor Report: The Insider's Guide to Microprocessor Hardware. ONLINE: hh.se/download/18.70cf2e49129168da0158000123243/3+Parallel+Processing+with+CUDA.pdf [Accessed 29 March 2011]
Karniadakis, G., Kirby, R.M. (2003) Parallel Scientific Computing in C++ and MPI: A Seamless Approach to Parallel Algorithms and Their Implementation. Cambridge University Press. ONLINE: http://books.google.co.uk/books?id=KctfgAHqtl0Cpg=PA61dq=modern+parallel+computinghl=enei=WXyQTfftOoPJhAe1qIC8Dgsa=Xoi=book_resultct=resultresnum=5ved=0CFMQ6AEwBA#v=onepageq=modern%20parallel%20computingf=false [Accessed 1 April 2011]
Knowledge Base (2010) What are parallel computing, grid computing and supercomputing? University Information Technology Services, Indiana University. ONLINE: http://kb.iu.edu/data/angf.html [Accessed 28 March 2011]
Nvidia Corporation (2011) ONLINE: nvidia.com/object/GPU_Computing.html [Accessed 1 April 2011]
Offerman, A. (2010) Modern Commodity Hardware for Parallel Computing, and Opportunities for Artificial Intelligence. Leiden University. ONLINE: offerman.com/GPGPU/AO-psychology_thesis_report.pdf [Accessed 29 March 2011]
Paolini, A.L. (2009) A real-time super resolution implementation using modern graphics processing units. University of Delaware. ONLINE: http://books.google.co.uk/books?id=ZdC0bzbe95MCpg=PA49dq=graphics+processing+modern+parallel+computinghl=enei=S6-RTb6yBsXLhAekn9yRDwsa=Xoi=book_resultct=resultresnum=1ved=0CDQQ6AEwAA#v=onepageqf=false [Accessed 30 March 2011]
Patterson, D.A., Hennessy, J.L. (2009) Computer Organization and Design: The Hardware/Software Interface, 4th Edition. ONLINE: http://books.google.co.uk/books?id=3b63x-0P3_UCpg=SL1-PA4dq=graphics+processing+modern+parallel+computinghl=enei=S6-RTb6yBsXLhAekn9yRDwsa=Xoi=book_resultct=resultresnum=2ved=0CDkQ6AEwAQ#v=onepageq=graphics%20processing%20modern%20parallel%20computingf=false [Accessed 29 March 2011]
Reschke, J. (2004) Parallel Computing (Presentation)

Wednesday, November 6, 2019

KRIK KRAK essays

KRIK KRAK essays For this assignment I decided that I would just write a brief overview of one of the stories in Edwidge Danticat s book Krik? Krak!. The story that I have chosen to talk about is Between the Pool and the Gardenias. I choose this one to discuss because I thought the circumstance between the lady and the child was very weird and intrigued me to look into it more in depth. So I am going to talk about the series of events in this story and my thoughts on the young women in the story. I am mainly going to focus on questions 2 and 3 in the study questions for Chapter 5 of Edwindge Danticats book, Krik? Krak!. In this story I believe that the main reason that she takes the child is because she is lonely and wants to become close to someone. She also has had a couple miscarriages before and this has affected her greatly and caused her much suffering and mourning over the years. When she took in the child this made her dream about all of the thoughts and emotions that would have taken place if she had been able to conceive her children. She had been missing out of all the parts of parenthood that came with having a child and this baby that she picks up makes her feel more whole inside. The baby makes her life on this planet feel like she has a purpose for a short while and that is why she takes the baby into her house as one of her own. Now maybe this wouldnt be considered crazy everywhere, but what if the baby was already dead and you tried to do this then you would probably get taken into an insane asylum. This is the main reason why I believe that she is crazy because you usually dont see women picking up dead babies off the road and taking them home to care for them. Sure she has had tough luck bearing a child, but when she pretended that the baby was alive and breathing when she knew what was really wrong with the child was just ridiculous. It got really bad when she took the baby to tow...

Monday, November 4, 2019

Geology Essay Example | Topics and Well Written Essays - 250 words - 5

Geology - Essay Example

It is in these mountains that the described rock was found ('Washington Geologic Newsletter' 56). Further research provides evidence that the uplift of the Cascade Mountains coincided with the ancestral Columbia River cutting a canyon through them. In the years that followed, fluid deposition and intracanyon flows accounted for the presence of basalt in the river channel. Such basalt is the basic material that formed volcanic rocks similar to the type presented in the image. The latest event in the Columbia River Basalt was the deposition of the Saddle Mountains Basalt, which has been described as having a high silica content and as being noticeably thinner than the other basalts of the Columbia River (70). Its appearance is the result of extensive compression as well as of the ensuing extensional events that followed as the deposited basalt

Saturday, November 2, 2019

Value of Planning in the Urban Development Essay - 2

Value of Planning in the Urban Development - Essay Example

Traditionally, planning concentrated on improving the physical condition of houses and streets in response to prevailing circumstances, which contrasts with the newer aspects of urban planning. The city planning process is a highly complex matter, as it must take into account the characteristics and the long-range welfare of the people of that particular urban area or city (Barney, 2006). It follows a systematic process involving a series of studies and surveys, a land-use development plan, a transportation system plan and budget preparation, and the unified master plan must be approved by several agencies or legislative bodies. The history of urban development is a controversial subject in the United States. Many planners at the beginning of the 19th century weighed the total costs of urban planning against its benefits, and this drew a very different picture. As time went by, however, urban community planning reached a turning point with a progressive approach in which planners concentrated on maximizing the difference between costs and benefits rather than simply minimizing costs. This was in response to the fact that minimizing costs while also minimizing benefits was of no value, neither to the planners nor to the country. The history of America reveals that American cities grew and expanded in the early decades of the 19th century. With this expansion, geography gave rise to city planning, since most of the vital roads had already existed for two centuries and, as a result, wound erratically along local routes and around topographic features. Later, construction equipment permitted straighter roads, and planners pressed on with establishing a sound framework for developing the urban transportation network (Weiner, 2008).