
Essay on Open Source Software

Introduction:

The concept of open source affects many fields of study, from computer software and hardware to architecture, health, science, robotics and even politics. Linus Torvalds predicted this proliferation of openness when he said that "the future is open source everything". But Eric Raymond, another founder of the open source movement, rejected applying the concept to fields outside software (Wikipedia, 2009).

Free/open source software (F/OSS) is software whose source code is available to the user. It is not limited to running the software as supplied: users are also allowed to read, modify and recreate the source code (von Hippel & von Krogh, 2003). So F/OSS usually provides users with the source code and the information needed to apply their own changes to it.

The ability to run the program as the user wishes is one of the essential freedoms which Richard Stallman, the founder of free software and a defender of open source, confirmed in an interview. These freedoms are: the freedom to study how the software works, the freedom to change it according to a project's requirements, the freedom to redistribute it, and the freedom to distribute modified copies to others (Reilly 2008).

In Raymond's opinion, "good programmers know what to write; great ones know what to rewrite (and reuse)", and he illustrated that it is almost always easier to start from an existing solution than from nothing at all. But this can entangle a developer in hard-to-solve problems if the choice is not suitable. The Linux operating system was not written from scratch: Linus Torvalds started by studying ideas from Minix, a tiny Unix-like OS, and reused them to suit the project's purposes (Raymond, 1999).

Beginnings:

It was a big surprise for those who were used to paying for software to be told that groups of volunteers create high-quality software and offer it to the community for free. The idea of FOSS began in the 1960s. In that decade commercial software was not widely available, and researchers needed to share software code. As a result, they started to share source code within limited circles.

“Open sharing of software code was a common practice in the MIT Artificial Intelligence Laboratory in the early 1960s and in similar laboratories such as Stanford and Carnegie Mellon” (Moon & Sproull 2002).

After that, developers and users gave the idea more attention. Consequently, the foundations of free software were established in the 1980s, when Stallman called for free software and argued that software should be held in common. In an interview, Stallman recalled that computer users could not study or change the proprietary software that came with most computers in the 1980s; such software keeps users "divided and helpless". Dissatisfied with that situation, Stallman started the free software movement in 1983, when he began writing the GNU open source operating system (M. Reilly, 2008). The GNU General Public License grants users all of the essential freedoms mentioned above. By 2005 the idea had achieved its goals in the software field and had become more trusted by users and developers (Raymond 1999).

Wikipedia is a well-known example of open source principles in practice. It is a free encyclopedia started at the beginning of 2001 by highly qualified contributors. It provides free encyclopedias in many different languages, and its content has been created by user contributions.

Many other examples, like the Apache web server, the BIND name server and the Linux operating system kernel, are free for any user to use, amend and share.

Motivations:

Stallman's motivation to produce free software was his strong belief in freedom, particularly "the freedom for individuals to cooperate" (2003). But what incentives do other developers have to become contributors to open source projects? In other words, why do programmers volunteer their time and expertise, without any financial return, to create free software?

Raymond was one of the first GNU contributors, a developer of many open source networking tools and a significant participant in Linux development. He observed that the Linux project was going from "strength to strength", and attributed this to the "bazaar" model of the Linux development style, in which all contributors worked as hard as they would on individual projects of their own. He added that the democratic atmosphere of the bazaar model motivated him and his partners to work hard regardless of financial returns (Raymond 1999).

The Linux creator, Linus Torvalds, says: "I am basically a very lazy person who likes to get credit for things other people actually do" (Raymond 1999). Torvalds, as he states in his book 'Just for Fun', had an early interest in computing, does not seem to take himself too seriously, considers himself lucky to have made a career of it, and finds a lot of fun in writing software code.

It is perhaps surprising that hacker culture is also a significant motivating factor; it leads developers to impress their peers, gain a better reputation and raise their rank in the community (Zaleski et al. 2001).

Wikipedia showed, in a study conducted by Wikipedia administrators, that the reason its participants take part in such free work is the desire to create something beneficial that helps others and meets their requirements (Wikipedia 2010). The basic motivations for cooperation in learner open source communities, meanwhile, are learning specific topics, learning how to be future learners, and creating projects.

Advantages:

Open source software is characterized by several factors:

Its cost, where a lower price is preferred and free is the lowest of all.
Voluntary work, where volunteers are motivated by, and interested in, the project, which means they do their best.
Continuous testing by all participants and users, which leaves it almost free of bugs and errors.

These factors suggest that open source software is likely to be the best solution for any project, provided the needed features are available. Besides, developers have created it according to their own needs, which tends to give it a high level of quality and efficiency.

F/OSS has many advantages related to development cost and time, bug correction and independence. Time and cost are essential factors in software development, and both can be reduced by using OSS: it reduces the number of programmers an employer has to pay, and it provides ready, tested code from other projects, cutting the time it takes to build, test and develop. Moreover, software created by many developers, each of whom has revised it, corrected its errors and brought a different background, tends to have fewer bugs and faster detection and correction. Linus's law refers to this idea: 'Given enough eyeballs, all bugs are shallow' (Answers.com, 2009).
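Linus's law can be illustrated with a small probabilistic sketch: assuming each reviewer independently spots a given bug with some fixed probability (the 5% figure below is a made-up illustrative value, not a measured one), the chance that at least one reviewer finds it grows rapidly with the number of reviewers.

```python
# Illustrative sketch of Linus's law: "given enough eyeballs, all bugs
# are shallow". Assumes each reviewer independently spots a given bug
# with probability p (p = 0.05 is an invented illustrative value).

def detection_probability(reviewers: int, p: float = 0.05) -> float:
    """Probability that at least one of `reviewers` independent
    reviewers finds the bug: 1 - (1 - p)^reviewers."""
    return 1.0 - (1.0 - p) ** reviewers

for n in (1, 10, 50, 100):
    print(f"{n:3d} reviewers -> {detection_probability(n):.2%} chance the bug is found")
```

With these assumed numbers, a lone reviewer finds the bug 5% of the time, while a hundred independent reviewers find it over 99% of the time.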

Openness of source code provides communication paths and interactive communities. The F/OSS community in schools consists of researchers, learners and teachers, each of whom listens to the others and respects their opinions. In this style of community roles can be switched among members, and students may be assigned particular roles to take on for the project being studied, so they can share their ideas on all aspects of the project. A sense of control, on the other hand, is uncommon in most classrooms, so one advantage of a learners' community is that it prepares students for future life by involving them in the experience of leadership (Baytiyeh & Pfaffman, 2010).

Another advantage of open source software, as Zaleski stated in his article, is that open source innovation was the reason the Linux operating system moved quickly from being an obscure operating system used by programmers and hackers to an essential operating system in the business arena (2001).

Open source technology has also eased the problem of knowledge transfer in developing countries. Direct import of software not only costs these countries large amounts of money, but also leaves them in a difficult position where they do not know how to adapt the software to meet local needs (Alkhatib 2008).

Why do some organizations still buy commercial software instead of using free ones?

The voluntary nature of open source projects and their relative lack of financial support keep them far from marketing and advertising. This means that many organizations have never been informed that free solutions relevant to their needs are available. This "knowledge gap" causes further barriers: some managers do not know how to implement and use open source applications, and they may be unaware of the range of services provided with such applications, such as support and consulting.

To close this knowledge gap, an up-to-date archive of open source applications is available on the SourceForge.net website. The site lists more than 131,000 open source applications with their latest updates, allowing any organization to find suitable free software for its requirements. Further, assistance with the technical issues of implementing open source applications is available from many open source consultants such as IBM, Red Hat, and Open Sky Consulting.

Forking is another reason for not using F/OSS. The independence of open source developer groups leads to different versions of the same software. Although these versions start from the same source code, they are not able to interoperate, because the groups create their own versions without coordination. This phenomenon is called "forking", and it is responsible for fragmenting open source software. The open source BSD Unix community, for example, was divided into three parts in the early 1990s; the Emacs text editor and the NCSA web server are other examples, each forking into two branches in 1992 and 1995 respectively.

In Nagy's opinion, forking is dangerous because it causes lasting fragmentation for both the original software's adopters and the market for related applications. Many versions of one piece of software force adopters to choose one to support; consequently, the software never gains the critical mass of adopters it aims for. Vendors, on the other hand, are put in the position of choosing to support one of the forked versions, or all of them, in their own applications. In such cases, some adopters and vendors decide to wait for a standard version or to stall their adoption and support (Nagy et al. 2010).

Conclusion:

No one can predict the future of software, but developers can expect that open source software will grow stronger and gain increasing trust from the traditional software industry.

Historically, one can recognize the discontinuities that appeared between the IBM System of the 1960s, the first PCs at the end of the 1970s and the open source movement in the 1990s. So it is expected that a similar technology gap will open in the next 10-15 years for a new software innovation (Campbell-Kelly 2008).

Green IT

Introduction:

IT has brought many significant solutions for environmental sustainability, but at the same time it has caused a lot of problems, especially in data centers, where energy is consumed enormously (Murugesan 2010).

Hopper, a professor of computer technology at the University of Cambridge and head of its Computer Laboratory, claimed that "the system we now employ is hugely wasteful". He proposed creating new systems that are more efficient, less expensive and help reduce energy consumption, because he believed that moving data is cheaper than moving energy (Kurp 2008).

Computers impact the environment from the first stage of production to the last stage of disposal. Moreover, increased consumption of energy leads to more greenhouse gas emissions, because the main sources of energy are coal, oil and gas burning (Murugesan 2010).

Since environmental problems arise at each stage of a computer's life, green IT must cover all of these areas, from design through manufacturing and use, and ending with disposal.

In the article Harnessing Green IT: Principles and Practices, San Murugesan defines green computing as “the study and practice of designing, manufacturing, using, and disposing of computers, servers, and associated subsystems -such as monitors, printers, storage devices, and networking and communications systems- efficiently and effectively with minimal or no impact on the environment.” (Murugesan 2008)

Suggested solutions:

Dell and Hewlett-Packard, two computer manufacturers, decided to address the problem by retooling their products. For David Wang, the data center architect for Teradata, however, the solution is not to replace all old computers with more environmentally friendly ones. He argued that attention must be paid to rising power consumption as well as to heat removal in data centers (Kurp 2008).

Murugesan has illustrated areas and activities which are involved in green IT solutions as the following:

– Environmental friendly designing;

– Energy-efficient computing;

– Power management;

– Location and architecture of data centers;

– Server virtualization;

– Responsible disposal and recycling;

– Regulatory compliance;

– Green metrics, assessment tools and methodology;

– Environment-related risk reducing;

– Use of renewable energy sources; and

– Eco-labeling of IT products (2008).

Other solutions have been proposed by Hasbrouck and Woodruff. They suggested two strategies for green computing:

Reduce computing technology's contribution to the problem by producing energy-efficient computers, taking reusability into account in computer design, using fewer materials and working toward the recycling of computers and related systems. Moreover, they indicated that turning off inactive computers, using energy-efficient devices and reducing the emissions from computer manufacturing are significant parts of this strategy.
Give computing a role in resolving the issue by creating green applications that enable the design of green objects and green processes, such as designing green buildings, inventing sources of renewable energy and designing fuel-efficient aircraft (2008).

Most green IT efforts are directed at the first strategy, solving the environmental problems that have grown along with the increased use of computers.

As a result of these problems, many associations are turning to green computing to save money and reduce waste. To that end, Dick Sullivan listed five major trends:

Virtualization in all its forms, especially for servers, storage and network environments: in other words, transforming entire machines into software-based entities. For instance, a room with five servers can be replaced by one efficient server running high-performance software.
Cloud computing, which removes the need to own data centers, big servers or storage systems. Many organizations need only a small amount of proprietary equipment and functionality; they can simply purchase what they need from someone else, who takes responsibility for security, power and maintenance.
Intelligent compression or single-instance storage. Sullivan noted that "a huge amount of data is basically an exact duplicate of other data", so converting to these techniques can eliminate the waste and cut the total data storage needed.
Solid-state disks (SSDs), which have no moving parts and are not magnetic, making them a more robust, safer and faster way to store and access data.
Individual awareness: everyone can make an impact and be part of green computing by attending to their direct and indirect daily computing habits. Employees, for example, can support green computing by turning off computers that are not in use, banning screen savers and shortening the delay before idle computers power down. Printing, likewise, wastes a lot of paper, so managing this daily process by printing only as needed and adopting double-sided printing makes a significant impact (Clarke 2009).
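The habit changes above translate into measurable savings. A back-of-the-envelope sketch (all figures are illustrative assumptions, not data from the essay or the cited survey):

```python
# Back-of-the-envelope estimate of the savings from switching off idle
# office PCs overnight. All figures are illustrative assumptions: a
# desktop drawing ~100 W when idle, left on 16 unnecessary hours/day,
# 250 working days/year, and ~0.5 kg CO2 per kWh of grid electricity.

IDLE_WATTS = 100
IDLE_HOURS_PER_DAY = 16
WORKDAYS_PER_YEAR = 250
KG_CO2_PER_KWH = 0.5

def annual_savings(num_pcs: int) -> tuple[float, float]:
    """Return (kWh saved per year, kg CO2 avoided per year)."""
    kwh = num_pcs * IDLE_WATTS / 1000 * IDLE_HOURS_PER_DAY * WORKDAYS_PER_YEAR
    return kwh, kwh * KG_CO2_PER_KWH

kwh, co2 = annual_savings(100)   # a hypothetical 100-PC office
print(f"{kwh:,.0f} kWh and {co2:,.0f} kg CO2 avoided per year")
```

Under these assumptions a 100-PC office avoids 40,000 kWh and 20 tonnes of CO2 a year simply by powering machines down; the point of the sketch is the arithmetic, not the particular constants.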

Many efforts have been made to support the idea of green IT. The Climate Savers Computing Initiative (CSCI) is one of them. It seeks to reduce the electric power consumption of PCs, and it has established a catalogue of green products from the organizations involved, along with helpful information about reducing PC energy consumption.

The initiative is a group of consumers, businesses and conservation organizations formed in 2007. It has achieved impressive results: 50 percent of the energy consumed by computers was saved by 2010, reducing global CO2 emissions from the operation of computers by 54 million tons a year (Wikipedia 2010).

Motivations:

To compel computer users to adopt green IT solutions and apply them in their daily routines, governments can impose green taxes and rules.

But it is better, in my opinion, to raise people's awareness of the danger that threatens the Earth if they continue using traditional computers in traditional ways, and to teach them the benefits of green IT.

Applying green IT in all affected areas offers individuals and organizations financial benefits, as IT operations achieve better energy efficiency through green initiatives. In a survey by Sun Microsystems Australia, 1500 responses were collected from 758 organizations of different sizes. Most of these responses indicated that the main reasons for adopting green IT practices are reducing energy consumption and lowering costs.

As a result, most companies have started to prioritize environmental issues. Moreover, institutions and corporations ask their suppliers to consider how to "green up" their products and manufacturing processes. Not only companies but also individuals have begun to adhere to the environmentally friendly side of IT (Murugesan 2008).

Green IT approach:

As mentioned above, the environmental problems caused by computing should be addressed by a holistic approach, one that includes solutions for all areas affected by the use of computers.

This approach, as explained by Murugesan, consists of four concepts:

Green use, which aims to reduce energy consumption and to use computers in an environmentally friendly manner.
Green disposal, where computers, related systems like printers, and other electronic equipment are reused, refurbished or recycled.
Green design, where new computers, servers and cooling devices are designed to be more energy efficient.
Green manufacturing, which aims to adopt processes for building computers and subsystems that minimize or eliminate their impact on the environment (2008).

References:

2003. Richard Stallman: Freedom–His Passion Both For Work And In Life. Electronic Design, 51(23), 112.

Answers.com. What are the advantages and disadvantages of open source software and why? [Internet]. Available from: http://wiki.answers.com/Q/What_are_the_advantages_and_disadvantages_of_open_source_software_and_why [Accessed 30th November 2010]

Campbell-Kelly, M., 2008. Historical Reflections: Will the Future of Software be Open Source? Communications of the ACM, 51(10), 21-23.

Clarke, K., 2009. Green computing trends you should know. Associations Now, 5(8), 19.

Hasbrouck, J. & Woodruff, A., 2008. Green Homeowners as Lead Adopters: Sustainable Living and Green Computing. Intel Technology Journal, 12(1), 39-48.

Kurp, P., 2008. Green Computing. Communications of the ACM, 51(10), 11-13.

Michael Bloch, Open source software in your online business -advantages/ disadvantages, 1999-2010

Moon, J.Y. & Sproull, L., 2002. Essence of distributed work: The case of the Linux kernel. In P. Hinds & S. Kiesler, eds. Distributed work. Cambridge, MA US: MIT Press, pp. 381-404.

Murugesan. S., 2008, “Harnessing Green IT: Principles and Practices,” IEEE IT Professional, January–February 2008, pp 24-33.

Murugesan, S., 2010. Making IT Green. IEEE Computer Society, Vol. 12, No. 2.

Nagy, D., Yassin, A.M. & Bhattacherjee, A., 2010. Organizational Adoption of Open Source Software: Barriers and Remedies. Communications of the ACM, 53(3), 148-151.

Raymond, E., 1999. The Cathedral and the Bazaar. Knowledge, Technology & Policy, 12(3), 23.

Reilly, M., 2008. Interview: Richard Stallman, one of the founders of “free software”.

Potdar, V. & Chang, E., 2004. Open source and closed source software development methodologies. Proc. of the 4th Workshop on Open Source Software Engineering, pp. 105-109, Edinburgh, Scotland, May 25 2004.

Wikipedia the free encyclopedia (2001) Open source [Internet]. Available from: http://en.wikipedia.org/wiki/Open_source, [Accessed 4th November 2010]

Zaleski, J. et al., 2001. JUST FOR FUN (Book Review). Publishers Weekly, 248(17), 60.


The purpose of the risk descriptions study is to explore the risk predilection on the effectiveness of project software.

Abstract

The delivery of comprehensive web software design projects within the desired financial and quality attributes can be delayed because of organizational problems, a high rate of complexity, and the variable impact of risks on a software project. The supervision and development of web software design is correspondingly difficult, so it is important to specify and investigate these software risks with proper assessment and management. In this paper, the analysis and assessment of these risks is carried out using probabilistic measurements. The analysis should be useful for qualitative and quantitative measurement of risk in web design and for implementing an effective risk management process.

Risk Recognitions

Risks are uncertain future events with a likelihood of occurrence and a potential for loss. Risk classification and management are a major concern in every software project. Effective analysis and assessment of software risks helps in competent planning and in assigning the effort to classify risk attributes. A web software design project faces two kinds of risk: internal risks, which relate to issues within the organization, and external risks, which lie outside the organization and are more complicated to handle. Web software projects also carry a high degree of uncertainty, due to explicit requirements from the end users' side and changing technical qualities. The critical factors, or risks, that can affect the project's phases are as follows: inappropriate design; impracticable scheduling and budgetary planning; and inadequate verification of quality attributes and user satisfaction. The flow diagram of web software risks is given below.

Risk Descriptions

The purpose of the risk description study is to explore the effect of risk predilection on the effectiveness of a software project. The level of risk planning and strategy is an important parameter that has a variable effect on the project establishment: low-risk strategies limit the expected outcomes when unpredicted events arise, while high-risk strategies can provide more profitable outcomes. The different aspects and methodologies of risk description are as follows.

Problems/Events of web software design risks:

The main events or problems behind web design risks are as follows:

1) Inappropriate Designing:
Incessantly varying constraints
Lack of use of advanced technology
Complexity in the implementation phase
Complicated integration of project modules
Failure to address priority inconsistencies
Failure to accept responsibilities
Inappropriate instructions, scheduling and communication between team members
2) Impracticable Scheduling and Budgetary Planning:
Inadequate information about software tools
Lack of project estimation
Sluggish management life cycle
Lack of budgetary resources
Inadequate knowledge of technical equipment
Inappropriate security plans
Inadequate documentation of the design methodology
3) Inadequate Quality Attribute Verification and User Satisfaction:
Lack of observation and unrealistic changes
Inadequate budgetary plans
Badly timed changes in technology and management processes
Failure in financial planning
Lack of commitment between end users and software developers
Lack of project definitions and attributes
Varying customer invention policies and priorities

Risk Impact and Probability Ranking:

The probability and impact array assigns a risk rating for each identified risk. Each risk is rated according to its probability of occurrence and its impact upon objectives. The overall risk impact ranking shows whether the appropriate service level within the project is affected or stable.

Risk: the product of the probability and impact factors:

Risk = Probability × Impact

Probability: the chance of the risk/event occurring or not.

Impact: the outcome/end result if the risk occurs.

The risk impact matrix definitions are:

Impact Criteria

Crucial (C): if the risk event occurs, the project will collapse.
Severe (S): if the risk event occurs, the project will suffer; a major portion of expenditure increases.
Modest (Mo): if the risk event occurs, the project will suffer; a modest portion of project expenditure increases.
Minor (Mi): if the risk event occurs, the project will suffer; a minute portion of project cost increases.
Negligible (N): if the risk event occurs, nothing changes in project expenditure.

(Fig.2)

The probability chart definitions are:

Probability Criteria

0-20%: rare chance that the risk will occur.
21-40%: improbable that the risk will occur.
41-60%: even chance that the risk will occur.
61-80%: expected that the risk will occur.
81-100%: almost certain that the risk will occur.

(Fig-3)

The table of risk awareness areas and risk conditions gives the impact of risk occurrence, which is useful for judging the overall efficiency of the work. The table of risk impact along its different dimensions is given below.

Risk Severity Table

Risk No. | Risk | Probability (%) | Impact | Intensity
R1 | Poor mission strategy and targets | 15 | C | Low
R2 | Poor decision-making management support | 25 | S | Medium
R3 | Lack of customer participation | 12 | Mi | Low
R4 | Lack of technical knowledge | 40 | C | Medium
R5 | Unclear statement of requirements & objectives | 65 | S | High
R6 | Lack of project scheduling & estimating | 42 | Mo | Low
R7 | Lack of advanced technology use | 75 | C | High
R8 | Lack of managerial experience | 62 | Mo | High
R9 | Communication gap between staff members, stakeholders and end users | 70 | C | Medium
R10 | Lack of budgetary supply & planning | 60 | S | High
R11 | Lack of key stakeholder involvement | 35 | Mi | Low
R12 | Customers unconvinced at project delivery | 40 | Mo | Medium

(Fig-4)
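A probability × impact ranking like the one in Fig-4 can be computed mechanically. The sketch below uses one plausible scoring rule: numeric weights for the impact classes and thresholds that bucket the product into Low/Medium/High. The weights and thresholds are illustrative assumptions; they do not reproduce the exact intensity column of Fig-4, since the essay does not state the rule behind that table.

```python
# A generic probability x impact scoring sketch. The impact weights and
# intensity thresholds are illustrative assumptions, not the exact rule
# used to build Fig-4.

IMPACT_WEIGHT = {"N": 1, "Mi": 2, "Mo": 3, "S": 4, "C": 5}

def risk_score(probability_pct: float, impact: str) -> float:
    """Risk = probability (as a fraction) x impact weight, range 0..5."""
    return probability_pct / 100 * IMPACT_WEIGHT[impact]

def intensity(score: float) -> str:
    """Bucket a score into an intensity band (assumed thresholds)."""
    if score < 1.0:
        return "Low"
    if score < 2.5:
        return "Medium"
    return "High"

# Three risks taken from the severity table above.
risks = {
    "R5": (65, "S"),   # unclear requirements & objectives
    "R7": (75, "C"),   # lack of advanced technology use
    "R1": (15, "C"),   # poor mission strategy and targets
}
for rid, (p, imp) in sorted(risks.items(),
                            key=lambda kv: risk_score(*kv[1]), reverse=True):
    s = risk_score(p, imp)
    print(f"{rid}: score {s:.2f} -> {intensity(s)}")
```

Sorting by the score reproduces the prioritization step described in the next paragraph: the highest-scoring risks are the ones that need contingency plans first.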

The risk impact and probability ranking graph is based on the two fundamental dimensions of risk: the probability that the risk will occur is plotted on the abscissa (x-axis) and the impact of the risk on the ordinate (y-axis).

Using these two measures, risks are plotted on the graph. This gives a swift and clear view of the priorities needed at each stage, from which contingency plans can be decided to handle and manage the particular risk areas.

The fundamental structure of the risk impact and probability ranking graph is shown below

Risk impact and Probability Ranking Graph

(Fig.5: marker colors denote Low, Medium and High intensity)

Extenuation and Eventuality Plans
Risk Extenuation/Mitigation Planning:

Risk extenuation (mitigation) is the strategic sub-process of risk management that addresses the risks arising in an organization or project. Its purpose is to identify the risk factors in the organization and its operational systems, measure their effects, and reduce those factors before the project is implemented. Risk mitigation is the systematic technique the management team uses to minimize risk factors and stabilize the project strategy and plans. Risk extenuation planning can follow any of the following options:

Risk Postulation:

Accept the risk, with contingency planning in mind, and continue operating and developing the system to bring the risk down to an acceptable intensity.

Risk Evading:

Avoid the risk by eradicating the risk factors, and use alternative approaches to bring the risk to a lower level.

Risk Mitigation:

Reduce the probability and impact factors using different methodologies, and continually check the risk conditions. This includes appraisals, surveys, risk avoidance milestones, etc.

Risk Transference:

Transfer the risk to another party when an opportunity for risk reduction is available.

Risk Rescheduling:

Reschedule the affected portions of the project strategy so that the risk is less likely to take place.

Eventuality/Contingency Planning:

Eventuality (contingency) planning is a logical risk-management approach to identifying the risk impacts that may take place in the project. The contingency strategy can be used before and after a risk occurs. Contingency planning is reliable when the following aspects are considered:

Analyze which key risks are likely and could jeopardize the project.
Prioritize the risks by their intensity.
Analyze which risk events are under control and which are out of control.
Apply contingency planning to those risk events that are under the management team's supervision.

The risk extenuation and eventuality planning flowchart is shown below

(Fig.6)


The proposed extenuation and eventuality strategies for the selected risks within the project are therefore as follows.

Inappropriate Requirement Statements:

This type of risk is a major factor affecting project scheduling, quality attributes and budgetary plans. It is preventable if the proper mitigation tasks are carried out, namely:

1) Maintain a clear understanding of stakeholders' requirements.

2) Build up the communication system and reduce the gap between customers and project members.

3) Split the end users into different categories according to their fields of interest.

4) Deliver clear feasibility requirements to end users and stakeholders throughout the project.

5) Provide a workshop on group-based requirements and design techniques for end users, stakeholders and project members.

Wrong Cost Estimation:

Adequate cost estimation is a major problem in the web software planning process. Precise cost estimation is a valuable success factor in software planning as well as in the risk development cycle. The cost estimation of software planning depends on three factors:

a) Individual effort

b) Time interval

c) Budgetary resources
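The text below recommends empirical cost estimation models; basic COCOMO is a classic example of that family, chosen here purely for illustration since the essay does not name a specific model. It estimates effort and schedule from code size alone, using published coefficients per project class:

```python
# Basic COCOMO sketch: effort = a * (KLOC ** b) person-months,
# schedule = c * (effort ** d) months. The coefficients are the
# published basic-COCOMO values; this is one empirical model among
# many, shown only as an example of the kind the text refers to.

COCOMO = {  # project class: (a, b, c, d)
    "organic":      (2.4, 1.05, 2.5, 0.38),
    "semidetached": (3.0, 1.12, 2.5, 0.35),
    "embedded":     (3.6, 1.20, 2.5, 0.32),
}

def estimate(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Return (effort in person-months, schedule in months)."""
    a, b, c, d = COCOMO[mode]
    effort = a * kloc ** b
    schedule = c * effort ** d
    return effort, schedule

effort, months = estimate(32, "organic")  # a hypothetical 32-KLOC web project
print(f"~{effort:.0f} person-months over ~{months:.0f} months")
```

An estimate like this ties the three factors above together: effort (person-months), time interval (schedule months) and, via a salary rate, budgetary resources.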

The mitigation and contingency strategies for this type of risk are as follows:

1) Reduce the risk of wrong cost estimation by selecting the best cost estimation model early.

2) Use empirical cost estimation (ECE) and analytical models to minimize and control inappropriate investment.

3) Draw appropriately sized comparisons between current and past projects.

4) Explain the selected cost estimation model and the complexity of the project to the stakeholders.

5) Hire experienced cost estimators who can head off the complexity of project cost before the start.

6) When the duration of the project is unknown, squeeze the project activities down onto the critical path.

7) The project manager should identify the causes of delay and discuss alternative routes with stakeholders and team members.

Higher user pressure than expected:

This type of risk impact is avoidable and has a low impact intensity on overall progress. End users' behavior is sometimes variable and creates problems in the development of the project. The mitigation strategies are as follows:

1) Build up a generous level of communication and understanding with customers and deliver clear feature requirements, so that unrealistic pressure between users and staff members is controlled.

2) Provide a non-functional prototype to end users if they become overexcited about the progress of the project.

Lack of use of Advanced Technology:

This type of risk factor creates difficulties in the operations section and occurs due to the following technical issues:

1) Inadequate sources of skilled equipment for the specific technology.

2) Supplementary funding is required.

3) Failure to evaluate maintainability.

4) Client conflicts over the technology.

The mitigation strategy for such situations is as follows:

1) Select the appropriate technology according to the condition of the project at the start, so that all issues are considered in good time.

2) The use of the best technology should be maintained throughout the project.

3) The key stakeholders of the project should be familiar with the use of the chosen technology.

Lack of Training on Equipment & Inexperienced Staff:

The impact of this type of risk is minor and can be compensated for by using different techniques. Its impact is not transferred to the next stages, but it does produce a short delay. The mitigation strategies are as follows:

1) If appropriate funding is not available, the project manager should negotiate for less experienced staff with supplementary training.

2) Slack activities can be used in the next stage of training.

3) Exercise careful consideration in the selection of staff members.

4) Experienced staff should be selected for significant tasks, and they should ensure that no delay is probable during the project process.

Budgetary Propositions of Mitigation and Contingency Plans:

In this process, a budgetary analysis of all resources, implementation control of project aspects, and possible mitigation & contingency feasibility reports should be conducted to establish the financial circumstances. The budgetary proposition analysis can be qualitative or quantitative and is utilized for risk reduction at different stages of the project. The Expected Monetary Value (EMV) can be used to measure the budgetary propositions of mitigation and contingency plans.

EMV is the product of the likelihood of risk occurrence and the monetary outcome of that occurrence.

Mathematically, it can be expressed as

EMV = Probability (P) × Monetary Outcome (O)

The EMV of the project can be calculated using the expression

P(EMV) = Project expenditure + Risk impact cost − Occasion probable cost

Where

Project expenditure: the preliminary estimated cost of the project

Risk impact cost: the cost of reducing the probability/impact of project risks

Occasion probable cost: the cost covering risks that occur during the project

In this process, risk is directly proportional to cost and inversely proportional to profit. On the other hand, the probable occasion is directly proportional to profit and inversely proportional to cost.
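As a quick illustration of the two expressions above, the sketch below computes a single-risk EMV and a project-level EMV. All numeric values in it are hypothetical placeholders, not figures from the InterLinc budget.

```python
# Sketch of the EMV expressions defined above.
# All numeric inputs below are hypothetical placeholders.

def emv(probability, monetary_outcome):
    """EMV = Probability (P) x Monetary Outcome (O)."""
    return probability * monetary_outcome

def project_emv(project_expenditure, risk_impact_cost, occasion_probable_cost):
    """P(EMV) = Project expenditure + Risk impact cost - Occasion probable cost."""
    return project_expenditure + risk_impact_cost - occasion_probable_cost

# A single risk with a 40% chance of a $10,000 loss:
print(emv(0.40, 10_000))                     # 4000.0

# Project-level EMV with placeholder costs:
print(project_emv(150_000, 20_000, 5_000))   # 165000
```

The point of separating the two functions is that the first prices one risk, while the second rolls the risk impact and opportunity costs into the overall project figure.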

According to the web design process of project InterLinc, the budget descriptions for the web server, internet services, client training, networking server, information support centre, quality assurance, etc. are given below

Budget Summary

Services                        Estimated Budget for Services ($)
Quality Assurance               6500
Web Server & Web Training       22000
Network Server                  19800
Web Browser Access              38400
Telecommunication Services      45625
Customer Training Services      4456
Miscellaneous Services          14165
Total Cost                      150946

(Fig-7)

Now, the expected EMV of the mitigation and contingency plans for the avoidance of risks in the web software project is shown below

EMV Budgetary Summary

Risk No.   Risk Condition                                         P      O ($)   EMV ($)
R1         Lack of use of Advanced Technology                     0.75   67500   50625
R2         Lack of training on equipment & inexperienced staff    0.65   48506   31528.9
R3         Wrong cost estimation                                  0.42   58602   24612.84
R4         End-user dissatisfaction                               0.12   3560    427.2
R5         Inappropriate supply requirements                      0.55   21280   11704
Total                                                                            118897.94

(Fig-8)
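Each EMV entry in Fig-8 is just P × O. As a quick check, the sketch below recomputes the rows and the total; note that the R2 outcome is taken as 48506, the value implied by its listed EMV of 31528.9 and the column total.

```python
# Recompute the EMV Budgetary Summary (Fig-8): EMV = P x O for each risk.
risks = [
    ("R1", "Lack of use of Advanced Technology",                  0.75, 67500),
    ("R2", "Lack of training on equipment & inexperienced staff", 0.65, 48506),
    ("R3", "Wrong cost estimation",                               0.42, 58602),
    ("R4", "End-user dissatisfaction",                            0.12, 3560),
    ("R5", "Inappropriate supply requirements",                   0.55, 21280),
]

total = 0.0
for risk_id, condition, p, outcome in risks:
    row_emv = p * outcome
    total += row_emv
    print(f"{risk_id}: {row_emv:.2f}")

print(f"Total: {total:.2f}")  # Total: 118897.94
```

The recomputed total agrees with the figure in the table, which confirms the row-level arithmetic.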

Risk Management Techniques

The main objective of risk management techniques is to identify potential/hazardous risk factors, from the point of view of their technical management aspects, before they occur in the project. There are many technique methodologies for managing risk factors within a project, but we discuss only two of them.

Ishikawa/Fishbone Diagram:

The fishbone diagram is also called the cause-and-effect diagram. It is used to explain and analyze the causes and effects of risks that occur in an organization/project. The fishbone diagrams for the effects and causes of risks within the project are given below

1) Web designing risk:

(Fig.9)

2) Scheduling and budgetary planning risk:

3) Quality and User Satisfaction Risk:

RISKIT Analysis Graph:

The RISKIT Analysis Graph is a diagrammatic formulation that can be used to identify the different features of risk factors more formally. This method acts as a tool that provides a communication link during risk mitigation and contingency planning. The RISKIT Analysis Graph is useful for decomposing the potential risk process into well-defined elements. The diagrams of its different elements are shown below.

The RISKIT Analysis Graphs for the selected risk factors, causes and effects within the project are given below

1) Web Designing Strategy:

2) Scheduling and Budgetary Planning:

3) Quality Attributes & Customer Satisfaction:

Valuable Risk Management Lifecycle:

The risk management lifecycle is designed for a quick response to risks in the organization/project. The valuable management lifecycle of the RISKIT Analysis Graph is given below

Appendix A: Table of Figures

Figure 1: Risk Classification

Figure 2: Impact Criteria

Figure 3: Probability Criteria

Figure 4: Risk Severity

Figure 5: Risk Impact and Probability ranking graph

Figure 6: Risk Mitigation Plan

Figure 7: Budget Summary

Figure 8: EMV Budgetary Summary

Figure 9 to 11: Fishbone Diagram

Figure 12 to 15: RISKIT Analysis Graph

Figure 16: Risk Management Lifecycle

Appendix B: References

Kwak, Y & Stoddard, J 2004, Project Risk Management, 5th ed., Technovation, pp. 915-920.

Boehm, B.W 1991, Software Risk Management, New York: McGraw-Hill, vol. 41, pp. 32-41.

Han, W & Huang, S 2007, An Empirical Analysis of Risk Components and Performance on Software Projects, Systems and Software, vol. 80, pp. 42-50.

Sengupta, B & Chandra, S 2006, A Research Agenda for Distributed Software Development, Shanghai: ACM, pp. 731-740.

Offutt, J 2009, Quality Attributes of Web Software Applications, IEEE Software, vol. 19, pp. 25-32.

Keshlaf, A & Hashim, K 2000, A Model and Prototype Tool to Manage Software Risks, IEEE Computer Society, pp. 297-305.

Charette, R 1989, Software Engineering Risk Analysis and Management, New York: McGraw-Hill, pp. 10-15.

Sommerville, I 2004, Mitigating Risk with Effective Requirements, New Jersey, pp. 12-28.

Boehm, Barry W 1997, Introduction to Software Risk & Risk Management, viewed 20 April 2011.

Booker, G 2003, Software Project Risk Management & Organization, INFO 638, Lecture #2, Drexel Univ., viewed 25 April 2011.

Hashemian, V 2003, The Riskit Method for Software Risk Management, Univ. of Waterloo, viewed 6 May 2011.

Hall, Elaine M 1998, Managing Risk: Methods for Software Systems Development, Addison-Wesley, Reading, Mass.

Statz, J, Oxley, D & O'Toole, P 1997, Identifying and Managing Risks for Software Process Improvement, Crosstalk, pp. 13-18.


Mark Zuckerberg, an American computer scientist, software developer and philanthropist

Introduction

“When you give everyone a voice and give people power, the system usually ends up in a really good place. So, what we view our role as, is giving people that power,” said Mark Zuckerberg.

The above quote is from the founder of Facebook, Mark Zuckerberg, an American computer scientist, software developer and philanthropist best known for creating the social networking site Facebook (Kroll, 2008). Despite his young age of 26, he is already a billionaire and was awarded Time Person of the Year 2010 (Grossman, 2010).

Facebook is a social networking website launched in February 2004, operated and privately owned by Facebook, Inc. (PenNameEM, 2011). The website's membership was initially limited by the founders to Harvard students, but was later expanded to other colleges in the Boston area, then to other universities and high schools.

It later expanded further to include any university student, then high school students and, finally, anyone aged 13 and over. In 2006, Facebook allowed everyone to join and also added a News Feed feature that broadcast changes in members' pages to all Facebook users identified in their personal networks of friends. It turned Facebook into a personalized social news service (Farlex, 2010).

Facebook has many features (Social Media Boomer, 2009). For example, people can add friends and send messages, which can be sent privately or publicly. A wall allows friends to post messages for the user to see, and a user can tell others what he/she is up to by updating his/her status. The News Feed publishes updates on every user's homepage, to be seen by the user's friends. Facebook Notes is a blogging feature that allows text and images, and blogs can be imported into Facebook. Facebook also has a chat feature that allows instant messaging with friends.

Mark Zuckerberg knew the power of the social network more than anyone else and thus created Facebook, which has evolved into an essential, must-visit-each-day site for many people all around the globe. The success of Facebook conveys the importance of the social network and has attracted a number of business people to venture into the field.

As Mark Zuckerberg said, “the web is at a really important turning point right now.” More and more people have engaged in social networking, resulting in a common norm of voicing one's opinion on a website instead of in front of the public, making friends blindly and inadvertently exposing one's whereabouts, which in turn makes the individual more vulnerable to various dangers.

On the bright side, social networking does have several advantages. It helps people to keep in touch despite geographical distances, and it is a much cheaper medium compared with the others. In short, Facebook has its pros and cons; people have to be aware of its impact, be smart users and not fall prey to its downfalls.

Background

Studies show that the number of Facebook users has increased tremendously worldwide and how seriously people get addicted to Facebook (Foster, 2010). This phenomenon has raised both positive and negative issues. People are starting to realize the power of Facebook politically, economically and, not least, socially. Well-known people have made many investments in the Facebook company. Thus, Facebook has been upgraded and has expanded in every potential direction, and yet meets no limitations.

The website currently has more than 400 million active users worldwide. It is undeniably a huge phenomenon and has affected other social websites as well. By January 2009, Facebook had overtaken MySpace (TechCrunch, 2010).

Since people have become more and more attached to this particular social networking website, it has stirred up some important issues in society. Facebook has clashed with some rules and regulations and has been banned in some countries, including Pakistan (Aljazeera, 2010), Syria (Global Voices, 2010), China (Youth Radio, 2010), Vietnam (CNN, 2010) and Iran (Telegraph Media Group, 2010). It has also been banned at many places of work to discourage employees from wasting time using the service (Facebook, 2010).

Facebook settled a lawsuit regarding claims over source code and intellectual property. The site has also been involved in controversy over the sale of fans and friends, and the privacy issue has been hotly debated among the people (Facebook, 2010). A single status update on Facebook lets the whole network of friends know what you are up to. Likewise, because of posted photo albums, strangers may approach you on the street despite the fact that you do not even know them. Stalking is also a worrying phenomenon.

On the other hand, Facebook has its positive side. It has been seen as a wide, rewarding potential market by investors. Since it is a global phenomenon, international marketers have utilized this fact and met with satisfying results. Advertising on Facebook is also an effective way to reach a target audience.

Facebook undeniably brings a double-edged effect to the lives of many, and thus the public's awareness of both the positive and negative impacts of Facebook will be further examined by the researcher.

Problem statement

“With the creation of Facebook in 2004, colleges and universities across the United States have been playing catch-up with students. This new technology carries much weight as a new medium for students to build social connections and grow as members of their institutions. However, this new technology also brings negative implications, such as lowered GPAs with greater use,” proclaimed Boogart (2004).

People may not be aware of the double-sided impact of Facebook on their lives. They only realize how social networking crushes them at the point where they have become victims of social networking.

Addiction to Facebook causes many problems in our lives, such as widening the gap between friends and family, as one is more willing to stick to the computer screen than to socialize with the people around. Serious attachment to Facebook is also an obstacle to performance: one's productivity during office hours may suffer from being overly attached to Facebook, and students' grades may fall because of it too.

On the other hand, Facebook is undeniably a convenient place to socialize and keep in touch. Does the public realize both the positive and negative impacts of Facebook on society, and are they smart users or blind users of Facebook?

Objectives

The main purpose of this research is to determine the level of awareness among Malaysians regarding the impact of Facebook on their lives. It is also designed to identify the media preferences and consumption patterns of the public. These findings will help in creating further effective messages. The researcher strongly believes that the public should have a better understanding of the double-edged impact of Facebook, how it affects their lives, and the potential direction/growth of Facebook in the future.

Research Questions

What is the role of the media in creating awareness?
What are the uses of Facebook?
What is the impact of Facebook?

Significance

Social networking has been a big hit all around the world and has undeniably become a necessity for people everywhere. Most of us have limited knowledge of the pros and cons of social networking and are among the blind users who take the site for granted. The majority of people do not dwell much on the core meaning of social networking and have thus neglected many important issues, which has caused serious agitation, both positive and negative.

Given all the incidents that have happened around Facebook, it is necessary to alert the public to its double-sided effect. Listing all the advantages and disadvantages of the impact of Facebook will aid in creating public awareness of personal security issues, social networking fraud, etc. Such an effort will certainly minimize the potential harm and also broaden people's horizons regarding the emerging possibilities of Facebook in various fields.

Limitation

The first obstacle faced during the research will be individual bias. As the research is about the double-edged impact of Facebook, predictable and unavoidable incidents will occur in which intense individuals hold tight to their own perspective and defy the opposing view. For example, a Facebook supporter will always stand his/her ground and strongly oppose the negative sides of Facebook while, conversely, a Facebook hater will always discriminate against Facebook despite the advantages it brings. Fair and just opinions from both opposing sides are critical in aiding the accuracy of the research.

The next obstacle will be the sampling issue. Although everyone seems to be able to access the Internet, there will still be a minority who are not familiar with the site. It has to be ensured that the target sample is in alignment with the objective. The age range is important too, since the perspectives and values people hold differ as they age. Penetrating observation and estimation of the target sample are essential in order to obtain an accurate outcome for the research.

Besides, another problem that has to be dealt with is potentially fraudulent or dishonest responses from the target audience. Respondents might want to please the researcher and distort their own opinions, or be afraid to voice their perspectives. Therefore, it is crucial to explain to them that the survey is highly confidential and for educational purposes only before distributing the questionnaire to them.

Literature Review

The Role of Media

According to the journal “Role of media to engage the masses in water debates and Practices” by Shahzad (2008, p. 2), the media is crucial in disseminating news, facilitating development and acting as the agent of change in today's times. The mass media is essential in creating awareness of various issues so as to shape the public's perception and opinion, particularly with reference to environmental issues. The public's escalating attachment to information technologies has aided the growth of the importance of the mass media (Shahzad and Paquistani, 2008, p. 2).

Media coverage does influence social flow. Shahzad and Paquistani (2008, p. 5) argue that through their recent coverage of the poor quality of water and sanitation facilities in public hospitals of the twin cities of Rawalpindi and Islamabad, they were able to prompt the federal health minister and an NGO to take action and solve the above problem. Their media coverage inspired and moved the authorities to install the filtration plant. It shows that the media plays a crucial role in conveying messages to various parties and inspiring change.

Dorji (n.d., p. 1) claims that communication has always been a primary element of society; especially in this era of “satellite communication”, the mass media is indeed a necessity of human existence. Information and knowledge are exchanged through communication, and with advanced technology, distance and other boundaries have been overcome. Various mass media have aided communication among people all around the world, and so globalization.

According to Menon (1981) (cited by Alahari, 1997) in the journal “Attitude towards Mass Media and its role in promoting Environment Consciousness” by Tshering Dorji, the function of the mass media has been upgraded to serve a wider coverage at a faster pace worldwide. This has indeed helped the media to reach a wider audience. Moreover, the media assists people all around the world to interact and connect with each other (Dorji, n.d.).

The same journal by Tshering Dorji provides evidence that the media does influence people's decision making: the more one is exposed to the media, the more he/she will be affected by it. The case study states that the mass media has a potential power to foster a kinship for environmentalism. For instance, the mass media has reinforced the links between environmental preservation and cultural heritage in Bhutan.

Characteristic of Social networking Sites

According to the report ‘Social Computing: Study on the Use and Impact of Online Social Networking’ by Romina Cachia, there are generally six characteristics of SNS: (1) presentation of oneself: a profile page; (2) externalization of data: viewing and sharing information; (3) new ways of community formation: communicating through various digital objects; (4) bottom-up activities: a platform for users to gather around ideas; (5) ease of use: homepages are easy to create and develop; and lastly (6) reorganization of Internet geography: geographic barriers removed. SNS have drastically changed our way of communication.

Differences between Social Networking Sites

Different websites serve different people and attract different types of users despite their similarity as SNS. For example, ‘MySpace’ was generally used by musicians; YouTube links people through videos; Flickr links people through pictures; and so on (Cachia, 2008).

Friendster, on the other hand, was utilized to get in touch with old friends and attracted many young people due to its innovation. MySpace too was created on that platform but soon evolved into more of a music platform. In 2004, Flickr, which became known for its dynamic platform for sharing photos, emerged thanks to the popularity of SNS photo sharing (Cachia, 2008). To date, Facebook, with its mass of active users, is the most successful SNS.

Cachia argues that simplicity and ordered profiles have contributed to Facebook's success. The vast number of applications has also added a fun aspect for Facebook users. The ‘wall’, which allows users to post pictures, comments and links, is another factor behind its popularity with the public.

Use of Facebook

Boogart (2004) argues that the majority use Facebook to stay in touch with high school friends. He found that only 21.1% of people use Facebook to connect with college peers. Boogart (2004) also states that demographics play a role in defining the user and usage: women and students are two large populations of active Facebook users. According to him, the more frequently one uses Facebook, the more connected to people one feels. These trends have further extended students' addiction to Facebook. Simply put, the more one engages on Facebook, the more addicted one perceives oneself to be.

According to the findings of Joinson (2008), people generally use Facebook to keep in touch with friends, for social surveillance, to re-acquire lost contacts, to communicate by writing on walls or sending private messages, to share pictures, for perpetual contact (that is, just to find out people's status), to make new friends, and simply because it is easy to use.

Besides, according to Joinson (2008), social networks serve as social and emotional support, information resources and ties for some people. Lampe et al. (2006) (cited by Joinson, 2008) state that there is a difference between ‘social searching’ and ‘social browsing’ on Facebook: social searching is finding out more information about someone offline, while social browsing is using Facebook to further develop a relationship.

Lampe et al. (2006) (cited by Joinson, 2008) also note that social networking sites such as Facebook serve a surveillance function, whereby users are constantly updated about their friends, their family and the groups to which they belong.

Impact of Facebook (Positive and Negative)

Positive Impact

According to the journal “Lessons from Facebook: The Effect of Social Network Sites on College Students' Social Capital” by Kee, Park and Valenzuela (2008), Facebook can aid in unifying the community. For example, collective action can be called up by common-interest groups. Facebook can aid in fostering trust and norms through the constant exchange of opinions and views among users.

Boogart (2004) indicates that Facebook has helped university administrators to connect with students on campus. It helps students feel less of a stranger on school grounds and constantly helps them keep in touch with one another and with school activities as well.

According to Merritt (2008) (cited by Kee et al., 2008), Facebook is no doubt a social network site that allows users to deliver shared, relevant information and offers a place for exchanging ideas, and it has thus fulfilled many of the promises of civic journalism.

Besides, it is crucial for the media to help citizens stay connected with society, especially in a time of damaged credibility in public institutions, according to Rutigliano (2007) (cited by Kee et al., 2008). Ultimately, it has assisted journalists and traditional news organizations in learning how to reach individuals, especially young adults, through the social network.

Social networking can reduce the relationship gap for those who are kept apart by distance, according to Thompson (2009). It brings them together despite the physical separation.

According to Vocus (n.d.), one has to understand the potential that social media can bring. Social media has removed the possible barriers a company has towards its audience; it aids in spreading the intended information, generating sales leads, gauging customer satisfaction and increasing brand recognition. Social media indeed provides the tools and tactics for a company and proves its value.

Facebook helps in creating social awareness among people: Facebook users are ultimately kept updated by the various news items posted by their friends, making them aware of incidents that have recently occurred and providing them with a better understanding of things (Sagi, 2011). Facebook has helped increase environmental sensitivity among users despite their hectic lifestyles.

Negative Impact

According to the thesis “Uncovering the Social Impacts of Facebook on a College Campus” by Boogart (2004), there is a significant relationship between heavy use of Facebook and lower GPAs among students.

Although Facebook was created to have a positive impact on person-to-person communication, studies show that it can have a harmful effect, according to Thompson (2009). Facebook was first used as a way for like-minded students to share their life experiences and keep in touch, but it has also reduced real connection, the interpersonal connection between people. It is like an escapist experience that has displaced real interaction into an alternative cyber world.

According to the journal “Facebook Games by Design have a Negative Social Impact” (The University of Melbourne, 2010), Facebook games have caused problems such as Internet addiction, decreasing sociability, the mis-education of children and a fall in productivity among people. Facebook games can leave players distracted and addicted as well. The addiction should be added as a mental disorder to the Diagnostic and Statistical Manual of Mental Disorders, according to Block J. (2008) (cited by The University of Melbourne, 2010), an American Journal of Psychiatry editorial writer.

In addition, Facebook games are easily accessible by nature; they run on almost every modern computer, which allows nearly everyone to play them. The games themselves are designed to be addictive too: the in-game reward system leads players to further enhance their desire to accomplish the goal of the game (The University of Melbourne, 2010). Besides, the games have a negative impact on the young; Facebook games such as Texas Poker have indeed encouraged young users to become involved in gambling.

Moreover, according to the same journal (The University of Melbourne, 2010), Facebook games cause a degradation in academic achievement among students, who spend more time on Facebook than revising their work. They also lower the work productivity of office workers.

Williams and Gulati (2008) argue that using Facebook to hold an online campaign has a non-significant, near-zero impact on vote share, but only for those candidates who made little effort to cultivate a social network presence and integrate it into their campaign strategy. Additionally, campaign supporters on Facebook are hard to verify: the number of supporters may be genuine or just a hoax created by the candidates themselves.

Theory Applied

Uses and Gratifications Theory

The Uses and Gratifications Theory is applied in this research. As cited in Chasse and Jenkins (n.d., p. 2), West and Turner (2005) state that Uses and Gratifications Theory implies that “people actively seek out specific media and specific content to generate specific gratifications” and also explains people's involvement with and need for media. According to the article “The role of theories in Uses and Gratifications studies” (Blumler, 1979), there are six types of audience activity.

Firstly, people tend to use media to accomplish their tasks; for example, in this case, most of our respondents use Facebook to connect with their long-lost friends.

The second type is intentionality, where the use of media is decided by the motive. Respondents' uses of Facebook differed from one another; the intentions behind their impulses range from entertainment to socializing, as well as other reasons.

Thirdly, selectivity, whereby the choice of media reflects existing interests. People can always choose media other than Facebook to stay connected with their friends; there must be significance behind every action they take. The public's attachment to Facebook, too, has its own explanation.

Fourthly, influence, whereby people create their own meaning from media content. Different media serve different purposes and effects, and people in turn decide what to absorb from the media and how the media will influence them. Facebook is convenient in every way, whether connecting friends a thousand miles away or exchanging pictures taken shortly before. The benefits of Facebook will ultimately have their own meaning for different individuals.

Fifthly, activity, which means what people use media for. For instance, people listen to the radio for timely news and hit music, watch television for TV dramas, and so on.

Lastly, activeness, meaning the audience's freedom of involvement. The public involves itself more in forums or chat rooms found on the Internet because of the amount of freedom granted in participating in such activities.

Media System Dependency Theory

According to Maxian (2009), Media System Dependency theory in microscopic level is defined as “a relationship in which the capacity of individuals to attain their goals is contingent upon the information resources of the media system.” Those information resources can be categorized as the ability to create and gather, process, and disseminate information.

While in the macro level, the theory is define as the social perspective if the greater increasing people become more dependent on the media, the impact of the media will rise and role of media in society will become more central. Facebook had already been globalization and many of us have been active users itself. The impact of Facebook in our lives can be seen in many prospects. Maxian also proposed that the media is very powerful as it controls every information or resources that needed by the people to achieve their informational goals.

As cited in Maxian (2009), Ball-Rokeach (1998) described that individuals are assumed to function along three main dimensions in the relation between individual and media dependency; which are goal, referent, and intensity.

The goal dimension refers to the motivation of people to achieve their informational goals through the information that provided by the media. For example, people will seek information from media to reduce their stress, entertainment, as well as for self and social understanding. They also use media information as a guide on their daily interaction and situation faced. Facebook serves as many purposes for the public. Many of them claimed that they use Facebook for entertainment, to socialize, to kill times and to escape reality. Those intentions will further push the user to continue to stick on Facebook.

Next is the referent. The referent dimension refers to the number of media a person uses at once to accomplish a goal. For example, a person seeking information for an assignment will use various media such as newspapers, television, and the Internet to complete the task.

Finally, the intensity dimension refers to how intensively an individual uses a medium to achieve an informational goal. For instance, if a media source such as the Internet provides the most and best information, dependency on the Internet will be more intense for achieving that informational goal. This explains the widespread addiction to Facebook: people have discovered what they can do on Facebook and how it satisfies their purposes, which has strengthened their dependency on it.

Research Design

A quantitative research method will be employed in this study. There are two major types of surveys, descriptive and analytical (Wimmer & Dominick, 2006): a descriptive survey explains current conditions and attitudes at the moment, while an analytical survey explains why a situation exists. A descriptive survey will therefore be fully utilized in this study to gauge public awareness of the double-edged effect of Facebook.

The quantitative research method is employed in this study because it is useful to the researcher in every respect. Firstly, it helps the researcher determine the exact situation; the roots of the problem and behavior patterns can be defined. Usage of Facebook and public opinion on it can be evaluated well through quantitative research.

Secondly, it is economical, since the researcher does not have sufficient money to conduct other research methods. As this is a university assignment, the researcher had no financial support for the research. Quantitative research is therefore the most convenient and cost-effective way to help the researcher reach her findings.

Thirdly, the quantitative research method provides better insight into the situation examined; usage of Facebook, the public's media preferences, demographics, and so on can be collected as well.

Fourthly, it is convenient and flexible; it can be conducted anywhere within the target compound. The researcher does not have to deal with complicated situations yet is still able to obtain accurate outcomes.

Lastly, there is a wealth of existing information available for the researcher to use as primary or secondary sources. The positive and negative sides of social networks have been hotly debated by various journalists over the past decade, which has saved the researcher a great deal of time and provided better guidelines for finishing the assignment.

Population and Sampling

According to Wimmer & Dominick (2006), a population is a group of subjects, variables, concepts, or phenomena; in some cases an entire class or group is investigated. The population in this study will be the staff and students of UTAR Kampar.

It is impossible to cover the whole population of UTAR Kampar, so a sample will be selected. A sample is a subset of the population that is representative of the entire population (Wimmer & Dominick, 2006). For this study, 100 respondents aged 18 and above will be randomly selected to complete the questionnaires. Data collection will target several hotspots at UTAR, namely Block C, Block G, the cafeteria, and other crowded places, as well as online.
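The random selection of 100 respondents described above can be sketched in a few lines. The sampling frame and seed here are illustrative assumptions, not data from the study:

```python
import random

# Hypothetical sampling frame: IDs standing in for UTAR Kampar staff and students.
population = [f"respondent_{i}" for i in range(1, 5001)]

# Draw a simple random sample of 100 without replacement,
# mirroring the 100 respondents targeted in this study.
random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, 100)

print(len(sample))       # 100
print(len(set(sample)))  # 100 -> no respondent selected twice
```

Sampling without replacement ensures no respondent is asked to complete the questionnaire twice, which keeps the sample representative of the population.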

Staff and students were chosen as the target respondents because they are familiar with the social networking site Facebook, and their lifestyle and media consumption align with the study. By choosing the right criteria, the researcher believes the study's outcomes will be more accurate.

Instrumentation

A questionnaire will be used in this study because a large number of questions can be asked and answers can be obtained on the spot from a large sample.

The survey in this study consists of three types of questions: (1) open-ended questions, which ask respondents to voice their own opinions, give them freedom in answering, and provide an opportunity for in-depth responses; (2) closed-ended questions, where a list of answers is provided from which respondents choose; (3) Likert-scale questions, where respondents choose an answer based on the strength of their opinion, for example "strongly agree", "neutral", or "disagree".

Overall, the survey consists of five parts. The first part is the demographic section, which asks respondents for basic information such as gender, age, and education background.

The second part addresses the research question on the role of media in creating awareness of the impact of Facebook, allowing the researcher to obtain in-depth information about respondents' perceptions of the media.

The third part investigates the reasons people are attached to Facebook and asks them about the impact Facebook brings.

The fourth part consists of two sections, one on the positive impact and one on the negative impact of Facebook. Respondents who agree that Facebook has a double-sided effect on our lives answer both sections, while those who choose only one kind of impact answer the corresponding section alone. By categorizing respondents in this way, the researcher gains a better understanding of how the majority perceive the impacts of Facebook on their lives.

The fifth part asks respondents for their opinion on alerting the public to the impact of Facebook, and which medium would have the widest effect in creating public awareness of the relevant issues.

Data Collection

Surveys will be collected immediately after respondents complete them. There will be direct interaction between interviewer and interviewee, so the interviewer can help respondents who do not understand the survey.

The researcher will use two methods to collect the surveys: mall (on-the-spot) interviews and Internet surveys. Mall interviews are a quick and inexpensive way to collect personal information (Wimmer & Dominick, 2006); here, the "mall" refers to the crowded places within the university compound during peak hours.

The second method is the Internet survey. According to Wimmer & Dominick (2006), the process is very simple: surveys are sent out and completed via email. This method undeniably saves costs and is environmentally friendly.

The survey was completed in one week, from 14th March 2011 to 20th March 2011.

Data Analysis

According to Wimmer & Dominick (2006), statistics are mathematical methods used to collect, organize, summarize, and analyze data. The researcher will use descriptive statistics to convert the large amount of data into a more understandable and meaningful form, presenting it with pie charts and bar charts together with detailed descriptions and explanations.
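The descriptive step above can be sketched briefly: raw response counts are converted into the percentages that a pie or bar chart displays. The counts below are hypothetical, chosen only to illustrate the conversion, and are not the study's data:

```python
# Hypothetical response counts for one survey question (illustrative only).
counts = {
    "Entertainment & socializing": 24,
    "Peer pressure": 21,
    "Kill time": 16,
    "Follow the trend": 15,
    "Other": 24,
}

# Convert raw counts into percentages of the total responses.
total = sum(counts.values())
percentages = {reason: round(100 * n / total, 1) for reason, n in counts.items()}

for reason, pct in percentages.items():
    print(f"{reason}: {pct}%")
```

These percentage figures are what the researcher would feed into chart labels, so the charts and the written description stay consistent with each other.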

Figure 2.8 shows that 85% of respondents stated that information and news from the media will certainly affect their attitude and behavior, and they gave their reasons: the media's message is very persuasive and usually factual. Moreover, the media helped them gain better insight into things and led them to wider perspectives.

Besides, the success of Shahzad and Paquistani (2008) in using the media to influence the attitudes of the authorities lends solid support to the above statement.

The remaining 15% of respondents said their behavior or attitude would not be changed by the media, because they found the media an unreliable source, and some of them simply did not care about it.

In sum, much of the information gained from the media will inevitably influence the public's behavior.

According to figure 3.0, 24% of respondents used Facebook for both entertainment and socializing. Another 21% claimed they used Facebook because of peer pressure, while 16% used it to kill time. A further 15% of respondents affirmed that they were merely following the trend.

West and Turner (2005) (cited by Chasse and Jenkins, n.d.) stated that the Uses and Gratifications Theory implies that "people actively seek out specific media and specific content to generate specific gratifications" and explains people's involvement with and need for media. Facebook serves different purposes for different people.

The majority of respondents claimed that they used Facebook mainly for entertainment and to socialize. Again, hectic lifestyles have made Facebook spread like wildfire as a form of entertainment among the public. The lack of time to socialize with one another can now be remedied through Facebook.

Discussion and Conclusion

Discussion

The researcher has been able to answer RQ1: the role of media in creating awareness. Referring to the findings (Q4), most people use media for entertainment, followed by surveillance. This is quite normal in society, as people nowadays are busy fulfilling their material lives and do not have ample time for other things. Media has thus provided the public with the best way to be entertained and to stay updated on various matters. As such, people are gradually influenced by the media (Shahzad and Paquistani, 2008, p. 5).

According to the findings of question 5, the media's role in creating awareness lies in constantly educating the public about relevant issues, in this case public awareness of the double-sided effect of Facebook, and this is believed to have the widest effect in alerting the public. This shows that the media has the power to influence and shape public opinion. Shahzad and Paquistani (2008, p. 2) also argue that the media is essential in creating awareness to shape the public's perceptions and opinions.

Covering relevant news to make the public aware is another way for the media to exert influence. Factual truth is more credible and thus motivates the public to change. As stated by Shahzad and Paquistani (2008, p. 5), media coverage influences social change: their coverage of the poor quality of water and sanitation facilities in public hospitals in the twin cities of Rawalpindi and Islamabad drew the attention of the federal health minister and NGOs, who took action and solved the problem.

The findings from question 6 show that most people find news from the media trustworthy and are aware of the latest issues it broadcasts (Q8). Such dependence proves that the media plays an important role in their lives; the media has indeed become a necessity in society (Dorji, n.d., p. 1).

The high level of public dependency on the media, and its effect on society, found in research question 11 further establishes the claims of media dependency theory. The level of dependency directly affects the media's role in society: the more people rely on it, the more powerful the media becomes (Maxian, 2009).

Next, the researcher classified the answers to RQII, on the uses of Facebook, in part III of the questionnaire. People use Facebook for different purposes; ranked from highest to lowest according to the findings of question 13, these are entertainment, socializing, following society's trends, and so on. Uses and gratifications theory states that people seek specific media to generate specific gratifications, which explains their use of and need for media (Chasse and Jenkins, n.d., p. 2).

The majority agree that Facebook has become a necessity in people's lives (Q5). Again, media dependency theory states that the more dominant a medium becomes, the more powerful its impact on society (Maxian, 2009), which aligns with the findings. Many also agreed with the statement that Facebook has a double-sided effect on our lives (Q18).

Part IV of the questionnaire was designed to answer RQIII: what is the impact of Facebook? Both positive and negative impacts are scrutinized thoroughly. The potential social media brings to publicity is wide and effective according to Vocus (n.d.), which is in line with the researcher's findings that respondents agree Facebook aids publicity (Q21). Facebook can reach a larger market regardless of location and finances; it is the most cost-effective way to promote a company or product.

Many agreed that Facebook helps them keep in touch with friends (Q20). According to Thompson (2009), social networking can help close the gap in relationships, and many respondents agreed. Constantly keeping in touch with friends and family who are apart helps smooth relationships and may diminish potential conflict between them.

On the negative side, many agreed that Facebook causes the degradation of studies (Q23). Boogart (2004) also found a significant relationship between heavy use of Facebook and lower GPAs among students. Students spend most of their time on Facebook and therefore neglect their studies; Facebook games have also contributed to this (The University of Melbourne, 2010).

Moreover, most respondents stated that Facebook induces lower work productivity (Q23). The University of Melbourne (2010) suggests that Internet addiction and decreasing sociability contribute to falling productivity.

As people busy themselves building and maintaining relationships in the cyber world, real-world relationships and interaction are consequently hindered. About 22% of respondents agreed with this (Q23). Thompson (2009) asserts that many have displaced real interaction into an alternative cyber world.

Based on the findings of question 24, fake IDs, online fraud, and hacking have also been hotly debated by the public. People no longer feel secure online because they never know the source of their information. Online campaigns are also categorized as non-significant compared to real campaigns (Williams and Gulati, 2008), since the information and the number of supporters are impossible to verify.

The majority of respondents strongly agree that the media should alert the public to the double-sided effect Facebook brings. They know how Facebook can help one's life as well as destroy it. Therefore, media such as newspapers, TV, radio, and Internet campaigns must play their role in alerting the public.

Conclusion

Assessing the importance of the media in raising public awareness of the double-edged effect of Facebook is the primary objective of this study. The research also aims to determine the media's role in society and to understand the impacts Facebook has on it. A total of 28 survey questions on public perceptions of the media's role and the impacts of Facebook were studied. The research shows that the media mainly serves as entertainment, followed by surveillance, social interaction, and personal identity.

The researcher found that most respondents consider the media a trustworthy source and rely on it strongly. They trusted information from the Internet more than news from newspapers, believing that newspapers favor politicians. News and information from the media hold great power, since they can alter, shape, and influence public decision-making.

In addition, the researcher learned that most respondents are active Facebook users who agree that Facebook has become a necessity in people's lives, and they are aware of the double-sided effect it brings.

From the survey questionnaires, the researcher obtained the percentages for the various positive and negative impacts of Facebook, which informed the researcher about the public's perceptions of Facebook.

Meanwhile, the researcher found that negative news about Facebook affects its use among Facebook users. This shows that the majority of Facebook users are still smart users rather than blind ones.

Lastly, the researcher gained an understanding of public opinion on whether the media should alert the public to the double-sided effect of Facebook. The public agrees that the media plays an extremely important role in creating awareness, and according to the findings, the Internet is the most effective medium for doing so.

In short, this study is helpful in conveying important information about the impacts of Facebook. As a result, it has given the researcher a clearer understanding of the importance of the media in creating awareness.


White and black box Project software testing

Introduction

There are two main ways of testing software: white box testing and black box testing. White box testing looks under the covers and into the details of the whole software, enabling us to see what is happening inside it. Black box testing, on the other hand, looks only at the available inputs to the software and the expected outputs that should result from each input, without concern for the software's inner workings. The difference thus lies in the area each approach focuses on. (www.testplant.com, 7th Feb 2011)
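The black-box idea above can be illustrated with a minimal sketch: the test checks only inputs against expected outputs, never inspecting the function's internals. The function under test and its validation rules are assumptions invented for illustration, not part of the project's actual code:

```python
# Hypothetical function under test: validates a survey answer.
def validate_answer(answer: str) -> bool:
    """Accept a non-empty answer of at most 200 characters."""
    return 0 < len(answer.strip()) <= 200

# Black-box test: pairs of (input, expected output) only;
# we never look inside validate_answer.
cases = [
    ("Yes", True),
    ("   ", False),      # whitespace only -> rejected
    ("", False),         # empty -> rejected
    ("x" * 200, True),   # boundary: exactly 200 chars accepted
    ("x" * 201, False),  # boundary: 201 chars rejected
]

for given, expected in cases:
    assert validate_answer(given) == expected
print("all black-box cases passed")
```

Note how the boundary cases (exactly at and just over the limit) come straight from the specification, which is exactly the information a black-box tester has available.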

Having gone through black box testing and white box testing, we as a testing team decided to use black box testing, as it is more appropriate for our software, which involves security testing and usability testing. With two people on the testing team, it was easy to divide the task: I took usability testing while my other group mate did security testing.

As soon as we got the final copy of the working software, I checked whether the whole application works as specified. We first had to install a program called WampServer to run it. I then ran the software and checked whether the web design is user friendly, which contributes to ease of use for our customers.

The index page looked as below:

And the survey page, which is the main page, looked like the one below:

Page 1:

Page 2:

We also include details about data protection to make sure the software we created is copyrighted, covering items such as the information we collect from users, IP addresses and cookies, uses made of the information, users' rights, data security, and access to information. We also have a page where users or customers can learn about us.

We also provide features for disabled users, who click "High Visibility" on the index page to access them. This is mainly for people suffering from colour blindness or partial visual impairment, so we made the font bigger and used just three colours for the whole website, which looked as below:

The above snapshot shows the first page for disabled users. As can be seen, it has little colour contrast and the fonts are bigger. We used white and yellow text on a blue background to avoid visual glare, because our research indicates that people with low vision see these colours more comfortably.

Then we have the survey page, whose layout is similar to the one above.

This is the survey page for disabled users. We used bigger fonts throughout the webpage and maintained a plain text format, meaning we removed all unnecessary decorations and kept it simple. Both the "Normal" and "High Visibility" web pages contain exactly the same information in the same format; the only difference is in graphics, to provide better visibility.

After testing the usability of the software and modifying it until it was satisfactory, it was time to conduct usability testing with other users. Before using the usability testing method, we need to understand what usability means. It does not simply mean having software with no errors or good features; it means helping customers or users use the software, or any other product, quickly and easily to achieve their goal and accomplish their tasks. In this project, our task was to build a micro questionnaire data gatherer consisting of a website and a database to store the information. The website displays an introductory descriptive paragraph and four related questions, where one is the central question and the other three are associated questions whose answers build up a composite answer that validates (or not) the answer given to the central question. Users go to the index page, choose which layout they prefer, and answer the questions.

I chose two methods of usability testing for our software: user testing and expert review.

User testing:

In this method of testing, I chose 5 different users: three GCSE-level students and two university students. The software was ready on my laptop, and I told the users what they had to do with it. I also requested feedback about the website itself. They all went through the website, looking at the layouts, fonts, colours, and so on. The test was done individually in different places. They then read the questions and answered them according to their own thinking. One user wore glasses, so I asked him, if he did not mind, to take them off and use the "High Visibility" layout, which he did. His feedback was quite good as well: he said the colours and font sizes were a good match, and the contrast with the background made the text easy to read. The users also said the website's navigation was easy.

Expert Review:

In this method we involved an expert to inspect the software. We asked him to examine the whole website and give us feedback. I started the software and he then examined it thoroughly.

Feedback from user testing:

The layouts of the website were well presented.

The contrasts of the colour were chosen well.

The colours for the high visibility were chosen well.

It was user friendly.

It was easy to learn how to use it.

Navigations of the software were simple.

It has an appealing layout.

It provided objective information to the users.

Storing of data was effective and efficient.

References:
http://www.testplant.com/download_files/BB_vs_WB_Testing.pdf, 7th Feb 2011
Usability Inspection Methods, Jakob Nielsen (SunSoft) and Robert L. Mack (IBM T.J. Watson Research Center)
Interaction Design: Beyond Human-Computer Interaction, second edition, John Wiley and Sons, Ltd
Usability Testing and Research, Carol M. Barnum, Southern Polytechnic State University


Critical study of software copyright and piracy in China

ABSTRACT

This study aims to demonstrate Chinese students' attitudes towards software copyright and piracy in China. The paper selected a small group of Chinese students to evaluate their perceptions, and data were collected through interviews within this group. Some factors have a significant impact on piracy; therefore, to test their validity and add further elements, the findings and discussion chapter will present new items that play a role in software piracy. This research has some limitations, such as the sample size and time constraints; hence, its results cannot be fully generalized.

Chapter I

Introduction

In the past few years, there has been doubt among Chinese software users about the legality of using pirated software. In addition to software users, anyone involved in software copyright and piracy issues is confronted with the question of whether the act of piracy is illegal or not (Croix & Konan, 2002). Answering this question requires considering the perception of each individual Chinese software user regarding software copyright and piracy, since users' awareness in decision-making has a direct impact on their piracy behaviour in terms of using pirated software (Liang & Yan, 2005).

To explore how Chinese software users understand software copyright and piracy, it is crucial to understand the function of copyright and piracy protection law. In fact, copyright and piracy protection law is a subset of intellectual property (IP) law, which has been considerably enhanced in recent years. According to a definition provided by the World Intellectual Property Organization (WIPO, 2006), "intellectual property" can be anything created by the human mind, such as inventions and literary and artistic works. On the other side, software piracy is the opposite of copyright law, and it has increased along with the growing popularity of the Internet since 1999 (Katz, 2005).

Currently, China is under the World Trade Organisation (WTO) agreement, which compels China to be transparent about intellectual property protection in terms of laws, regulations, administrative rules, and judicial decisions (Panitchpakdi & Clifford, 2002). This study therefore places emphasis on Chinese students' various perceptions of software copyright and piracy. The next chapter reviews the existing literature and evaluates different points of view. The methodology chapter then describes the method of data collection; the findings and discussion chapter presents and analyses the information to answer the research question; and finally the conclusion is presented.

CHAPTER II

Literature review

2.1 China`s background in copyright law

From a historical point of view, copyright law first came into existence in ancient China, and the creation of copyright in China was initiated by the invention of printing by Bi Sheng in AD 1042. Compared to European countries, the technique of printing was developed centuries earlier in China (Mertha, 2005). According to Mertha (2005), the first draft of authors' rights was published in 1910, and a number of punishments for unapproved use were established. WIPO (2006) states that after the Cultural Revolution, in 1979, China entered a new stage of a modern legal system which contained the copyright structure and was an important step in connecting to the outside world. Subsequently, as WIPO (2006) asserts, "China has joined the world intellectual property organization in 1980".

Bently & Sherman (2001) explained that copyright was originally intended primarily to protect authors, artists, and composers by providing a legal foundation for the innumerable transactions through which they are paid for their work. Croix & Konan (2002) explained that the first aim of copyright law is to secure authors' rights against illegal abuse. The World Intellectual Property Organization (WIPO, 2006) defines "copyright" from a legal point of view as maintaining a creator's rights and securing his/her "work". The term "work" is used in intellectual property law for various creations such as novels, poems, plays, databases, and computer programs.

Generally, copyright laws are enforced differently in different countries around the world (Marron & Steel, 2000). For instance, European countries and North America have tough copyright laws and enforce them determinedly. Meanwhile, some countries have established copyright laws but their courts are unwilling to enforce them. Furthermore, there are developing countries whose principles are based on Islamic patterns and which do not have adequate copyright laws (Marron & Steel, 2000).

From a worldwide viewpoint, China's entry into the global network has generated a massive capacity for sharing and observing information through new approaches, especially the Internet (Croix & Konan, 2002). However, in recent years the international business community has voiced doubt regarding China's failure to limit intellectual property infringement (Mertha, 2005).

2.2 Globalization and software piracy

Bently and Sherman (2001) assert that the original concept of copyright was bounded by the borders of the state; thus, the security of copyright protection is endangered when it operates beyond the country and crosses into the international arena. That fence was broken down by the development of globalization, which established copyright as a borderless subject in international trade. Consequently, developed countries realized it was crucial to alter the enforcement of copyright protection across national borders. Due to its unexpected economic growth, China has become the main target of global copyright enforcement efforts, such as those of the US and the European Union (Halbert, 1997).

The IIPA (2006) declared that the progression of globalization has transformed software copyright and piracy from an internal issue into a universal matter among countries. Furthermore, because China has faced a huge amount of piracy, it has been constantly criticized by other countries for its lack of enforcement and ability to protect software copyright.

2.3 Culture and software piracy

Mum (2003) argued that cultural difference is one of the most significant aspects to be considered in China's software copyright and piracy, and that it plays a main role in the development of copyright in China. From the Western perspective, individual freedom and benefit are often emphasized over publicly shared benefit. In contrast, from the traditional Chinese point of view, individuals are part of society and are obliged to present their creations and innovations to the community (Mum, 2003). Considering these two viewpoints, it can be stated that Eastern minds differ greatly from Western minds: in Western society intellectual theft is not appreciated, while it is a new concept to many Chinese. In addition, Yu (2001) pointed out that in traditional Chinese culture copying is regarded as an honourable and necessary act.

Husted (2000) stated that the rate of piracy in China has a strong connection with the cultural dichotomy of individualism and collectivism. In addition, Marron and Steel (2000) found that countries whose principles are based on an individualistic culture have lower piracy rates than countries with a collectivistic culture. According to Wang, Zhang, and Ouyang (2005a), the correlation between purchasing pirated software and culture suggests that people in China are more likely to engage in the theft of software programs or the sharing of intellectual property. In fact, collectivist culture may be one of the major factors behind the prevalence of software piracy in China.

2.4 The Chinese government, Communist ideology in software piracy

Croix & Konan (2002) argued that China's government has been making considerable attempts to change legislation and the policy-making process to prevent piracy. For instance, the Chinese government closed 9 of 18 factories that were producing pirated software and selling illegal CDs on the domestic market. However, despite the government's considerable reforms in implementing copyright enforcement, some domestic factors make the matter worse (Mertha, 2005).

Lu and Weber (2008) found that the Chinese government should consider the economic and political environments of the public and private dimensions of software copyright in order to address external and internal challenges. In addition, Communist philosophy, whose main principle is that everything belongs to society and the people rather than to private owners, has existed in China since 1949. Consequently, Communist thinking on copyright is fundamentally compatible with traditional Chinese culture, as the two reinforce each other in shaping Chinese people's attitudes toward weaker copyright protection.

Overall, the literature review includes variety of research areas and identifies a group of structural factors relating to software copyright and piracy in China. Meanwhile, the literature review has some limitation, for example, it uncovers the behaviour element which is crucial in act of piracy, but it will be covered in finding and discussion chapter by interviewing from Chinese student.

CHAPTER III

Methodology

Generally speaking, human beings have always been concerned about what is happening around them. In order to understand their surroundings, they began to search for answers to their questions, an activity which came to be called research. According to Cohen et al. (2007), research is a process of planning, executing and investigating in order to find answers to specific questions. In addition, obtaining reliable answers requires investigation in a systematic manner, which also makes the work easier for the reader to understand. Achieving these ends requires research methods.

In this study, the research philosophy is interpretivism, which Bryman & Bell (2003) defined as an epistemological position that requires the social scientist to grasp the subjective meaning of social action. Furthermore, an inductive approach has been undertaken in order to understand the nature of the problem, which enables the researcher to gather more information about the research topic (Bryman & Bell, 2003). Given this philosophy and approach, a mono-method, qualitative design was chosen, which by its nature offers great advantages for this research: the inductive approach requires a detailed, in-depth exploration of the data. Denzin & Lincoln (2000) believed that this method enables the researcher to explain, translate and otherwise come to terms with meaning.

This study carries out both primary and secondary research. The primary research consists of semi-structured interviews with a group of Chinese students within the age group of 22-28. The semi-structured interview was designed around relevant questions intended to answer the research question; further information is given in Appendix 1. The interview was chosen as the method for primary research because it is one of the means by which the human world may be explored, although it is the world of beliefs and meanings, not of actions, that is clarified by interview research. Bryman & Bell (2003) pointed out that interviewing provides a wide range of data collection, and thus helps the researcher find out how people regard situations from their own point of view.

In this research, the emphasis is on the area of intellectual property law, a field which is widely acknowledged to be extremely complicated and which cannot be expressed in closed questions. Hence, semi-structured interviews were chosen for their great benefit to this kind of study; they are based on open-ended questions. Bryman & Bell (2003) argued that this approach can be used to gain different comments and offers the interviewer the chance to investigate an issue or service in depth; it also gives the interviewee an opportunity to share general views or detailed opinions. Apart from these benefits, the method has some disadvantages: it requires interviewing skill and the ability to analyse the resulting data; it must be carried out with a sufficiently large group of people to allow general comparison; it is time consuming; and the researcher must be able to ensure confidentiality (Saunders et al., 2003).

It is crucial to mention that ethical concerns emerge as soon as research planning starts. As Blumberg et al., cited in Saunders et al. (2007), argued, "ethics refers to the moral principles, norms or standards of behaviour that guide moral choices about our relationships with others". Furthermore, in order to ensure confidentiality, this research records only the age of the Chinese students, not their names or institutions (see Appendix 2).

To answer the research question, Chinese students' attitudes are examined in order to explore how they perceive the issue of software copyright and piracy. With the aim of giving interviewees a sense of security, it is mentioned at the start of each interview that the process will be recorded, to ensure that crucial information is not omitted from the note taking.

Overall, the qualitative approach with semi-structured interviews provides a suitable occasion for collecting a great deal of information from Chinese students regarding their point of view on software copyright and piracy in their country. It is expected that 10 interviews will be conducted; the sample is defined only by age, and the interviews will be carried out in person by the researcher on site at the University of Sheffield. Information from the interviews will be classified into codes and categories derived from the research question and the literature.

CHAPTER IV

Findings and Discussion

4.1 Findings

This chapter aims to answer the research question about Chinese students' attitudes towards software copyright and piracy. To answer this question, the research focuses on a smaller community of Chinese users, namely Chinese students, whose perceptions of software copyright and piracy in China are examined. As mentioned in the methodology chapter, in order to analyse the data this research categorises and codes the data gathered from the interviews, then examines the findings and discussion, and finally presents the conclusion.

The findings are categorised and coded according to what the interviewees mentioned regarding software copyright and piracy. For example, some people expressed the view that copyrighted software is too expensive, others pointed out that people on an average income in China cannot afford copyrighted products, and one quarter of participants said that the actual value of copyrighted products does not justify their high price. This category was therefore named cost, with three subcategories (price, income, value), and coded under software products. Secondly, half of the interviewees mentioned that pirated software is accessible and can be used without any limitation, whereas original software comes with restrictions on usage; this opinion was categorised under the usability and accessibility of software products.

Thirdly, the findings show that 9 of the 10 participants disagreed that culture is an element which has an impact on software copyright and piracy. By contrast, in the literature review some authors explained that culture has a considerable relationship with software copyright and piracy. Furthermore, some interviewees expressed the view that certain uses, for example for education or gaining knowledge, should not be considered piracy, and likewise that using software for a personal need, without any intention of illegal use, is not piracy. Finally, the item generally agreed on by participants was China's government: they stated that the main power which can enforce copyright law and prevent piracy is the government. The first of these themes was categorised as social development and cultural effect, the latter was grouped as Chinese government, and these two parts were coded under China development. To clarify the structure, the complete coding frame is presented in Appendix 3.

4.1.1 Software products: cost, usability and accessibility

Cost, usability and accessibility play critical roles in Chinese students' decisions about whether or not to use pirated software. Across the discussions, cost is consistently mentioned as a reason to choose pirated software, and three subject matters emerged from the participants' viewpoints. The first was the cost of buying copyrighted software: participants stated that buying an original copy of Windows XP in China is very expensive. The second was that, because of the high prices of software products, most people, especially students and low-income earners, cannot afford them.

The third was that copyrighted software is not good value for its price. For instance, installing Windows XP also requires anti-virus software to protect it, which costs extra money, while pirated software functions the same as the original; thus the original is not seen as good value. A counter-argument on cost can be made by comparing the price of software products with other spending: since participants tolerate other expenses, there is no clear reason to reject software prices.

4.1.2 China development: Social development and Chinese government

As mentioned in the literature review chapter, two further factors, government policy and culture, also emerged from evaluating Chinese students' assumptions about software copyright and piracy. However, the participants held conflicting views on these items and gave only short words or sentences, which cannot provide substantial information about such significant issues as government policy and cultural effect. In order to generalise their viewpoint, it can be stated that after the Cultural Revolution in China and the country's accession to the World Intellectual Property Organization, there have been significant changes in education, technology and the level of science throughout China (WIPO, 2006).

Generally speaking, interviewees pointed out that the Chinese government has the principal power to prevent software piracy and to change policy towards the enforcement of copyright law. This raises the question of why, despite having that power, the Chinese government does not really want to stop piracy. Participants argued that, because of the size of the population and because China is a developing country, the government and authorities feel a lack of knowledge among the people; since they want to increase literacy and awareness, they are not as strict as developed countries.

4.2 Discussion

With the coding frame established, this study applies axial coding to make connections between the categories and subcategories. First, the participants' perception develops from the issue coded as software products, with its three subcategories. In this category, users' resistance focused on copyrighted software's high cost and poor usability and accessibility; in contrast, users are likely to choose pirated software, which has low cost and good usability and accessibility. On the other hand, interviewees who support software copyright law refuse to accept cost and similar excuses for using pirated software. Second, in the category of China development, which was analysed in general terms, it can be noted that government policy in China is trying to educate people and boost their knowledge, but this cannot be a reason to use pirated software or to devalue authors' work.

In other words, participants believe that the Chinese government does not really want to limit piracy. From their discussion it can be inferred that the government has a considerable interest in piracy with regard to the market economy.

This study has found that, in general, Chinese students' attitudes towards software copyright and piracy are shaped mainly by cost and by the accessibility of pirated software. This paper also has some limitations which reduce its validity: the sample was not large enough to evaluate and examine other perceptions, and the time available for the research was limited. It is suggested that copyright owners should lower the retail prices of their products to a level that Chinese users can afford.

CHAPTER V

Conclusion

This study set out to examine how aware Chinese students are of software copyright and piracy in China. A small sample of Chinese students was selected to reveal additional elements, beyond those in the literature review, which play a significant role in the use of pirated software. The data were analysed following the approach of Bryman & Bell (2003). Overall, it can be concluded that China has started to play a more and more important role in today's world and that its development cannot easily be stopped or reversed. Therefore, like developed countries, China should redesign and change its software copyright law in order to minimise the amount of piracy in the world.

References:
Bently, L., & Sherman, B. (2001). Intellectual property law. New York: Oxford University Press.
Blaxter, L., Hughes, C., & Tight, M. (2001). How to research (2nd ed.). Buckingham: Open University Press.
Bryman, A., & Bell, E. (2003). Business research methods. New York: Oxford University Press.
Business Software Alliance. (2004). BSA and IDC global software piracy study. Retrieved January 28, 2005, from http://www.bsa.org/China/globalstudy
Cohen, L., Manion, L., & Morrison, K. (2007). Research methods in education (6th ed.). London: Routledge.
Croix, S. J., & Konan, D. E. (2002). Intellectual property rights in China: The changing political economy of Chinese-American interests. The World Economy, 25, 759-788.
Denzin, N. K., & Lincoln, Y. S. (Eds.). (2000). Handbook of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.
Gubrium, J. F., & Holstein, J. A. (1997). The new language of qualitative method. New York: Oxford University Press.
Halbert, D. (1997). Intellectual property piracy: The narrative construction of deviance. International Journal for Semiotics of Law, X (28), 55-78.
International Intellectual Property Alliance. (2006). The 2006 special 301 report: People’s Republic of China. Retrieved August 28, 2006, from IIPA Web site: http://www.iipa.com/rbc/2006/2006SPEC301PRC.pdf
Katz, A. (2005). A network effects perspective: On software piracy. University of Toronto Law Journal, 55, 155-160.
Lindlof, T. R., & Taylor, B. C. (2002). Qualitative communication research methods (2nd ed.). London: Sage.
Lu, J., & Weber, I. (2008). Chinese government and software copyright: Manipulating the boundaries between public and private. International Journal of Communication, 1, 81-99.
Marron, D. B., & Steel, D. G. (2000). Which countries protect intellectual property? The case of software piracy. Economic Inquiry, 38(2), 159-174.
Mertha, A. (2005). The politics of piracy: Intellectual property in contemporary China. Ithaca, NY: Cornell University Press.
Mum, S. H. (2003). A new approach to U. S. copyright policy against piracy in China. Symposium conducted at the 53rd Annual Convention of the International Communication Association, San Diego, California, United States.
Nicol, C. (Ed.). (2003). ICT policy: A beginner’s handbook. Johannesburg, South Africa: Association for Progressive Communication.
Panitchpakdi, S., & Clifford, M. L. (2002). China and the WTO: Changing China, changing world trade. Singapore: John Wiley & Sons (Asia).
Saunders, M., Lewis, P., & Thornhill, A. (2003). Research methods for business students (3rd ed.). Pearson Education Limited.
Wang, F., Zhang, H. & Ouyang, M. (2005a). Software piracy and ethical decision making behaviour of Chinese consumers. Journal of Comparative International Management, 8(2), 43-56.
Wang, F., Zhang, H., Zang, H., & Ouyang, M. (2005b). Purchasing pirated software: An initial examination of Chinese consumers. Journal of Consumer Marketing, 22(6), 340-351.
World Intellectual Property Organization. (n.d.). Copyright and related rights. Retrieved October 14, 2006, from World Intellectual Property Organization Web site: http://www.wipo.int/about-ip/en/copyright.html
Yu, P. K. (2001). Piracy, prejudice, and perspectives: An attempt to use Shakespeare to reconfigure the U.S.-China intellectual property debate. Working Paper Series, 38, Jacob Burns Institute for Advanced Legal Studies. Retrieved October 13, 2006, from http://papers.ssrn.com/paper.taf?abstract_id=262530


Scheduling a layout for a flexible manufacturing system (FMS) using the ARENA software

Chapter One:
Introduction

What is Flexible Manufacturing System (FMS)?

A Flexible Manufacturing System (FMS) is a manufacturing line or process made flexible in order to shorten the lead time to produce a product, so that the product can be delivered to the customer on time and at lower cost. The system has to be approachable, so that its results and effects can be seen and are useful for the manufacturing line.

An industrial Flexible Manufacturing System (FMS) consists of robots, computer-controlled machines, computer numerical control (CNC) machines, instrumentation devices, computers and sensors. The use of robots in manufacturing industries provides a variety of benefits, ranging from high utilisation to high production volume. Each robotic cell is located along a material handling system such as a conveyor or automated guided vehicle (AGV). The production of each part or work-piece requires a different combination of manufacturing nodes, and the movement of parts from one node to another is done through the material handling system. At the end of processing, finished parts are routed to an automatic inspection node and subsequently unloaded from the Flexible Manufacturing System. FMSs provide the efficiency, flexibility and adaptability that are lacking in traditional manufacturing systems: they are designed to combine the advantages of mass production systems (efficiency) and job shops (flexibility) in one system (Tunali, 1995).

FMS is powerful because of its ability to produce different types of quality products in any order, in small batch sizes, without time-consuming machine setup changes. The benefits and drawbacks of implementing FMS are shown in Table 2. Although a large investment, long planning and development time, and automated controllers such as CNC machines are required, most manufacturers attempt to implement FMS in order to compete with other manufacturers. Other operational objectives, such as maximising flexibility, sustainability, reactivity (the ability to handle contingencies), availability and productivity, should also be taken into account, in particular for FMSs designed for batch jobs and small and medium-sized series in addition to mass production volumes. Flexibility is a particularly important design objective, implying that the same production line can be used for different products, either sequentially or simultaneously, without major transformation costs.

Benefits                               Drawbacks
Reduction in labour costs              Very expensive
Requires less space                    Complicated manufacturing system
Increases efficiency                   Substantial pre-planning activity
Increases productivity                 Limited adaptation to product changes
Improves the quality of products
Manufacturing lead time is less
Reduces work-in-progress inventory

Table 2: Benefits and drawbacks of FMS

What is Simulation?

Simulation represents the physical processes of a system in a virtual computer model whose behaviour resembles the real scenario as closely as possible. It is a very useful tool of increasing importance in today's advanced industrial world. Simulation refers to a broad collection of methods and applications that virtually imitate real-life situations, or situations which are yet to become real. The more accurate and effective a simulation model is, the more realistic are the results obtained and the predictions drawn from it.

In fact, "simulation" can be an extremely general term, since the idea applies across many fields, industries and applications. These days simulation is more popular and powerful than ever, since computers and software are better than ever. Computer simulation deals with models of systems. A system is a facility or process, either actual or planned, such as:

i) A manufacturing plant with machines, people, transport devices, conveyor belts and storage space.

ii) A bank with different customers, servers, and facilities such as teller windows, automated teller machines (ATMs), loan desks, and safety deposit boxes.

iii) An airport with departing passengers checking in, going through security, going to the departure gate, and boarding; departing flights contending for push-back tugs and runway slots; arriving flights contending for runways, gates, and arrival crew; arriving passengers moving to baggage claim and waiting for their bags; and the baggage-handling system dealing with delays, security issues, and equipment failures.

iv) An emergency facility in a hospital, including personnel, rooms, equipment, supplies, and patient transport.

v) A central insurance claims office where a great deal of paperwork is received, reviewed, copied, filed, and mailed by people and machines.
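To make the idea concrete, the queueing behaviour common to all of the systems above can be sketched in a few lines of code. This is a minimal hand-rolled single-machine model in Python, not ARENA; the arrival and service parameters are arbitrary illustrative values.

```python
import random

def simulate(n_parts=1000, mean_interarrival=10.0, mean_service=8.0, seed=42):
    """Minimal discrete-event sketch of one machine with a FIFO queue.

    Parts arrive with exponential interarrival times and are served one
    at a time; returns the average flow time (arrival to departure).
    """
    rng = random.Random(seed)
    clock = 0.0            # arrival time of the current part
    machine_free_at = 0.0  # time at which the machine next becomes idle
    total_flow = 0.0
    for _ in range(n_parts):
        clock += rng.expovariate(1.0 / mean_interarrival)
        start = max(clock, machine_free_at)               # wait if busy
        machine_free_at = start + rng.expovariate(1.0 / mean_service)
        total_flow += machine_free_at - clock             # depart minus arrive
    return total_flow / n_parts

avg_flow = simulate()
```

Even this toy model shows the characteristic queueing effect: at roughly 80% utilisation the average flow time comfortably exceeds the 8-minute service mean, because parts spend time waiting for the machine.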

Why use Simulation?

In an effort to reduce costs and time, simulation is one of the most powerful analysis tools available for the design and operation of complex processes or systems. A computer simulation can show how effectively a machine can run, with the same results, without the high capital investment and long time needed to build an actual model on the floor plan. Weaknesses and problems that may occur in a workstation, such as material-handling issues, idle machines and bottlenecks, can be revealed by simulation. In addition, improvements to the production layout can easily be derived from the simulation output in order to meet the operating target. Simulation also helps reduce costs, avoid catastrophes and improve system performance. Furthermore, making changes to a manufacturing plant in real life is very expensive, and performance after a particular change is not guaranteed; hence it is always better to simulate changes and compare the results before implementing them.

Expensive equipment and complicated designs can be modelled using computer software to detect any inconsistency or possible failure mode. This reduces the associated costs significantly, as it helps avoid the expensive and potentially wasted cost of bad designs or wrong equipment. An example would be the complex simulation models created by aviation companies such as Airbus or Boeing: the durability and lifetime of a plane can be modelled in order to evaluate the fuselage, engine performance and other parts under different environments and situations.

In addition, some real product trials might be impossible because they would consume the single possible use of the product. For example, a bomb or missile can only be used once, so it is not possible to test every product of this type by trialling it. Simulation plays a key role here by modelling the effect and influence of such products while avoiding destructive and expensive trials. Furthermore, simulation can be used to improve the current process of a system; in other words, it may be possible to optimise and increase the efficiency of an already running system by implementing changes suggested by engineers, managers, operators or any other personnel involved.

Having obtained an accurate model, those suggested changes can first be incorporated in the model to investigate and analyse their consequences and to determine whether they would produce the desired effect. Upon validation of the results, an educated decision, backed up by facts, can be taken. Simulation is therefore a tool that management can use to aid decision-making, especially where costly, heavy investments are involved.

Further goals of the simulation system are to simulate different production tasks on a given FMS and, finally, to facilitate the evaluation and comparison of different FMS designs for the same tasks; this last target requires building several new simulation models (Kovacs, 1997). One of the most challenging issues faced by today's manufacturing industry is heavy global competition: in order to compete in an international market, manufacturers have to produce varieties of products rapidly and flexibly to meet ever-increasing market demand.

Project Scope

The purpose of this project is to develop and model a Flexible Manufacturing System (FMS) layout using the ARENA software. The author has to develop an FMS model capable of producing simulations for different scheduling scenarios. To start modelling an FMS in ARENA, the author had to put a great deal of effort into research through different kinds of media, such as the internet, journals, magazines and case studies, in order to understand the fundamental concepts and techniques of FMS. After this research, the author builds and simulates the model in ARENA, analyses the simulation output, and makes recommendations. Last but not least, the model results are collected and presented in the project report.

Project Aim

The aim of this project is to adopt an existing FMS layout, identify its problems or weaknesses, and make improvements. To this end, the author found a journal paper containing an existing FMS layout, together with the route and processing time for each part and component. This information is used to generate the simulation in ARENA, to monitor performance measures such as total processing time and waiting time, and to improve them.

Learning ARENA simulation Software

After a few weeks of reading about and learning the ARENA simulation software, the author understood the concepts and methodology of simulation using ARENA. In addition, the author absorbed basic project planning and analysis ideas along with the modelling concepts, that is, how actual simulation projects ought to proceed. Besides that, the author became familiar with the icons and objects to be used and learned how to generate animation for the simulation. Furthermore, the author learned a variety of expressions and distribution functions, such as the normal, exponential, triangular, discrete and Poisson distributions.
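The distributions named above all have close analogues in Python's standard library, which makes for a quick illustration outside ARENA. The parameter values below are arbitrary, and since the standard library has no Poisson sampler, one is sketched using Knuth's multiplication method.

```python
import math
import random

rng = random.Random(7)

normal_sample = rng.normalvariate(10, 2)    # normal: mean 10, std dev 2
expo_sample = rng.expovariate(1 / 5)        # exponential: mean 5
tria_sample = rng.triangular(2, 8, 4)       # triangular: low 2, high 8, mode 4
disc_sample = rng.choices(["A", "B"], [0.7, 0.3])[0]  # discrete two-point

def poisson(lam, rng=rng):
    """Poisson-distributed integer via Knuth's multiplication method."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

pois_sample = poisson(3.0)
```

Note that the argument order differs between tools: Python's `random.triangular` takes (low, high, mode), so the ARENA expressions the author mentions should not be copied across verbatim.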

Project Objectives

The general objective of this project is to schedule a layout for a flexible manufacturing system (FMS) using the ARENA software. The layout must be able to achieve and match the FMS requirements.

The main objective can be divided into sub-objective as stated below:

To prepare a literature review and understanding of the fundamental concepts and techniques used for the Flexible Manufacturing System (FMS).
To learn ARENA software in order to simulate the FMS model.
To select a suitable FMS layout to model.
To plan and develop the simulation of the FMS model.
To run the model with different data and arrangements, and to review and improve the efficiency and effectiveness of the FMS model.
To analyze the results obtained from the FMS model.
To reproduce the FMS model for improvement.
To re-analyze and finalize the findings and conclusions.

Report Structure

This report is divided into six chapters, a reference list, and appendices. The six chapters consist of the introduction, literature review, experimental technique, results and discussion, conclusion, and recommendations for further work.

In Chapter 1, Introduction, the author introduces what this project is about, the objectives of the project, and the organisation of the dissertation.

Chapter 2, Literature Review, explains what FMS is, the history of FMS, the various types of FMS, the components of FMS, and the benefits and limitations of FMS. The process and examples of FMS applications are also included.

Chapter 3 highlights simulation and the ARENA software. The advantages and disadvantages of simulation are discussed in this chapter, and the need for simulation in a manufacturing environment is also covered.

Chapter 4 focuses on how the simulation model is built using the ARENA software. The input parameters for the simulation runs and the model's features are included.

Chapter 5 presents the analysis of the results generated from the simulation model built in the ARENA software. The results of the three scenarios are then compared.

Chapter 6 is the final chapter of this project, in which the author discusses the problems encountered during the simulation, draws conclusions about the whole project, and gives recommendations for future work.

Chapter Two:

Literature Review

Chapter two reflects on topics related to simulation and lean manufacturing which have been pioneered by previous academics and industrialists. It covers the seven sources of waste, JIT (just-in-time) manufacturing, kanban, lean manufacturing, types of production lines and scheduling environments, simulation, and finally some of the distribution functions available in the simulation model.

Figure 1: Original Layout Model of FMS

This study is based on a model of a hypothetical FMS. Referring to Figure 1, it can be observed that the FMS consists of five multi-purpose CNC machines, each with automatic tool-changing capability and a limited input buffer. Since each machine is assumed to have ample capacity to store the required tools, tool availability is not considered in developing the model. An important feature of the model is that the machines are not continuously available: they can suffer unexpected breakdowns. The system is capable of processing more than one part type simultaneously, and each part type is associated with a probability of arrival. Each part is processed according to a predetermined sequence of operations; however, the machines that will process these operations are not fixed in advance. Instead, the routing decisions are made on-line, based on current shop-floor status data. Job pre-emption is allowed in the case of an unexpected machine breakdown. Parts enter the system through the loading station, and the unloading station is the exit point for all parts processed in the system. The system also includes a central work-in-process area (WIPA) to store parts temporarily when the associated machine buffers reach full capacity. Parts are transferred within the system by three AGVs, each with a loading capacity of one unit; where idle AGVs wait for the next request depends on the AGV control policy employed. The model is developed in a microcomputer-based environment using SIMAN.
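The on-line routing decision described above can be sketched as a small dispatching function. The shortest-queue rule used here is an illustrative assumption, not the paper's actual dispatching rule; the machine names and queue lengths are invented, and the operation times follow the convention that a time of 0 means the machine cannot perform the operation.

```python
def route(operation_times, queue_lengths):
    """Pick a machine for a part's next operation.

    operation_times: dict machine -> processing time in minutes,
                     where 0 means the machine cannot do this operation.
    queue_lengths:   dict machine -> parts currently waiting in its buffer.
    Assumed rule: among capable machines, choose the shortest queue,
    breaking ties by the faster processing time.
    """
    capable = [m for m, t in operation_times.items() if t > 0]
    if not capable:
        raise ValueError("no machine can perform this operation")
    return min(capable, key=lambda m: (queue_lengths[m], operation_times[m]))

# Example: part type 1, operation B (M1 = 9, M3 = 14, M4 = 12 minutes)
times = {"M1": 9, "M2": 0, "M3": 14, "M4": 12, "M5": 0}
queues = {"M1": 3, "M2": 0, "M3": 1, "M4": 1, "M5": 2}
chosen = route(times, queues)  # M3 and M4 tie on queue length; M4 is faster
```

Because the rule consults `queue_lengths` at the moment the part needs routing, the same operation can be sent to different machines at different times, which is exactly what distinguishes on-line routing from a fixed process plan.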

Part type   Arrival      Operation   Processing time on alternative machines (minutes)
            probability  sequence      M1    M2    M3    M4    M5
1           20%          B              9     0    14    12     0
                         D              0    10     8    11    13
                         H              8     0     0    10    14
                         E             11    12     0     0     9
                         F              0     7    10     0     9
2           20%          B             11     0     7     9     0
                         C              0     8     0    11     0
                         A             12     0    10     0     0
                         D              0    10     8     6     7
                         G              6     7     0     0     8
3           10%          F              0     8     6     0     7
                         C              0    10     0     8     0
                         B              9     0     6     7     0
                         D              0     8    10     9    11
4           10%          C              0     7     0     6     0
                         A              9     0    12     0     0
                         I              0     0     6     8     0
                         B              8     0     9     7     0
                         G             11    10     0     0    12
5           20%          E              7     8     0     0    10
                         F              0    10     8     0    11
                         A              7     0     9     0     0
                         I              0     0     6     8     0
                         D              0     8     9    11    13
6           20%          H              7     0     0     8    10
                         B             10     0     8    12     0
                         C              0    11     0     9     0
                         G             10     8     0     0     6
                         E              6     8     0     0    10
                         I              0     0    10     7     0
Total       100%         30 operations  141   150   156   159   150

Table 1: Part process plan (a processing time of 0 indicates that the machine cannot perform that operation)

As for the experimental conditions, it is assumed that the FMS studied in this paper can simultaneously process 6 types of parts. As seen in Table 1, the number of operations per part ranges from 4 to 6. The three AGVs travel at a speed of 200 feet per minute, and the time required to load or unload an AGV is one minute, irrespective of part and operation type. For each experiment, performance data on mean flow time are collected over a simulation period of 15360 minutes (16 days with two eight-hour shifts) by generating 10 independent replications of the model. For each replication, statistics are collected after a warm-up period of 2880 minutes (3 days with two eight-hour shifts).
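The statistics-collection scheme just described, truncating each replication at the warm-up point and then averaging across replications, can be sketched as follows. The flow-time records in the example are invented toy numbers, not results from the model.

```python
def mean_flow_time(replications, warmup=2880.0):
    """Grand mean flow time over replications, ignoring warm-up parts.

    replications: list of replications; each replication is a list of
                  (completion_time, flow_time) pairs for finished parts.
    Parts completed at or before the warm-up time are discarded, then
    the per-replication means are averaged.
    """
    rep_means = []
    for parts in replications:
        flows = [flow for done, flow in parts if done > warmup]
        rep_means.append(sum(flows) / len(flows))
    return sum(rep_means) / len(rep_means)

# Two toy replications (times in minutes); real runs last 15360 minutes.
reps = [
    [(1000.0, 30.0), (3000.0, 40.0), (5000.0, 50.0)],  # keeps 40 and 50
    [(2000.0, 20.0), (4000.0, 60.0)],                  # keeps 60
]
result = mean_flow_time(reps)  # (45 + 60) / 2 = 52.5
```

Averaging per-replication means, rather than pooling all parts, is the standard way to obtain independent observations for confidence intervals, since parts within one replication are correlated.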

Seven types of waste

The word “waste” in manufacturing has been defined as anything other than the MINIMUM amount of equipment, materials, parts, space and workers’ time which is ABSOLUTELY ESSENTIAL to ADD VALUE to the product (M.K. Khan, 2010). By definition, manufacturing waste adds no value to the product. After years of research and improvement work, Toyota identified seven sources of waste in the manufacturing plant, as follows:

i) Waste from over production:

This is considered the most common waste found on a manufacturing line. Miscommunication between the marketing and production departments can lead to production exceeding demand, and cause delays for other parts or products.

ii) Waste of waiting time:

This waste is usually easy to identify. Time is wasted when operators merely watch a machine run or wait for preceding parts to arrive. A bottleneck in the production line is also a form of waiting waste, as parts stack up while waiting to be processed.

iii) Transportation waste:

Bad housekeeping can cause long-distance transportation waste, or even double or triple handling of materials or parts. Example: raw material is stored in the warehouse before it is brought to the line.

iv) Processing waste:

Unnecessary additional processing steps lengthen the processing time of a product or part without adding value.

v)Waste of motion:

Any motion that does not add value to the product should be eliminated. A poor machine or workplace layout can result in serious motion waste.

vi) Waste from product defects :

When a defect occurs at one station, other wastes arise as well, such as longer transportation and waiting times, and scrapped or reworked product may be produced.

vii) Inventory waste:

Inventory is also known as the root of all waste. It hides problems such as poor product quality, machine breakdowns and so on; lowering the inventory level exposes those problems. Manufacturers therefore continually try to reduce inventory, or find better ways to handle it, targeting the goals shown in Table 2.

Zero defects      | Zero setup time
Zero inventories  | Zero part handling
Zero breakdowns   | Zero lead time
Lot size of one   | Match products to customer requirements

Table 2: Targets for eliminating waste

JIT, Kanban and Lean Manufacturing

Just in Time (JIT)

The basic approach of the “Just-in-Time” (JIT) production system is to reduce product costs through the elimination of waste. In a production facility, waste can be defined as defects, stockpiles, queues, idleness and delays. The manufacturing philosophy of JIT is well captured by the following analogy: inventory is depicted as water covering a bed of rocks in a lake, where the rocks and the lakebed represent problems and the manufacturing floor, respectively. Lowering the water level exposes the rocks on the lakebed (Riggs, 1987). This is the basic theory behind the JIT production system: by eliminating inventory stockpiles on a plant floor, operating inefficiencies can be exposed. Producing or receiving inventory “just in time” for the next production process therefore eliminates stockpile inventory. This report will detail the history of the “Just-in-Time” production system, following it from its conception in 1940 to its success today. The characteristics and advantages of the JIT production system will be further outlined, and the specific requirements for implementing this system will be summarized. Throughout this document we aim to provide internet links offering more information on the topic.

Just-In-Time (JIT) manufacturing is a Japanese management philosophy applied in manufacturing. Essentially, it involves having the right items, with the right quality and quantity, in the right place at the right time. Today, more and more North American firms are considering the JIT approach in response to an ever more competitive environment. The ability to manage inventory (which often accounts for as much as 80 percent of product cost) to coincide with market demand or changing product specifications can substantially boost profits and improve a manufacturer’s competitive position by reducing inventories and waste. JIT is a management philosophy: an integrated approach to optimizing the use of a company’s resources, namely capital, equipment and labor. The goal of JIT is the total elimination of waste in the manufacturing process.

JIT CONCEPT

JIT may be viewed as a production system designed to improve overall productivity through the elimination of waste, which in turn leads to improved quality. JIT is simple, efficient and minimizes waste.

The concept behind JIT is to produce and deliver finished goods just in time to be sold, subassemblies just in time to be assembled into finished goods, and purchased materials just in time to be transformed into fabricated parts. It depends on the balance between the stability of the user’s scheduled requirements and the supplier’s manufacturing flexibility.

GOALS OF JIT

The goal of JIT is to optimize processes and procedures by continuously pursuing waste reduction, targeting the seven wastes described earlier for elimination through continuous improvement of the production process.

OBJECTIVES OF JIT

The basic objectives include:

- Low manufacturing and distribution cost.
- Reduced labor (both direct and indirect).
- Higher product quality and fewer defects.
- Effective use of working capital.
- Decreased production lead time.
- Reduced investment in in-process inventory.
- Increased productivity.
- Reduced space requirements.
- Faster reaction to demand changes, i.e. more flexibility to customer demand.
- Reduced overheads.

Chapter Three:

The Problem Defined

Chapter Four: Simulation Model Development

This chapter demonstrates the process of preparing and constructing the simulation model. The model is then run to produce results based on the data and assumptions made in the simulation. The chapter also serves as a reference for whoever uses or modifies the model in the future. Each step is shown in order, for ease of reading.

Before creating the simulation model, the author spent a significant amount of time learning to program and build a simulation model in ARENA, based on the data provided in the literature review. This involved identifying which machine has the longest processing time and processes the most products. The book the author referred to was “Simulation with Arena”, listed in the reference list.
After all the data had been collected, development of the simulation model in ARENA began. First, run the ARENA software; it will display a blank page, as shown in Figure 1.
Figure 1: Blank page of ARENA

Second, drag the necessary modules onto the blank page and arrange them, as shown in Figure 2.

Chapter Five:

Model Validation and Critique

Chapter Six: Analysis and Synthesis

Chapter Seven: Conclusions and Recommendation for Further Work

References

Kovacs, George L., S. K., and Kmecs, Ildiko (1997). “Simulation of FMS with Application of Reuse and Object-Oriented Technology.” 13-1.

Tunali, S. (1995). “Simulation for Evaluating Machine and AGV Scheduling Rules in an FMS Environment.” 433-438.

Khan, M. K. (2010). Manufacturing Planning and Control. Lecture notes distributed for ENG4087M, Just-In-Time Systems (Lean Production), SOEDT, 1st Oct 2010.

Kelton, W. David, Sadowski, Randall P., and Swets, Nancy B. (2010). Simulation with Arena, 5th edition, McGraw-Hill International Edition, New York.

http://www.seopromolinks.com/fms-advantages-disadvantages.asp


Ownership, Originality, Copying and Infringement of Software Copyright Background

Abstract

The law provides exclusive rights to the owners of copyright in order to give them the ability to control the use of their work. Copyright protection is automatic and no registration needs to take place; however, the only way to enforce such rights is by satisfying a number of different requirements. This often causes difficulty, since it cannot always be ascertained who owns a protected work, and each case will be decided on its own facts.

Introduction

Software development is often a long process, as it consists of writing source code and subsequently converting it into object code. This involves a considerable amount of skill and labour, which is why businesses are keen to protect their works. The main form of protection available to the owners of such works is the law of copyright, as provided for in the Copyright, Designs and Patents Act 1988 (CDPA). This is the area that will be considered when deciding whether FTS’s legal team should pursue an action against BMT. Accordingly, the various sections of the CDPA will be reviewed in order to consider whether the work is a protected form of copyright. Hence, it will be considered whether the work is original, by distinguishing between an idea and an expression of an idea. Once this has been ascertained, it will then be decided whether FTS is actually the author of the work. Provided that the copyright requirements have been satisfied, FTS will then have the onus of proving that Bill has infringed its copyright in the work.

Advice

Section 1 (1) (a) of the CDPA states that “copyright is a property right which subsists in original literary, dramatic, musical or artistic works.” Accordingly, as it is provided for under section 3 (1) (b) that a literary work includes a computer program FTS will have some form of protection available to them in relation to their product’s code. Nevertheless, it is stated under Article 1 (1) of the Software Directive that “protection shall apply to the expression in any form of a computer program. Ideas and principles which underlie any element of a computer program, including those which underlie its interfaces, are not protected by copyright under this Directive.” As such, FTS will need to consider whether the product’s code is an expression or a mere idea. This is likely to prove difficult given the complexity that is often afforded to software programs (Reed and Angel, 2003: 5), yet provided that FTS can satisfy all of the legal requirements associated with the law of copyright protection, then they will most likely be successful in their action.

First of all, FTS must demonstrate ‘originality’ by showing that the product’s code was created using skill, judgment and individual effort as in Infopaq International A/S v Danske Dagblades Forening [2009] EUECJ C-5/08 (16 July 2009). In addition, it must also be shown that the product’s code was in fact recorded, in writing or otherwise (section 3 (2) CDPA). This is likely to cause some problems for FTS, nonetheless, since it was evidenced in the Navitaire Inc v Easyjet Airline Co & Anor [2004] EWHC 1725 (Ch) case that where a user interface has been copied but the relevant elements relied upon, such as the source code, are not clearly recorded a lack of protection will exist. Here, Pumfrey J made obiter comments suggesting that user keyboard command codes might not be protected as copyright works because, due to the design of the program, they were not, themselves, recorded in the source code of the program. Consequently, it was made clear by Pumfrey J that “the program merely contained code which, when executed by the computer, would accept those commands and produce specified results.”

However, in Bezpecnostni softwarova asociace – Svaz softwarove ochrany v Ministerstvo kultury, Case C-393/09, 22 December 2010 it was held by the ECJ that the source code and object code of a computer program were forms of expression of the program and that they were therefore entitled to be protected by copyright (Campbell and Cotter, 1998: 140). Therefore, provided that FTS can demonstrate that their product’s code is original then it is likely that protection will ensue. The idea-expression dichotomy that exists in copyright law is reflected in recital 14 of the Software Directive where it is provided that; “logic, algorithms and programming languages are not protected insofar as they comprise ideas and principles.” Essentially, whilst Pumfrey J in Navitaire said that keyboard command codes may not be afforded copyright protection, he also noted that the question of whether computer languages should be excluded from such protection was not “entirely clear” and that the ECJ should therefore provide guidance on this matter.

In July 2010, this issue was in fact revisited in SAS Institute v World Programming Ltd [2010] EWHC 1829 (Ch), when the High Court had to decide how Article 1 (2) of the Software Directive should be construed. Arnold J agreed with Pumfrey J’s view in Navitaire that Article 1 (2) should be interpreted as meaning that copyright in computer programs does not protect programming languages, interfaces or the functionality of a computer program from being copied (Morton, 2013: 143). However, Arnold J stated that because of the uncertainty surrounding software programs a referral to the ECJ was required. On the referral, the ECJ held that the copyright available to computer programs under the Software Directive does not protect the functionality of a computer program, its programming language or the format of data files used in it. In January 2013, the High Court applied the ECJ’s ruling, and the High Court’s decision was upheld by the Court of Appeal in November 2013.

In accordance with this it is likely to prove very difficult for FTS to establish a claim in copyright and even if this can be ascertained, they will still have to demonstrate additional copyright requirements, such as ownership. Accordingly, software cases also give rise to ownership issues since there will often be more than one author due to the complexity and size of computer codes generally. Nevertheless, section 9 (1) CDPA makes it clear that the owner of a work is the person that has created it. As this is a computer-generated work, it will thus be the person who arranged for the creation of the work (section 9 (3)) unless he has created the work within the course of employment. If it is found that Bill created the work, FTS will still be the owner as the ownership of copyright remains vested in an employer if the creation was made during the course of employment (section 11 CDPA). Nevertheless, as evidenced in (1) Laurence John Wrenn (2) Integrated Multi-Media Solutions v Stephen Landamore [2007] EWHC 1833 (Ch) each case will be decided on its own facts. Here, it was held by the court that since there was a written agreement between the parties, an exclusive license could be implied.

Regardless of these difficulties, however, software can still be afforded copyright protection and the most common act of infringement that occurs in relation to source or object codes is unauthorised copying. Here, a distinction needs to be made between literal and non-literal copying. Literal copying occurs when an identical copy is made, whereas non-literal copying occurs when the structure, appearance or manner of the code has been copied (Pila, 2010: 229). In the case of literal copying, it will generally be easier to establish a claim of copyright since it will merely have to be shown that a substantial part of the code has been copied, which will be based upon the skill, labour and judgment that has been expended; Cantor Fitzgerald International and Another v Tradition (UK) Limited and Other [2000] RPC 95. In the event that there has been a non-literal copying of the works, it will be a lot more complex to establish. This is because it is often the case that two completely different programs will produce the same results. Therefore, although it might appear on the face of it that the program has been copied; this may not actually be the case.

In Thrustcode Ltd v WW Computing Ltd [1983] FSR 502 it was noted by the Court that “the results produced by operating the program must not be confused with the program in which copyright is claimed.” Another consideration for FTS is whether the code was originally created by a third party. This is because if a third party has been commissioned to create the copyrighted work, ownership of that work will remain vested in the third party unless there has been an express agreement to the contrary (Lyons, 2005: 3). If no such agreement has been made, the court may imply an assignment or licence so that FTS can use the software, although the scope of an assignment or licence will depend entirely upon the facts of the case. In Robin Ray v Classic FM Plc [1998] FSR 622 it was held by the Court that both parties had accepted the law in relation to the implication of terms as to ownership and the licensing of copyright. Arguably, whilst FTS may have a claim against Bill for copyright infringement, it will be very difficult to prove because of the complex nature of software copyright.

Conclusion

Overall, given the long process that is involved with software development, it is likely that FTS’s legal advisers will have to overcome a number of obstacles before they can establish a claim in copyright. Consequently, they will first need to establish that they are the author of the product’s code and that it was an original creation. Once this has been ascertained they will then need to show that their product has actually been infringed by Bill, which may prove extremely difficult given the complexity of software programs.

References

Campbell, D. and Cotter, S. (1998) Copyright Infringement, Kluwer Law International.

Lyons, T. (2005) Warning All Software Users, Electronic Business Law, Volume 7, Issue 9.

Morton, T. (2013) Emerging Technologies and Continuity, Tolley’s Practical Audit & Accounting, Volume 24, Issue 12.

Pila, J. (2010) Copyright and Its Categories of Original Works, Oxford Journal of Legal Studies, Volume 30, Issue 2.

Reed, C. and Angel, J. (2003) Computer Law, 5th Edition, OUP Oxford.

Case Law

Bezpecnostni softwarova asociace – Svaz softwarove ochrany v Ministerstvo kultury, Case C-393/09, 22 December 2010

Cantor Fitzgerald International and Another v Tradition (UK) Limited and Other [2000] RPC 95

Infopaq International A/S v Danske Dagblades Forening [2009] EUECJ C-5/08 (16 July 2009)

(1) Laurence John Wrenn (2) Integrated Multi-Media Solutions v Stephen Landamore [2007] EWHC 1833 (Ch)

Navitaire Inc v Easyjet Airline Co & Anor [2004] EWHC 1725 (Ch)

Robin Ray v Classic FM Plc [1998] FSR 622

SAS Institute v World Programming Ltd [2010] EWHC 1829 (Ch)

Thrustcode Ltd v WW Computing Ltd [1983] FSR 502


A Comparison of the Merits of using Software or Hardware Transactional Memory, against Traditional ‘Semaphore’ Locking

1. Introduction

Transactional memory is poised to take parallel programming a step higher by making it more efficient and much easier to achieve than traditional ‘semaphore’ locking. Parallel programming is easiest when a task divides into several independent threads, especially when those threads share no data: each section can then run on its own processor core with no communication between cores. It becomes challenging when the sections are not fully independent, that is, when several threads must update a single shared value. The traditional approach uses locks: every time a thread wants to change the shared value, it must first acquire the lock. No other thread can acquire the lock while one thread holds it; instead, each must wait until the holder, which may be performing a complex computation taking an extended amount of time, eventually releases the lock (Bright, 2011). The release of the lock allows a waiting thread to continue. While this is an effective system, it faces several major challenges. A key issue is how often the shared value is updated: if updates are only occasional, threads rarely have to wait, and the lock-based system can be efficient (Alexandrescu, 2004). This efficiency disappears, however, when updates to the shared value are frequent: threads then spend too much time waiting for a lock, and can do no useful work while in that state.

2. Lock vs. Lock-Free Data Structures

While it may seem easy to protect a single shared value, locks are difficult to use correctly, and this is a challenge faced in real programs. For instance, a program with two locks, 1 and 2, is likely to encounter a problem called a ‘deadlock’. A deadlock arises when two threads each require both locks and may acquire them as lock 1 then 2, or lock 2 then 1. As long as every thread acquires the locks in the same order, this presents no issue; however, if one thread takes lock 1 while the other takes lock 2 at the same time, a deadlock can result: the first thread waits for lock 2 to become free while the second waits for lock 1, and neither can proceed. This issue might appear easy to prevent when a program has only two locks; however, as programs grow more complex, ensuring that every section acquires locks in the right order becomes a real challenge.
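The deadlock scenario above can be made concrete with a short sketch. This uses Python's threading module purely as an illustration (the thread and lock names are invented); the deadlock-prone ordering is shown only as a comment, since actually running it could hang, while the executed code demonstrates the standard remedy of acquiring locks in one global order.

```python
import threading

lock1, lock2 = threading.Lock(), threading.Lock()
results = []

# Deadlock-prone pattern (NOT run here): thread A takes lock1 then lock2,
# while thread B takes lock2 then lock1. If each grabs its first lock
# before the other releases, both wait forever.

def worker(name):
    # Safe pattern: every thread acquires the locks in the same global
    # order (lock1 before lock2), so a circular wait cannot form.
    with lock1:
        with lock2:
            results.append(name)

threads = [threading.Thread(target=worker, args=(n,)) for n in ("A", "B")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # both threads completed: ['A', 'B']
```

Enforcing a single global lock order works for two locks, but, as the essay notes, becomes hard to guarantee across a large program, which is the motivation for transactional memory.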

3. Transactional Memory

It can be argued that transactional memory solves the problem of lock conflicts. In the deadlock case above, the programmers could instead mark the sections of their programs which change the shared data, so that each marked block is implemented within a transaction: either the whole block executes, or none of it does. The program can read the shared value without locking it, perform all the necessary operations and write back the value, eventually committing the transaction (Bright, 2011). The key step is the commit operation, in which the transactional memory system ascertains whether the shared data has been changed since the operation commenced. If it has not, the commit updates the value, allowing the thread to go ahead. If the shared value has been modified, the transaction aborts and the thread’s work is rolled back (Detlefs et al., 2001). In this instance, the program retries the operation.
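The read/work/commit cycle described above can be sketched as a toy software-transactional cell. This is an illustrative model only, not a real STM implementation; the class and method names are invented. Each update reads a value together with its version, computes a new value, and commits only if the version is unchanged, retrying on conflict:

```python
import threading

class VersionedCell:
    """Toy software-transactional cell: optimistic read, validate on commit."""
    def __init__(self, value):
        self._value, self._version = value, 0
        self._lock = threading.Lock()  # guards only the commit, not the work

    def read(self):
        with self._lock:
            return self._value, self._version

    def commit(self, new_value, seen_version):
        # Succeeds only if nobody changed the cell since we read it.
        with self._lock:
            if self._version != seen_version:
                return False  # conflict detected: caller must retry
            self._value, self._version = new_value, self._version + 1
            return True

def atomic_update(cell, fn):
    # The retry loop corresponds to an aborted-and-restarted transaction.
    while True:
        value, version = cell.read()
        if cell.commit(fn(value), version):
            return

cell = VersionedCell(0)
threads = [threading.Thread(target=atomic_update, args=(cell, lambda v: v + 1))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(cell.read()[0])  # 8: every increment committed exactly once
```

The work (here, `fn`) runs without holding any lock; only the brief validate-and-swap at commit time is serialized, which is the optimistic behaviour the essay describes.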

It can be seen, therefore, that transactional memory has several merits over traditional semaphore locking. For example, transactional memory is optimistic: threads proceed expecting to succeed, rather than waiting to acquire a lock in case another thread attempts a concurrent operation (Detlefs et al., 2001). If a concurrent modification does occur, a single thread is simply forced to retry its operation. In addition to this, there are no deadlocks in transactional memory. It is also a programming approach that programmers are familiar with; the transaction and rollback process is not new to those who have worked with relational databases, which offer a similar set of features. Furthermore, atomic blocks make it easier to develop large programs that are correct (Alexandrescu, 2004): code containing nested atomic blocks still performs the correct function, which is not true of lock-based data structures.

4. Merits of the Hardware

Hardware implementations have so far received less attention than software-based ones. Most real processors do not yet support transactional memory, so modifications are necessary (Maged, 2004). There are, however, systems that use virtual machines to undertake their primary function, and for these there are modified .NET and Java virtual machines (Bright, 2011). In other cases, systems use native code that requires certain special operations to access the shared data; this enables the transactional memory software to verify, in the background, that the right operations have occurred. Such implementations have the advantage of helping to ensure that the programs produced are bug-free (Detlefs et al., 2001).

In a hardware implementation, data in the cache carries a version tag, and the cache can maintain many versions of the same data. The software signals the processor to commence a transaction, performs the necessary work, and then signals the processor to commit it. If other threads have changed the data, producing multiple versions, the cache refuses the transaction and the software must try again; if no other versions were created, the data is committed (Bright, 2011).

This facility is also applicable to speculative execution. A thread can commence execution with the data it already has, speculatively performing important work instead of waiting for other cores to finish computing updated versions of all the data it needs (Alexandrescu, 2004). If the data was not updated in the meantime, the speculative work is committed, providing a performance boost: the work was completed before the final value was delivered. Should the data turn out to be stale, the speculative work is rejected and re-run with the correct value (Bright, 2011).

5. Logical Functions

A significant advantage that transactional memory has over traditional lock-based programs is that it can be seen as an extension of load-link/store-conditional. Load-link is a primitive operation that can be used to build many kinds of thread-safe constructs (Maged, 2004). These include both familiar mechanisms, such as locks, and unconventional data structures, such as lists that can be modified by many threads at the same time without any locking at all (Alexandrescu, 2004). Software transactional memory can likewise be built from load-link/store-conditional.

Load-link/store-conditional consists of two parts: first, load-link retrieves a value from memory, after which the thread performs whatever work it needs on that value. When it needs to write a new value back to memory, it uses store-conditional (Detlefs et al., 2001). Store-conditional succeeds only if the memory value has not been changed since the load-link; if the value has been changed, the program must return to the beginning and start again. Real implementations are restrictive, because they do not track writes to individual memory bytes but to whole cache lines. This means store-conditional can fail even when the monitored value itself was not modified (Bright, 2011). Bright (2011) explains that store-conditional is also likely to fail if a context switch happens between the load-link and the store-conditional. Transactional memory is a strengthened form of load-link/store-conditional: each thread can perform load-link on several different memory locations (Maged, 2004), and the commit operation performs a store-conditional that affects multiple locations at the same time, with the stores either all succeeding or all failing (Bright, 2011).
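The load-link/store-conditional pair described above can be modelled in a few lines. This is a conceptual sketch only: real LL/SC is a pair of CPU instructions tracking cache lines, whereas this toy tracks a write counter; the class and variable names are invented. Note that a stale link fails even though the same store would succeed with a fresh link:

```python
class LLSCCell:
    """Toy model of load-link / store-conditional semantics."""
    def __init__(self, value):
        self._value = value
        self._writes = 0  # counts every store, acting as the link tag

    def load_link(self):
        # Return the value together with a link recording when it was read.
        return self._value, self._writes

    def store_conditional(self, new_value, link):
        # Fails if ANY store happened since the matching load_link,
        # even one that wrote back the same value (no ABA problem).
        if self._writes != link:
            return False
        self._value = new_value
        self._writes += 1
        return True

cell = LLSCCell(10)

value, link = cell.load_link()
ok = cell.store_conditional(value + 5, link)     # no intervening store
print(ok, cell._value)                           # True 15

value, link = cell.load_link()
cell.store_conditional(99, link)                 # simulate another thread's store
stale = cell.store_conditional(value + 1, link)  # our link is now stale
print(stale, cell._value)                        # False 99
```

On a stale link the caller loops back to a fresh load-link and retries, which is exactly the retry behaviour the transactional commit generalizes to multiple memory locations.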

6. Conclusion

In conclusion, a lock-free procedure guarantees that some thread executing it makes progress: while individual threads can be put on hold arbitrarily, at least one thread is certain to advance at each step, so the system as a whole makes progress even though some threads may take longer than others. It can be seen, therefore, that the use of software or hardware transactional memory presents a better way of ensuring the consistency of stored data accessed and manipulated by several concurrent threads than traditional ‘semaphore’ locking. Lock-based programs, by contrast, provide none of the above-mentioned guarantees.

7. References

Alexandrescu, A. (2004) Lock-Free Data Structures. Available at: http://www.drdobbs.com/lock-free-data-structures/184401865 [Accessed 12th March 2014].

Bright, P. (2011) IBM’s new transactional memory: make-or-break time for multithreaded revolution. Available at: http://arstechnica.com/gadgets/2011/08/ibms-new-transactional-memory-make-or-break-time-for-multithreaded-revolution/ [Accessed 12th March 2014].

Detlefs, D., Martin, P.A., Moir, M. & Steele, G.L. (2001) ‘Lock-free Reference Counting’, in Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing, ACM Press: New York.

Maged, M.M. (2004) ‘Scalable Lock-free Dynamic Memory Allocation’, in Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language Design and Implementation, ACM Press: New York.


Sage 50 Accounting Software Tutorial

Sage Tutorial, Release 5.3. The Sage Development Team, September 10, 2012.

Contents:

1 Introduction: 1.1 Installation; 1.2 Ways to Use Sage; 1.3 Longterm Goals for Sage
2 A Guided Tour: 2.1 Assignment, Equality, and Arithmetic; 2.2 Getting Help; 2.3 Functions, Indentation, and Counting; 2.4 Basic Algebra and Calculus; 2.5 Plotting; 2.6 Some Common Issues with Functions; 2.7 Basic Rings; 2.8 Linear Algebra; 2.9 Polynomials; 2.10 Parents, Conversion and Coercion; 2.11 Finite Groups, Abelian Groups; 2.12 Number Theory; 2.13 Some More Advanced Mathematics
3 The Interactive Shell: 3.1 Your Sage Session; 3.2 Logging Input and Output; 3.3 Paste Ignores Prompts; 3.4 Timing Commands; 3.5 Other IPython Tricks; 3.6 Errors and Exceptions; 3.7 Reverse Search and Tab Completion; 3.8 Integrated Help System; 3.9 Saving and Loading Individual Objects; 3.10 Saving and Loading Complete Sessions; 3.11 The Notebook Interface
4 Interfaces: 4.1 GP/PARI; 4.2 GAP; 4.3 Singular; 4.4 Maxima
5 Sage, LaTeX and Friends: 5.1 Overview; 5.2 Basic Use; 5.3 Customizing LaTeX Generation; 5.4 Customizing LaTeX Processing; 5.5 An Example: Combinatorial Graphs with tkz-graph; 5.6 A Fully Capable TeX Installation; 5.7 External Programs
6 Programming: 6.1 Loading and Attaching Sage Files; 6.2 Creating Compiled Code; 6.3 Standalone Python/Sage Scripts; 6.4 Data Types; 6.5 Lists, Tuples, and Sequences; 6.6 Dictionaries; 6.7 Sets; 6.8 Iterators; 6.9 Loops, Functions, Control Statements, and Comparisons; 6.10 Profiling
7 Using SageTeX
8 Afterword: 8.1 Why Python?; 8.2 I would like to contribute somehow. How can I?; 8.3 How do I reference Sage?
9 Appendix: 9.1 Arithmetical binary operator precedence
10 Bibliography
11 Indices and tables

Sage is free, open-source math software that supports research and teaching in algebra, geometry, number theory, cryptography, numerical computation, and related areas.

Both the Sage development model and the technology in Sage itself are distinguished by an extremely strong emphasis on openness, community, cooperation, and collaboration: we are building the car, not reinventing the wheel. The overall goal of Sage is to create a viable, free, open-source alternative to Maple, Mathematica, Magma, and MATLAB. This tutorial is the best way to become familiar with Sage in only a few hours. You can read it in HTML or PDF versions, or from the Sage notebook (click Help, then click Tutorial to interactively work through the tutorial from within Sage).

This work is licensed under a Creative Commons Attribution-Share Alike 3.0 License.

Chapter One: Introduction

This tutorial should take at most 3-4 hours to fully work through. You can read it in HTML or PDF versions, or from the Sage notebook (click Help, then click Tutorial to interactively work through the tutorial from within Sage). Though much of Sage is implemented using Python, no Python background is needed to read this tutorial. You will want to learn Python (a very fun language!) at some point, and there are many excellent free resources for doing so, including [PyT] and [Dive].

If you just want to quickly try out Sage, this tutorial is the place to start. For example:

sage: 2 + 2
4
sage: factor(-2007)
-1 * 3^2 * 223
sage: A = matrix(4,4, range(16)); A
[ 0  1  2  3]
[ 4  5  6  7]
[ 8  9 10 11]
[12 13 14 15]
sage: factor(A.charpoly())
x^2 * (x^2 - 30*x - 80)
sage: m = matrix(ZZ,2, range(4))
sage: m[0,0] = m[0,0] - 3
sage: m
[-3  1]
[ 2  3]
sage: E = EllipticCurve([1,2,3,4,5]);
sage: E
Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 5
over Rational Field
sage: E.anlist(10)
[0, 1, 1, 0, -1, -3, 0, -1, -3, -3, -3]
sage: E.rank()
1
sage: k = 1/(sqrt(3)*I + 3/4 + sqrt(73)*5/9); k
1/(I*sqrt(3) + 5/9*sqrt(73) + 3/4)
sage: N(k)
0.165495678130644 - 0.0521492082074256*I
sage: N(k,30)      # 30 "bits"
0.16549568 - 0.052149208*I
sage: latex(k)
\frac{1}{i \, \sqrt{3} + \frac{5}{9} \, \sqrt{73} + \frac{3}{4}}

1.1 Installation

If you do not have Sage installed on a computer and just want to try some commands, use it online at http://www.sagenb.org. See the Sage Installation Guide in the documentation section of the main Sage webpage [SA] for instructions on installing Sage on your computer.

Here we merely make a few comments.

1. The Sage download file comes with "batteries included". In other words, although Sage uses Python, IPython, PARI, GAP, Singular, Maxima, NTL, GMP, and so on, you do not need to install them separately as they are included with the Sage distribution. However, to use certain Sage features, e.g., Macaulay or KASH, you must install the relevant optional package or at least have the relevant programs installed on your computer already. Macaulay and KASH are Sage packages (for a list of available optional packages, type sage -optional, or browse the "Download" page on the Sage website).

2. The pre-compiled binary version of Sage (found on the Sage web site) may be easier and quicker to install than the source code version. Just unpack the file and run sage.

3. If you'd like to use the SageTeX package (which allows you to embed the results of Sage computations into a LaTeX file), you will need to make SageTeX known to your TeX distribution. To do this, see the section "Make SageTeX known to TeX" in the Sage installation guide (this link should take you to a local copy of the installation guide). It's quite easy; you just need to set an environment variable or copy a single file to a directory that TeX will search. The documentation for using SageTeX is located in $SAGE_ROOT/local/share/texmf/tex/generic/sagetex/, where "$SAGE_ROOT" refers to the directory where you installed Sage, for example, /opt/sage-4.2.1.

1.2 Ways to Use Sage

You can use Sage in several ways.

• Notebook graphical interface: see the section on the Notebook in the reference manual and The Notebook Interface below,
• Interactive command line: see The Interactive Shell,
• Programs: by writing interpreted and compiled programs in Sage (see Loading and Attaching Sage files and Creating Compiled Code), and
• Scripts: by writing stand-alone Python scripts that use the Sage library (see Standalone Python/Sage Scripts).

1.3 Longterm Goals for Sage

• Useful: Sage's intended audience is mathematics students (from high school to graduate school), teachers, and research mathematicians. The aim is to provide software that can be used to explore and experiment with mathematical constructions in algebra, geometry, number theory, calculus, numerical computation, etc. Sage helps make it easier to interactively experiment with mathematical objects.
• Efficient: Be fast. Sage uses highly-optimized mature software like GMP, PARI, GAP, and NTL, and so is very fast at certain operations.
• Free and open source: The source code must be freely available and readable, so users can understand what the system is really doing and more easily extend it. Just as mathematicians gain a deeper understanding of a theorem by carefully reading or at least skimming the proof, people who do computations should be able to understand how the calculations work by reading documented source code. If you use Sage to do computations

in a paper you publish, you can rest assured that your readers will always have free access to Sage and all its source code, and you are even allowed to archive and re-distribute the version of Sage you used.
• Easy to compile: Sage should be easy to compile from source for Linux, OS X and Windows users. This provides more flexibility for users to modify the system.
• Cooperation: Provide robust interfaces to most other computer algebra systems, including PARI, GAP, Singular, Maxima, KASH, Magma, Maple, and Mathematica.

Sage is meant to unify and extend existing math software.
• Well documented: Tutorial, programming guide, reference manual, and how-to, with numerous examples and discussion of background mathematics.
• Extensible: Be able to define new data types or derive from built-in types, and use code written in a range of languages.
• User friendly: It should be easy to understand what functionality is provided for a given object and to view documentation and source code. Also attain a high level of user support.

CHAPTER TWO

A GUIDED TOUR

This section is a guided tour of some of what is available in Sage. For many more examples, see "Sage Constructions", which is intended to answer the general question "How do I construct ...?". See also the "Sage Reference Manual", which has thousands more examples. Also note that you can interactively work through this tour in the Sage notebook by clicking the Help link. (If you are viewing the tutorial in the Sage notebook, press shift-enter to evaluate any input cell.

You can even edit the input before pressing shift-enter. On some Macs you might have to press shift-return rather than shift-enter.)

2.1 Assignment, Equality, and Arithmetic

With some minor exceptions, Sage uses the Python programming language, so most introductory books on Python will help you to learn Sage.

Sage uses = for assignment. It uses ==, <=, >=, < and > for comparison:

sage: a = 5
sage: a
5
sage: 2 == 2
True
sage: 2 == 3
False
sage: 2 < 3
True
sage: a == 5
True

Sage provides all of the basic mathematical operations:

sage: 2**3    # ** means exponent
8
sage: 2^3     # ^ is a synonym for ** (unlike in Python)
8
sage: 10 % 3  # for integer arguments, % means mod, i.e., remainder
1
sage: 10/4
5/2
sage: 10//4   # for integer arguments, // returns the integer quotient
2
sage: 4 * (10 // 4) + 10 % 4 == 10
True
sage: 3^2*4 + 2%5
38

The computation of an expression like 3^2*4 + 2%5 depends on the order in which the operations are applied; this is specified in the "operator precedence table" in Arithmetical binary operator precedence.

Sage also provides many familiar mathematical functions; here are just a few examples:

sage: sqrt(3.4)
1.84390889145858
sage: sin(5.135)
-0.912021158525540
sage: sin(pi/3)
1/2*sqrt(3)

As the last example shows, some mathematical expressions return 'exact' values, rather than numerical approximations. To get a numerical approximation, use either the function n or the method n (both of these have a longer name, numerical_approx, and the function N is the same as n). These take optional arguments prec, which is the requested number of bits of precision, and digits, which is the requested number of decimal digits of precision; the default is 53 bits of precision.

sage: exp(2)
e^2
sage: n(exp(2))
7.38905609893065
sage: sqrt(pi).numerical_approx()
1.77245385090552
sage: sin(10).n(digits=5)
-0.54402
sage: N(sin(10),digits=10)
-0.5440211109
sage: numerical_approx(pi, prec=200)
3.
1415926535897932384626433832795028841971693993751058209749

Python is dynamically typed, so the value referred to by each variable has a type associated with it, but a given variable may hold values of any Python type within a given scope. The C programming language, which is statically typed, is much different; a variable declared to hold an int can only hold an int in its scope.
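Since Sage is built on Python, the dynamic-typing behavior just described can be checked in ordinary Python. A minimal plain-Python illustration (note that in Python 3, unlike Sage, / on integers yields a float rather than an exact rational):

```python
# A variable is just a name; it may be rebound to values of different types.
a = 5
print(type(a).__name__)   # int
a = "hello"
print(type(a).__name__)   # str
a = 5 / 3                 # true division yields a float in plain Python 3
print(type(a).__name__)   # float
```

In C, by contrast, the declaration `int a;` fixes the type of `a` for its entire scope.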

A potential source of confusion in Python is that an integer literal that begins with a zero is treated as an octal number, i.e., a number in base 8.

sage: 011
9
sage: 8 + 1
9
sage: n = 011
sage: n.str(8)   # string representation of n in base 8
'11'

This is consistent with the C programming language.

2.2 Getting Help

Sage has extensive built-in documentation, accessible by typing the name of a function or a constant (for example), followed by a question mark:

sage: tan?

Type:
Definition: tan( [noargspec] )
Docstring:

    The tangent function

    EXAMPLES:
        sage: tan(pi)
        0
        sage: tan(3.1415)
        -0.0000926535900581913
        sage: tan(3.1415/4)
        0.999953674278156
        sage: tan(pi/4)
        1
        sage: tan(1/2)
        tan(1/2)
        sage: RR(tan(1/2))
        0.546302489843790

sage: log2?
Type:
Definition: log2( [noargspec] )
Docstring:

    The natural logarithm of the real number 2.

    EXAMPLES:
        sage: log2
        log2
        sage: float(log2)
        0.69314718055994529
        sage: RR(log2)
        0.693147180559945
        sage: R = RealField(200); R
        Real Field with 200 bits of precision
        sage: R(log2)
        0.69314718055994530941723212145817656807550013436025525412068
        sage: l = (1-log2)/(1+log2); l
        (1 - log(2))/(log(2) + 1)
        sage: R(l)
        0.18123221829928249948761381864650311423330609774776013488056
        sage: maxima(log2)
        log(2)
        sage: maxima(log2).float()
        .6931471805599453
        sage: gp(log2)
        0.6931471805599453094172321215              # 32-bit
        0.69314718055994530941723212145817656807   # 64-bit

sage: sudoku?
File:       sage/local/lib/python2.5/site-packages/sage/games/sudoku.py
Type:
Definition: sudoku(A)
Docstring:

    Solve the 9×9 Sudoku puzzle defined by the matrix A.

EXAMPLE:

    sage: A = matrix(ZZ,9,[5,0,0, 0,8,0, 0,4,9, 0,0,0, 5,0,0, 0,3,0, 0,6,7, 3,0,0, 0,0,1, 1,5,0, 0,0,0, 0,0,0, 0,0,0, 2,0,8, 0,0,0, 0,0,0, 0,0,0, 0,1,8, 7,0,0, 0,0,4, 1,5,0, 0,3,0, 0,0,2, 0,0,0, 4,9,0, 0,5,0, 0,0,3])
    sage: A
    [5 0 0 0 8 0 0 4 9]
    [0 0 0 5 0 0 0 3 0]
    [0 6 7 3 0 0 0 0 1]
    [1 5 0 0 0 0 0 0 0]
    [0 0 0 2 0 8 0 0 0]
    [0 0 0 0 0 0 0 1 8]
    [7 0 0 0 0 4 1 5 0]
    [0 3 0 0 0 2 0 0 0]
    [4 9 0 0 5 0 0 0 3]
    sage: sudoku(A)
    [5 1 3 6 8 7 2 4 9]
    [8 4 9 5 2 1 6 3 7]
    [2 6 7 3 4 9 5 8 1]
    [1 5 8 4 6 3 9 7 2]
    [9 7 4 2 1 8 3 6 5]
    [3 2 6 7 9 5 4 1 8]
    [7 8 2 9 3 4 1 5 6]
    [6 3 5 1 7 2 8 9 4]
    [4 9 1 8 5 6 7 2 3]
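The solved grid returned by sudoku(A) above can be sanity-checked in plain Python, no Sage required; is_valid_solution below is a helper written just for this illustration:

```python
def is_valid_solution(grid):
    """Check that every row, column and 3x3 block contains 1..9 exactly once."""
    target = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    blocks = [
        {grid[r + i][c + j] for i in range(3) for j in range(3)}
        for r in (0, 3, 6) for c in (0, 3, 6)
    ]
    return all(group == target for group in rows + cols + blocks)

# The solution printed by sudoku(A) in the tutorial above.
solved = [
    [5, 1, 3, 6, 8, 7, 2, 4, 9],
    [8, 4, 9, 5, 2, 1, 6, 3, 7],
    [2, 6, 7, 3, 4, 9, 5, 8, 1],
    [1, 5, 8, 4, 6, 3, 9, 7, 2],
    [9, 7, 4, 2, 1, 8, 3, 6, 5],
    [3, 2, 6, 7, 9, 5, 4, 1, 8],
    [7, 8, 2, 9, 3, 4, 1, 5, 6],
    [6, 3, 5, 1, 7, 2, 8, 9, 4],
    [4, 9, 1, 8, 5, 6, 7, 2, 3],
]
print(is_valid_solution(solved))   # True
```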

Sage also provides 'Tab completion': type the first few letters of a function and then hit the tab key. For example, if you type ta followed by TAB, Sage will print tachyon, tan, tanh, taylor. This provides a good way to find the names of functions and other structures in Sage.

2.3 Functions, Indentation, and Counting

To define a new function in Sage, use the def command and a colon after the list of variable names. For example:

sage: def is_even(n):
          return n%2 == 0
sage: is_even(2)
True
sage: is_even(3)
False

Note: Depending on which version of the tutorial you are viewing, you may see three dots in the second line of this example. Do not type them; they are just to emphasize that the code is indented. Whenever this is the case, press [Return/Enter] once at the end of the block to insert a blank line and conclude the function definition.

You do not specify the types of any of the input arguments. You can specify multiple inputs, each of which may have an optional default value. For example, the function below defaults to divisor=2 if divisor is not specified.

sage: def is_divisible_by(number, divisor=2):
          return number%divisor == 0
sage: is_divisible_by(6,2)
True
sage: is_divisible_by(6)
True
sage: is_divisible_by(6, 5)
False

You can also explicitly specify one or either of the inputs when calling the function; if you specify the inputs explicitly, you can give them in any order:

sage: is_divisible_by(6, divisor=5)
False
sage: is_divisible_by(divisor=2, number=6)
True

In Python, blocks of code are not indicated by curly braces or begin and end blocks as in many other languages. Instead, blocks of code are indicated by indentation, which must match up exactly.

For example, the following is a syntax error because the return statement is not indented the same amount as the other lines above it.

sage: def even(n):
          v = []
          for i in range(3,n):
              if i % 2 == 0:
                  v.append(i)
             return v
Syntax Error:
       return v

If you fix the indentation, the function works:

sage: def even(n):
          v = []
          for i in range(3,n):
              if i % 2 == 0:
                  v.append(i)
          return v
sage: even(10)
[4, 6, 8]

Semicolons are not needed at the ends of lines; a line is in most cases ended by a newline. However, you can put multiple statements on one line, separated by semicolons:

sage: a = 5; b = a + 3; c = b^2; c
64
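The corrected function above is valid Python as well as Sage, so it runs unchanged outside Sage; the only caveat when porting the semicolon example is that ^ is bitwise XOR in plain Python, so exponentiation must be written **:

```python
def even(n):
    """Return the even integers i with 3 <= i < n."""
    v = []
    for i in range(3, n):
        if i % 2 == 0:
            v.append(i)
    return v

print(even(10))   # [4, 6, 8]

# Multiple statements on one line, separated by semicolons, work in Python too;
# note b**2 rather than b^2, since ^ is XOR in plain Python.
a = 5; b = a + 3; c = b**2
print(c)   # 64
```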

If you would like a single line of code to span multiple lines, use a terminating backslash:

sage: 2 + \
....:    3
5

In Sage, you count by iterating over a range of integers. For example, the first line below is exactly like for(i=0; i<3; i++) in C++ or Java.

sage: g(x) = x^2
sage: g
x |--> x^2
sage: g(3)
9
sage: Dg = g.derivative(); Dg
x |--> 2*x
sage: Dg(3)
6
sage: type(g)
sage: plot(g, 0, 2)

Note that while g is a callable symbolic expression, g(x) is a related, but different sort of object, which can also be plotted, differentiated, etc., albeit with some issues: see item 5 below for an illustration.

sage: g(x)
x^2
sage: g(x).derivative()
sage: plot(g(x), 0, 2)

3. Use a pre-defined Sage 'calculus function'. These can be plotted, and with a little help, differentiated, and integrated.

sage: type(sin)
sage: plot(sin, 0, 2)
sage: type(sin(x))
sage: plot(sin(x), 0, 2)

By itself, sin cannot be differentiated, at least not to produce cos.

sage: f = sin
sage: f.derivative()
Traceback (most recent call last):
...
AttributeError:

Using f = sin(x) instead of sin works, but it is probably even better to use f(x) = sin(x) to define a callable symbolic expression.

sage: S(x) = sin(x)
sage: S.derivative()
x |--> cos(x)

Here are some common problems, with explanations:

4. Accidental evaluation.

sage: def h(x):

sage: G = DirichletGroup(12)
sage: G.list()
[Dirichlet character modulo 12 of conductor 1 mapping 7 |--> 1, 5 |--> 1,
Dirichlet character modulo 12 of conductor 4 mapping 7 |--> -1, 5 |--> 1,
Dirichlet character modulo 12 of conductor 3 mapping 7 |--> 1, 5 |--> -1,
Dirichlet character modulo 12 of conductor 12 mapping 7 |--> -1, 5 |--> -1]
sage: G.gens()
(Dirichlet character modulo 12 of conductor 4 mapping 7 |--> -1, 5 |--> 1,
Dirichlet character modulo 12 of conductor 3 mapping 7 |--> 1, 5 |--> -1)
sage: len(G)
4

Having created the group, we next create an element and compute with it.

sage: G = DirichletGroup(21)
sage: chi = G.1; chi
Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> zeta6
sage: chi.
values()
[0, 1, zeta6 - 1, 0, -zeta6, -zeta6 + 1, 0, 0, 1, 0, zeta6, -zeta6, 0, -1, 0, 0, zeta6 - 1, zeta6, 0, -zeta6 + 1, -1]
sage: chi.conductor()
7
sage: chi.modulus()
21
sage: chi.order()
6
sage: chi(19)
-zeta6 + 1
sage: chi(40)
-zeta6 + 1

It is also possible to compute the action of the Galois group Gal(Q(ζN)/Q) on these characters, as well as the direct product decomposition corresponding to the factorization of the modulus.

sage: chi.galois_orbit()
[Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> zeta6,
Dirichlet character modulo 21 of conductor 7 mapping 8 |--> 1, 10 |--> -zeta6 + 1]
sage: go = G.galois_orbits()
sage: [len(orbit) for orbit in go]
[1, 2, 2, 1, 1, 2, 2, 1]
sage: G.decomposition()
[
Group of Dirichlet characters of modulus 3 over Cyclotomic Field of order 6 and degree 2,
Group of Dirichlet characters of modulus 7 over Cyclotomic Field of order 6 and degree 2
]

Next, we construct the group of Dirichlet characters mod 20, but with values in Q(i):

sage: K.<i> = NumberField(x^2+1)
sage: G = DirichletGroup(20,K)
sage: G
Group of Dirichlet characters of modulus 20 over Number Field in i with defining polynomial x^2 + 1

We next compute several invariants of G:

sage: G.gens()
(Dirichlet character modulo 20 of conductor 4 mapping 11 |--> -1, 17 |--> 1,
Dirichlet character modulo 20 of conductor 5 mapping 11 |--> 1, 17 |--> i)
sage: G.unit_gens()
[11, 17]
sage: G.zeta()
i
sage: G.zeta_order()
4

In this example we create a Dirichlet character with values in a number field. We explicitly specify the choice of root of unity by the third argument to DirichletGroup below.

sage: x = polygen(QQ, 'x')
sage: K = NumberField(x^4 + 1, 'a'); a = K.0
sage: b = K.
gen(); a == b
True
sage: K
Number Field in a with defining polynomial x^4 + 1
sage: G = DirichletGroup(5, K, a); G
Group of Dirichlet characters of modulus 5 over Number Field in a with defining polynomial x^4 + 1
sage: chi = G.0; chi
Dirichlet character modulo 5 of conductor 5 mapping 2 |--> a^2
sage: [(chi^i)(2) for i in range(4)]
[1, a^2, -1, -a^2]

Here NumberField(x^4 + 1, 'a') tells Sage to use the symbol "a" in printing what K is (a Number Field in a with defining polynomial x^4 + 1). The name "a" is undeclared at this point.

Once a = K.0 (or equivalently a = K.gen()) is evaluated, the symbol "a" represents a root of the generating polynomial x^4 + 1.

2.13.4 Modular Forms

Sage can do some computations related to modular forms, including dimensions, computing spaces of modular symbols, Hecke operators, and decompositions.

There are several functions available for computing dimensions of spaces of modular forms. For example,

sage: dimension_cusp_forms(Gamma0(11),2)
1
sage: dimension_cusp_forms(Gamma0(1),12)
1
sage: dimension_cusp_forms(Gamma1(389),2)
6112

Next we illustrate computation of Hecke operators on a space of modular symbols of level 1 and weight 12.

sage: M = ModularSymbols(1,12)
sage: M.basis()
([X^8*Y^2,(0,0)], [X^9*Y,(0,0)], [X^10,(0,0)])
sage: t2 = M.T(2)
sage: t2
Hecke operator T_2 on Modular Symbols space of dimension 3 for Gamma_0(1) of weight 12 with sign 0 over Rational Field
sage: t2.matrix()
[ -24    0    0]
[   0  -24    0]
[4860    0 2049]
sage: f = t2.charpoly('x'); f
x^3 - 2001*x^2 - 97776*x - 1180224
sage: factor(f)
(x - 2049) * (x + 24)^2
sage: M.T(11).charpoly('x').factor()
(x - 285311670612) * (x - 534612)^2
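The factorization Sage reports for the characteristic polynomial of T_2 can be double-checked by hand: expanding (x - 2049)(x + 24)^2 with a small coefficient-list convolution in plain Python recovers x^3 - 2001*x^2 - 97776*x - 1180224. This is purely an illustrative cross-check, not how Sage computes it:

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

factor1 = [-2049, 1]   # x - 2049
factor2 = [24, 1]      # x + 24
product = poly_mul(poly_mul(factor1, factor2), factor2)
# Coefficients of x^3 - 2001*x^2 - 97776*x - 1180224, lowest degree first:
print(product)   # [-1180224, -97776, -2001, 1]
```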

We can also create spaces for Γ0(N) and Γ1(N).

sage: ModularSymbols(11,2)
Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign 0 over Rational Field
sage: ModularSymbols(Gamma1(11),2)
Modular Symbols space of dimension 11 for Gamma_1(11) of weight 2 with sign 0 and over Rational Field

Let's compute some characteristic polynomials and q-expansions.

sage: M = ModularSymbols(Gamma1(11),2)
sage: M.T(2).charpoly('x')
x^11 - 8*x^10 + 20*x^9 + 10*x^8 - 145*x^7 + 229*x^6 + 58*x^5 - 360*x^4 + 70*x^3 - 515*x^2 + 1804*x - 1452
sage: M.T(2).charpoly('x').factor()
(x - 3) * (x + 2)^2 * (x^4 - 7*x^3 + 19*x^2 - 23*x + 11) * (x^4 - 2*x^3 + 4*x^2 + 2*x + 11)
sage: S = M.cuspidal_submodule()
sage: S.T(2).matrix()
[-2  0]
[ 0 -2]
sage: S.q_expansion_basis(10)
[
    q - 2*q^2 - q^3 + 2*q^4 + q^5 + 2*q^6 - 2*q^7 - 2*q^9 + O(q^10)
]

We can even compute spaces of modular symbols with character.

sage: G = DirichletGroup(13)
sage: e = G.0^2
sage: M = ModularSymbols(e,2); M
Modular Symbols space of dimension 4 and level 13, weight 2, character [zeta6], sign 0, over Cyclotomic Field of order 6 and degree 2
sage: M.

T(2).charpoly('x').factor()
(x - 2*zeta6 - 1) * (x - zeta6 - 2) * (x + zeta6 + 1)^2
sage: S = M.cuspidal_submodule(); S
Modular Symbols subspace of dimension 2 of Modular Symbols space of dimension 4 and level 13, weight 2, character [zeta6], sign 0, over Cyclotomic Field of order 6 and degree 2
sage: S.T(2).charpoly('x').factor()
(x + zeta6 + 1)^2
sage: S.q_expansion_basis(10)
[
    q + (-zeta6 - 1)*q^2 + (2*zeta6 - 2)*q^3 + zeta6*q^4 + (-2*zeta6 + 1)*q^5 + (-2*zeta6 + 4)*q^6 + (2*zeta6 - 1)*q^8 - zeta6*q^9 + O(q^10)
]

Here is another example of how Sage can compute the action of Hecke operators on a space of modular forms.

sage: T = ModularForms(Gamma0(11),2)
sage: T
Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
sage: T.degree()
2
sage: T.level()
11
sage: T.group()
Congruence Subgroup Gamma0(11)
sage: T.dimension()
2
sage: T.cuspidal_subspace()
Cuspidal subspace of dimension 1 of Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
sage: T.eisenstein_subspace()
Eisenstein subspace of dimension 1 of Modular Forms space of dimension 2 for Congruence Subgroup Gamma0(11) of weight 2 over Rational Field
sage: M = ModularSymbols(11); M
Modular Symbols space of dimension 3 for Gamma_0(11) of weight 2 with sign 0 over Rational Field
sage: M.weight()
2
sage: M.basis()
((1,0), (1,8), (1,9))
sage: M.sign()
0

Let T_p denote the usual Hecke operators (p prime). How do the Hecke operators T_2, T_3, T_5 act on the space of modular symbols?

sage: M.T(2).matrix()
[ 3  0 -1]
[ 0 -2  0]
[ 0  0 -2]
sage: M.T(3).matrix()
[ 4  0 -1]
[ 0 -1  0]
[ 0  0 -1]
sage: M.T(5).matrix()
[ 6  0 -1]
[ 0  1  0]
[ 0  0  1]

CHAPTER THREE

THE INTERACTIVE SHELL

In most of this tutorial, we assume you start the Sage interpreter using the sage command. This starts a customized version of the IPython shell, and imports many functions and classes, so they are ready to use from the command prompt. Further customization is possible by editing the $SAGE_ROOT/ipythonrc file.

Upon starting Sage, you get output similar to the following:

----------------------------------------------------------------------
| SAGE Version 3.1.1, Release Date: 2008-05-24                       |
| Type notebook() for the GUI, and license() for information.        |
----------------------------------------------------------------------
sage:

To quit Sage either press Ctrl-D or type quit or exit.

sage: quit
Exiting SAGE (CPU time 0m0.00s, Wall time 0m0.89s)

The wall time is the time that elapsed on the clock hanging from your wall. This is relevant, since CPU time does not track time used by subprocesses like GAP or Singular. (Avoid killing a Sage process with kill -9 from a terminal, since Sage might not kill child processes, e.g., Maple processes, or clean up temporary files from $HOME/.sage/tmp.)

3.1 Your Sage Session

The session is the sequence of input and output from when you start Sage until you quit. Sage logs all Sage input, via IPython. In fact, if you're using the interactive shell (not the notebook interface), then at any point you may type %history (or %hist) to get a listing of all input lines typed so far. You can type ? at the Sage prompt to find out more about IPython, e.g., "IPython offers numbered prompts with input and output caching. All input is saved and can be retrieved as variables (besides the usual arrow key recall). The following GLOBAL variables always exist (so don't overwrite them!)":

_: previous input (interactive shell and notebook)
__: next previous input (interactive shell only)
_oh : list of all inputs (interactive shell only)

Here is an example:

sage: factor(100)
_1 = 2^2 * 5^2
sage: kronecker_symbol(3,5)
_2 = -1
sage: %hist   # This only works from the interactive shell, not the notebook.
1: factor(100)
2: kronecker_symbol(3,5)
3: %hist
sage: _oh
_4 = {1: 2^2 * 5^2, 2: -1}
sage: _i1
_5 = 'factor(ZZ(100))'
sage: eval(_i1)
_6 = 2^2 * 5^2
sage: %hist
1: factor(100)
2: kronecker_symbol(3,5)
3: %hist
4: _oh
5: _i1
6: eval(_i1)
7: %hist

We omit the output numbering in the rest of this tutorial and the other Sage documentation.

You can also store a list of input from a session in a macro for that session.

sage: E = EllipticCurve([1,2,3,4,5])
sage: M = ModularSymbols(37)
sage: %hist
1: E = EllipticCurve([1,2,3,4,5])
2: M = ModularSymbols(37)
3: %hist
sage: %macro em 1-2
Macro 'em' created.

To execute, type its name (without quotes).

sage: E
Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 5 over Rational Field
sage: E = 5
sage: M = None
sage: em
Executing Macro
sage: E
Elliptic Curve defined by y^2 + x*y + 3*y = x^3 + 2*x^2 + 4*x + 5 over Rational Field

When using the interactive shell, any UNIX shell command can be executed from Sage by prefacing it by an exclamation point !. For example,

sage: !ls
auto  example.sage  glossary.tex  t  tmp  tut.log  tut.tex

returns the listing of the current directory.

The PATH has the Sage bin directory at the front, so if you run gp, gap, singular, maxima, etc., you get the versions included with Sage.

sage: !gp
Reading GPRC: /etc/gprc ...Done.
          GP/PARI CALCULATOR Version 2.2.11 (alpha)
  i686 running linux (ix86/GMP-4.1.4 kernel) 32-bit version
sage: !singular
                     SINGULAR
 A Computer Algebra System for Polynomial Computations
 by: G.-M. Greuel, G. Pfister, H. Schoenemann
 FB Mathematik der Universitaet, D-67653 Kaiserslautern
 Development version 3-0-1, October 2005

3.2 Logging Input and Output

Logging your Sage session is not the same as saving it (see Saving and Loading Complete Sessions for that).

To log input (and optionally output) use the logstart command. Type logstart? for more details. You can use this command to log all input you type, all output, and even play back that input in a future session (by simply reloading the log file).

[email protected]:~$ sage
----------------------------------------------------------------------
| SAGE Version 3.0.2, Release Date: 2008-05-24                       |
| Type notebook() for the GUI, and license() for information.        |
----------------------------------------------------------------------
sage: logstart setup
Activating auto-logging. Current session state plus future input saved.

Filename       : setup
Mode           : backup
Output logging : False
Timestamping   : False
State          : active
sage: E = EllipticCurve([1,2,3,4,5]).minimal_model()
sage: F = QQ^3
sage: x,y = QQ['x,y'].gens()
sage: G = E.gens()
sage: quit
Exiting SAGE (CPU time 0m0.61s, Wall time 0m50.39s).
[email protected]:~$ sage
----------------------------------------------------------------------
| SAGE Version 3.0.2, Release Date: 2008-05-24                       |
| Type notebook() for the GUI, and license() for information.        |
----------------------------------------------------------------------
sage: load "setup"
Loading log file one line at a time...

Finished replaying log file
sage: E
Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 4*x + 3 over Rational Field
sage: x*y
x*y
sage: G
[(2 : 3 : 1)]

If you use Sage in the Linux KDE terminal konsole then you can save your session as follows: after starting Sage in konsole, select "settings", then "history...", then "set unlimited". When you are ready to save your session, select "edit" then "save history as..." and type in a name to save the text of your session to your computer. After saving this file, you could then load it into an editor, such as xemacs, and print it.

3.3 Paste Ignores Prompts

Suppose you are reading a session of Sage or Python computations and want to copy them into Sage. But there are annoying >>> or sage: prompts to worry about. In fact, you can copy and paste an example, including the prompts if you want, into Sage. In other words, by default the Sage parser strips any leading >>> or sage: prompt before passing it to Python. For example,

sage: 2^10
1024
sage: sage: sage: 2^10
1024
sage: >>> 2^10
1024

3.4 Timing Commands

If you place the %time command at the beginning of an input line, the time the command takes to run will be displayed after the output.

For example, we can compare the running time for a certain exponentiation operation in several ways. The timings below will probably be much different on your computer, or even between different versions of Sage. First, native Python:

sage: %time a = int(1938)^int(99484)
CPU times: user 0.66 s, sys: 0.00 s, total: 0.66 s
Wall time: 0.66

This means that 0.66 seconds total were taken, and the "Wall time", i.e., the amount of time that elapsed on your wall clock, is also 0.66 seconds. If your computer is heavily loaded with other programs, the wall time may be much larger than the CPU time.
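The CPU-time versus wall-time distinction just described can be reproduced in plain Python with the standard time module; this is a generic sketch, not how Sage implements its timing commands:

```python
import time

def timed(thunk):
    """Return (result, cpu_seconds, wall_seconds) for calling thunk()."""
    c0, w0 = time.process_time(), time.perf_counter()
    result = thunk()
    return result, time.process_time() - c0, time.perf_counter() - w0

# A pure-CPU task (a smaller exponent than the tutorial's, for speed).
_, cpu, wall = timed(lambda: 1938 ** 9948)
print(cpu >= 0.0 and wall >= 0.0)   # True

# Sleeping consumes wall time but almost no CPU time; a large wall/CPU
# gap is exactly the hint that a command was waiting on a subprocess.
_, cpu, wall = timed(lambda: time.sleep(0.05))
print(wall >= 0.04)                 # True
```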

Next we time exponentiation using the native Sage Integer type, which is implemented (in Cython) using the GMP library:

sage: %time a = 1938^99484
CPU times: user 0.04 s, sys: 0.00 s, total: 0.04 s
Wall time: 0.04

Using the PARI C-library interface:

sage: %time a = pari(1938)^pari(99484)
CPU times: user 0.05 s, sys: 0.00 s, total: 0.05 s
Wall time: 0.05

GMP is better, but only slightly (as expected, since the version of PARI built for Sage uses GMP for integer arithmetic).

You can also time a block of commands using the cputime command, as illustrated below:

sage: t = cputime()
sage: a = int(1938)^int(99484)
sage: b = 1938^99484
sage: c = pari(1938)^pari(99484)
sage: cputime(t)   # somewhat random output
0.64

sage: cputime?
Return the time in CPU seconds since SAGE started, or with optional argument t, return the time since time t.
INPUT:
    t -- (optional) float, time in CPU seconds
OUTPUT:
    float -- time in CPU seconds

The walltime command behaves just like the cputime command, except that it measures wall time.

We can also compute the above power in some of the computer algebra systems that Sage includes.

In each case we execute a trivial command in the system, in order to start up the server for that program. The most relevant time is the wall time. However, if there is a significant difference between the wall time and the CPU time then this may indicate a performance issue worth looking into.

sage: time 1938^99484;
CPU times: user 0.01 s, sys: 0.00 s, total: 0.01 s
Wall time: 0.01
sage: gp(0)
0
sage: time g = gp('1938^99484')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.04
sage: maxima(0)
0
sage: time g = maxima('1938^99484')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.30
sage: kash(0)
0
sage: time g = kash('1938^99484')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.04
sage: mathematica(0)
0
sage: time g = mathematica('1938^99484')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.03
sage: maple(0)
0
sage: time g = maple('1938^99484')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 0.11
sage: gap(0)
0
sage: time g = gap.eval('1938^99484;;')
CPU times: user 0.00 s, sys: 0.00 s, total: 0.00 s
Wall time: 1.02

Note that GAP and Maxima are the slowest in this test (this was run on the machine sage.math.washington.edu). Because of the pexpect interface overhead, it is perhaps unfair to compare these to Sage, which is the fastest.

3.5 Other IPython tricks

As noted above, Sage uses IPython as its front end, and so you can use any of IPython's commands and features. You can read the full IPython documentation. Meanwhile, here are some fun tricks; these are called "Magic commands" in IPython:

• You can use %bg to run a command in the background, and then use jobs to access the results, as follows. (The comments "not tested" are here because the %bg syntax doesn't work well with Sage's automatic testing facility. If you type this in yourself, it should work as written.
This is of course most useful with commands which take a while to complete.)

sage: def quick(m): return 2*m
sage: %bg quick(20)   # not tested
Starting job # 0 in a separate thread.
sage: jobs.status()   # not tested
Completed jobs:
0 : quick(20)
sage: jobs[0].result  # the actual answer, not tested
40

Note that jobs run in the background don't use the Sage preparser; see The Pre-Parser: Differences between Sage and Python for more information.

One (perhaps awkward) way to get around this would be to run

sage: %bg eval(preparse('quick(20)'))   # not tested

It is safer and easier, though, to just use %bg on commands which don't require the preparser.

• You can use %edit (or %ed, or ed) to open an editor, if you want to type in some complex code. Before you start Sage, make sure that the EDITOR environment variable is set to your favorite editor (by putting export EDITOR=/usr/bin/emacs or export EDITOR=/usr/bin/vim or something similar in the appropriate place, like a .profile file). From the Sage prompt, executing %edit will open up the named editor. Then within the editor you can define a function:

def some_function(n):
    return n**2 + 3*n + 2

Save and quit from the editor. For the rest of your Sage session, you can then use some_function. If you want to modify it, type %edit some_function from the Sage prompt.

• If you have a computation and you want to modify its output for another use, perform the computation and type %rep: this will place the output from the previous command at the Sage prompt, ready for you to edit it.

sage: f(x) = cos(x)
sage: f(x).derivative(x)
-sin(x)

At this point, if you type %rep at the Sage prompt, you will get a new Sage prompt, followed by -sin(x), with the cursor at the end of the line.
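The function typed into the editor in the %edit example above is ordinary Python, so it can be checked directly outside Sage; since n^2 + 3n + 2 factors as (n + 1)(n + 2), a quick cross-check is easy:

```python
def some_function(n):
    return n**2 + 3*n + 2

print(some_function(0))   # 2
print(some_function(4))   # 30
# n^2 + 3n + 2 = (n + 1)(n + 2), so the two forms must agree everywhere:
print(all(some_function(n) == (n + 1) * (n + 2) for n in range(10)))   # True
```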

For more, type %quickref to get a quick reference guide to IPython. As of this writing (April 2011), Sage uses version 0.9.1 of IPython, and the documentation for its magic commands is available online.

3.6 Errors and Exceptions

When something goes wrong, you will usually see a Python "exception". Python even tries to suggest what raised the exception. Often you see the name of the exception, e.g., NameError or ValueError (see the Python Reference Manual [Py] for a complete list of exceptions). For example,

sage: 3_2
------------------------------------------------------------
   File "", line 1
     ZZ(3)_2
            ^
SyntaxError: invalid syntax

sage: EllipticCurve([0,infinity])
------------------------------------------------------------
Traceback (most recent call last):
...
TypeError: Unable to coerce Infinity () to Rational

The interactive debugger is sometimes useful for understanding what went wrong. You can toggle it on or off using %pdb (the default is off). The prompt ipdb> appears if an exception is raised and the debugger is on. From within the debugger, you can print the state of any local variable, and move up and down the execution stack.

For example,

sage: %pdb
Automatic pdb calling has been turned ON
sage: EllipticCurve([1,infinity])
---------------------------------------------------------------------------
Traceback (most recent call last)
...
ipdb>

For a list of commands in the debugger, type ? at the ipdb> prompt:

ipdb> ?

Documented commands (type help <topic>):
========================================
EOF, a, alias, args, b, break, bt, c, cl, clear, commands, condition,
cont, continue, d, debug, disable, down, enable, exit, h, help, ignore,
j, jump, list, n, next, p, pdef, pdoc, pinfo, pp, q, quit, r, return,
s, step, tbreak, u, unalias, up, w, whatis, where

Miscellaneous help topics: exec, pdb
Undocumented commands: retval, rv

Type Ctrl-D or quit to return to Sage.

3.7 Reverse Search and Tab Completion

Reverse search: Type the beginning of a command, then Ctrl-p (or just hit the up arrow key) to go back to each line you have entered that begins in that way. This works even if you completely exit Sage and restart later. You can also do a reverse search through the history using Ctrl-r. All these features use the readline package, which is available on most flavors of Linux.

To illustrate tab completion, first create the three dimensional vector space V = Q^3 as follows:

sage: V = VectorSpace(QQ,3)
sage: V
Vector space of dimension 3 over Rational Field

You can also use the following more concise notation:

sage: V = QQ^3

Then it is easy to list all member functions for V using tab completion. Just type V., then press the [tab key] on your keyboard:

sage: V.[tab key]
V._VectorSpace_generic__base_field
V.ambient_space
V.base_field
V.base_ring
V.basis
V.coordinates
V.zero_vector

If you type the first few letters of a function, then [tab key], you get only functions that begin as indicated.

sage: V.i[tab key]
V.is_ambient  V.is_dense  V.is_full  V.is_sparse

If you wonder what a particular function does, e.g., the coordinates function, type V.coordinates? for help or V.coordinates?? for the source code, as explained in the next section.

3.8 Integrated Help System

Sage features an integrated help facility. Type a function name followed by ? for the documentation for that function.

sage: V = QQ^3
sage: V.coordinates?
Type:        instancemethod
Base Class:
String Form:
Namespace:   Interactive
File:        /home/was/s/local/lib/python2./site-packages/sage/modules/free_module.py
Definition:  V.coordinates(self, v)
Docstring:
    Write v in terms of the basis for self. Returns a list c such that
    if B is the basis for self, then sum c_i B_i = v. If v is not in
    self, raises an ArithmeticError exception.

    EXAMPLES:
    sage: M = FreeModule(IntegerRing(), 2); M0,M1=M.gens()
    sage: W = M.submodule([M0 + M1, M0 - 2*M1])
    sage: W.coordinates(2*M0-M1)
    [2, -1]

As shown above, the output tells you the type of the object, the file in which it is defined, and a useful description of the function with examples that you can paste into your current session.

Almost all of these examples are regularly automatically tested to make sure they work and behave exactly as claimed. Another feature that is very much in the spirit of the open source nature of Sage is that if f is a Python function, then typing f?? displays the source code that defines f. For example,

sage: V = QQ^3
sage: V.coordinates??
Type: instancemethod
Source:
def coordinates(self, v):
    """
    Write $v$ in terms of the basis for self.
    """
    return self.coordinate_vector(v).list()

This tells us that all the coordinates function does is call the coordinate_vector function and change the result into a list. What does the coordinate_vector function do?

sage: V = QQ^3
sage: V.coordinate_vector??
...
def coordinate_vector(self, v):
    return self.ambient_vector_space()(v)

The coordinate_vector function coerces its input into the ambient space, which has the effect of computing the vector of coefficients of v in terms of V. The space V is already ambient since it's just Q^3. There is also a coordinate_vector function for subspaces, and it's different.

We create a subspace and see:

sage: V = QQ^3; W = V.span_of_basis([V.0, V.1])
sage: W.coordinate_vector??
...
def coordinate_vector(self, v):
    """
    """
    # First find the coordinates of v wrt echelon basis.
    w = self.echelon_coordinate_vector(v)
    # Next use transformation matrix from echelon basis to
    # user basis.
    T = self.echelon_to_user_matrix()
    return T.linear_combination_of_rows(w)

(If you think the implementation is inefficient, please sign up to help optimize linear algebra.)

You may also type help(command_name) or help(class) for a manpage-like help file about a given class.

sage: help(VectorSpace)
Help on class VectorSpace ...

class VectorSpace(__builtin__.object)
 |  Create a Vector Space.
 |
 |  To create an ambient space over a field with given dimension
 |  using the calling syntax ...
:

When you type q to exit the help system, your session appears just as it was. The help listing does not clutter up your session, unlike the output of function_name? sometimes does. It's particularly helpful to type help(module_name). For example, vector spaces are defined in sage.modules.free_module, so type help(sage.modules.free_module) for documentation about that whole module. When viewing documentation using help, you can search by typing / and in reverse by typing ?.

3.9 Saving and Loading Individual Objects

Suppose you compute a matrix or, worse, a complicated space of modular symbols, and would like to save it for later use. What can you do? There are several approaches that computer algebra systems take to saving individual objects.

1. Save your Game: Only support saving and loading of complete sessions (e.g., GAP, Magma).
2. Unified Input/Output: Make every object print in a way that can be read back in (GP/PARI).
3.

Eval: Make it easy to evaluate arbitrary code in the interpreter (e.g., Singular, PARI).

Because Sage uses Python, it takes a different approach, which is that every object can be serialized, i.e., turned into a string from which that object can be recovered. This is in spirit similar to the unified I/O approach of PARI, except it doesn't have the drawback that objects print to screen in too complicated of a way. Also, support for saving and loading is (in most cases) completely automatic, requiring no extra programming; it's simply a feature of Python that was designed into the language from the ground up.
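Since the mechanism is ordinary Python pickling, the idea can be sketched in plain Python. This is a minimal sketch using the standard-library pickle module; the compression and .sobj file handling that Sage adds on top are omitted, and the sample object is our own, not one of Sage's.

```python
import pickle

# Any picklable object can be turned into a byte string...
data = {"name": "A", "matrix": [[15, 18, 21], [42, 54, 66], [69, 90, 111]]}
blob = pickle.dumps(data, 2)  # protocol 2, as with cPickle.dumps(x, 2) below

# ...which can be written to disk and read back later; loading
# reconstructs an equal object with no extra programming.
restored = pickle.loads(blob)
assert restored == data
```

The serialized blob is an opaque byte string, which is exactly what lets it survive a trip to disk and back.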

Almost all Sage objects x can be saved in compressed form to disk using save(x, filename) (or in many cases x.save(filename)). To load the object back in, use load(filename).

sage: A = MatrixSpace(QQ,3)(range(9))^2
sage: A
[ 15  18  21]
[ 42  54  66]
[ 69  90 111]
sage: save(A, 'A')

You should now quit Sage and restart. Then you can get A back:

sage: A = load('A')
sage: A
[ 15  18  21]
[ 42  54  66]
[ 69  90 111]

You can do the same with more complicated objects, e.g., elliptic curves. All data about the object that is cached is stored with the object. For example,

sage: E = EllipticCurve('11a')
sage: v = E.anlist(100000)  # takes a while
sage: save(E, 'E')
sage: quit

The saved version of E takes 153 kilobytes, since it stores the first 100000 a_n with it.

~/tmp$ ls -l E.sobj
-rw-r--r-- 1 was was 153500 2006-01-28 19:23 E.sobj
~/tmp$ sage
sage: E = load('E')
sage: v = E.anlist(100000)  # instant!

(In Python, saving and loading is accomplished using the cPickle module. In particular, a Sage object x can be saved via cPickle.dumps(x, 2). Note the 2!)

Sage cannot save and load individual objects created in some other computer algebra systems, e.g., GAP, Singular, Maxima, etc. They reload in a state marked "invalid". In GAP, though many objects print in a form from which they can be reconstructed, many don't, so reconstructing from their print representation is purposely not allowed.

sage: a = gap(2)
sage: a.save('a')
sage: load('a')
Traceback (most recent call last):
...
ValueError: The session in which this object was defined is no longer running.

GP/PARI objects can be saved and loaded since their print representation is enough to reconstruct them.

sage: a = gp(2)
sage: a.save('a')
sage: load('a')
2

Saved objects can be re-loaded later on computers with different architectures or operating systems, e.g., you could save a huge matrix on 32-bit OS X and reload it on 64-bit Linux, find the echelon form, then move it back. Also, in many cases you can even load objects into versions of Sage that are different than the versions they were saved in, as long as the code for that object isn't too different. All the attributes of the objects are saved, along with the class (but not source code) that defines the object. If that class no longer exists in a new version of Sage, then the object can't be reloaded in that newer version.

But you could load it in an old version, get the object's dictionary (with x.__dict__), save the dictionary, and load that into the newer version.

3.9.1 Saving as Text

You can also save the ASCII text representation of objects to a plain text file by simply opening a file in write mode and writing the string representation of the object (you can write many objects this way as well). When you're done writing objects, close the file.

sage: R.<x,y> = PolynomialRing(QQ,2)
sage: f = (x+y)^7
sage: o = open('file.txt','w')
sage: o.write(str(f))
sage: o.close()

3.10 Saving and Loading Complete Sessions

Sage has very flexible support for saving and loading complete sessions. The command save_session(sessionname) saves all the variables you've defined in the current session as a dictionary in the given sessionname. (In the rare case when a variable does not support saving, it is simply not saved to the dictionary.) The resulting file is an .sobj file and can be loaded just like any other object that was saved. When you load the objects saved in a session, you get a dictionary whose keys are the variable names and whose values are the objects. You can use the load_session(sessionname) command to load the variables defined in sessionname into the current session. Note that this does not wipe out variables you've already defined in your current session; instead, the two sessions are merged.

First we start Sage and define some variables.

sage: E = EllipticCurve('11a')
sage: M = ModularSymbols(37)
sage: a = 389
sage: t = M.T(2003).matrix(); t.charpoly().factor()
(x - 2004) * (x - 12)^2 * (x + 54)^2

Next we save our session, which saves each of the above variables into a file. Then we view the file, which is about 3K in size.

sage: save_session('misc')
Saving a
Saving M
Saving t
Saving E
sage: quit
~/tmp$ ls -l misc.sobj
-rw-r--r-- 1 was was 2979 2006-01-28 19:47 misc.sobj

Finally we restart Sage, define an extra variable, and load our saved session.

sage: b = 19
sage: load_session('misc')
Loading a
Loading M
Loading E
Loading t

Each saved variable is again available. Moreover, the variable b was not overwritten.

sage: M
Full Modular Symbols space for Gamma_0(37) of weight 2 with sign 0 and dimension 5 over Rational Field
sage: E
Elliptic Curve defined by y^2 + y = x^3 - x^2 - 10*x - 20 over Rational Field
sage: b
19
sage: a
389

3.11 The Notebook Interface

The Sage notebook is run by typing

sage: notebook()

on the command line of Sage. This starts the Sage notebook and opens your default web browser to view it. The server's state files are stored in $HOME/.sage/sage_notebook. Other options include:

sage: notebook("directory")

which starts a new notebook server using files in the given directory, instead of the default directory $HOME/.sage/sage_notebook. This can be useful if you want to have a collection of worksheets associated with a specific project, or run several separate notebook servers at the same time.

When you start the notebook, it first creates the following files in $HOME/.sage/sage_notebook:

nb.sobj      (the notebook SAGE object file)
objects/     (a directory containing SAGE objects)
worksheets/  (a directory containing SAGE worksheets)

After creating the above files, the notebook starts a web server. A "notebook" is a collection of user accounts, each of which can have any number of worksheets. When you create a new worksheet, the data that defines it is stored in the worksheets/username/number directories. In each such directory there is a plain text file worksheet.txt – if anything ever happens to your worksheets, or Sage, or whatever, that human-readable file contains everything needed to reconstruct your worksheet. From within Sage, type notebook? for much more about how to start a notebook server.
The following diagram illustrates the architecture of the Sage Notebook:

----------------------
|                    |
|  firefox/safari    |
|                    |
|    javascript      |
|     program        |
|                    |
----------------------
      |      ^
      | AJAX |
      V      |
----------------------
|                    |
|     sage web       | ----------->  SAGE process 1
|      server        |   pexpect     SAGE process 2
|                    |               SAGE process 3
----------------------            (Python processes)

For help on a Sage command, cmd, in the notebook browser box, type cmd? and now hit <esc> (not <shift-enter>). For help on the keyboard shortcuts available in the notebook interface, click on the Help link.

CHAPTER FOUR: INTERFACES

A central facet of Sage is that it supports computation with objects in many different computer algebra systems "under one roof" using a common interface and clean programming language.

The console and interact methods of an interface do very different things. For example, using GAP as an example:

1. gap.console(): This opens the GAP console – it transfers control to GAP. Here Sage is serving as nothing more than a convenient program launcher, similar to the Linux bash shell.
2. gap.interact(): This is a convenient way to interact with a running GAP instance that may be "full of" Sage objects. You can import Sage objects into this GAP session (even from the interactive interface), etc.

4.1 GP/PARI

PARI is a compact, very mature, highly optimized C program whose primary focus is number theory. There are two very distinct interfaces that you can use in Sage:

• gp – the "Go PARI" interpreter, and
• pari – the PARI C library.

For example, the following are two ways of doing the same thing. They look identical, but the output is actually different, and what happens behind the scenes is drastically different.

sage: gp('znprimroot(10007)')
Mod(5, 10007)
sage: pari('znprimroot(10007)')
Mod(5, 10007)

In the first case, a separate copy of the GP interpreter is started as a server, and the string 'znprimroot(10007)' is sent to it, evaluated by GP, and the result is assigned to a variable in GP (which takes up space in the child GP process's memory that won't be freed). Then the value of that variable is displayed.

In the second case, no separate program is started, and the string 'znprimroot(10007)' is evaluated by a certain PARI C library function. The result is stored in a piece of memory on the Python heap, which is freed when the variable is no longer referenced. The objects have different types:

sage: type(gp('znprimroot(10007)'))
sage: type(pari('znprimroot(10007)'))

So which should you use? It depends on what you're doing. The GP interface can do absolutely anything you could do in the usual GP/PARI command line program, since it is running that program. In particular, you can load complicated PARI programs and run them.

In contrast, the PARI interface (via the C library) is much more restrictive. First, not all member functions have been implemented. Second, a lot of code, e.g., involving numerical integration, won't work via the PARI interface. That said, the PARI interface can be significantly faster and more robust than the GP one. (If the GP interface runs out of memory evaluating a given input line, it will silently and automatically double the stack size and retry that input line. Thus your computation won't crash if you didn't correctly anticipate the amount of memory that would be needed.

This is a nice trick the usual GP interpreter doesn't seem to provide. Regarding the PARI C library interface, it immediately copies each created object off of the PARI stack, hence the stack never grows. However, each object must not exceed 100MB in size, or the stack will overflow when the object is being created. This extra copying does impose a slight performance penalty.)

In summary, Sage uses the PARI C library to provide functionality similar to that provided by the GP/PARI interpreter, except with different sophisticated memory management and the Python programming language.

First we create a PARI list from a Python list.

sage: v = pari([1,2,3,4,5])
sage: v
[1, 2, 3, 4, 5]
sage: type(v)

Every PARI object is of type py_pari.gen. The PARI type of the underlying object can be obtained using the type member function.

sage: v.type()
't_VEC'

In PARI, to create an elliptic curve we enter ellinit([1,2,3,4,5]). Sage is similar, except that ellinit is a method that can be called on any PARI object, e.g., our t_VEC v.

sage: e = v.ellinit()
sage: e.type()
't_VEC'
sage: pari(e)[:13]
[1, 2, 3, 4, 5, 9, 11, 29, 35, -183, -3429, -10351, 6128487/10351]

Now that we have an elliptic curve object, we can compute some things about it.

sage: e.elltors()
[1, [], []]
sage: e.ellglobalred()
[10351, [1, -1, 0, -1], 1]
sage: f = e.ellchangecurve([1,-1,0,-1])
sage: f[:5]
[1, -1, 0, 4, 3]

4.2 GAP

Sage comes with GAP 4.4.10 for computational discrete mathematics, especially group theory. Here's an example of GAP's IdGroup function, which uses the optional small groups database that has to be installed separately, as explained below.

sage: G = gap('Group((1,2,3)(4,5), (3,4))')
sage: G
Group( [ (1,2,3)(4,5), (3,4) ] )
sage: G.Center()
Group( () )
sage: G.IdGroup()  # requires optional database_gap package
[ 120, 34 ]
sage: G.Order()
120

We can do the same computation in Sage without explicitly invoking the GAP interface as follows:

sage: G = PermutationGroup([[(1,2,3),(4,5)],[(3,4)]])
sage: G.center()
Subgroup of (Permutation Group with generators [(3,4), (1,2,3)(4,5)]) generated by [()]
sage: G.group_id()  # requires optional database_gap package
[120, 34]
sage: n = G.order(); n
120

(For some GAP functionality, you should install two optional Sage packages. Type sage -optional for a list and choose the one that looks like gap_packages-x.y.z, then type sage -i gap_packages-x.y.z. Do the same for database_gap-x.y.z. Some non-GPL'd GAP packages may be installed by downloading them from the GAP web site [GAPkg], and unpacking them in $SAGE_ROOT/local/lib/gap-4.4.10/pkg.)

4.3 Singular

Singular provides a massive and mature library for Gröbner bases, multivariate polynomial gcds, bases of Riemann-Roch spaces of a plane curve, and factorizations, among other things. We illustrate multivariate polynomial factorization using the Sage interface to Singular (do not type the ....):

sage: R1 = singular.ring(0, '(x,y)', 'dp')
sage: R1
// characteristic : 0
// number of vars : 2
// block 1 : ordering dp
// : names x y
// block 2 : ordering C
sage: f = singular('9*y^8 - 9*x^2*y^7 - 18*x^3*y^6 - 18*x^5*y^6 + 9*x^6*y^4 + 18*x^7*y^5 + 36*x^8*y^4 + 9*x^10*y^4 - 18*x^11*y^2 - 9*x^12*y^3 - 18*x^13*y^2 + 9*x^16')

Now that we have defined f, we print it and factor.

sage: f
9*x^16-18*x^13*y^2-9*x^12*y^3+9*x^10*y^4-18*x^11*y^2+36*x^8*y^4+18*x^7*y^5-18*x^5*y^6+9*x^6*y^4-18*x^
sage: f.parent()
Singular
sage: F = f.factorize(); F
[1]:
   _[1]=9
   _[2]=x^6-2*x^3*y^2-x^2*y^3+y^4
   _[3]=-x^5+y^2
[2]:
   1,1,2
sage: F[1][2]
x^6-2*x^3*y^2-x^2*y^3+y^4

As with the GAP example above, we can compute the above factorization without explicitly using the Singular interface (however, behind the scenes Sage uses the Singular interface for the actual computation). Do not type the ....:

sage: x, y = QQ['x, y'].gens()
sage: f = 9*y^8 - 9*x^2*y^7 - 18*x^3*y^6 - 18*x^5*y^6 + 9*x^6*y^4 + 18*x^7*y^5 + 36*x^8*y^4 + 9*x^10*y^4 - 18*x^11*y^2 - 9*x^12*y^3 - 18*x^13*y^2 + 9*x^16
sage: factor(f)
(9) * (-x^5 + y^2)^2 * (x^6 - 2*x^3*y^2 - x^2*y^3 + y^4)

4.4 Maxima

Maxima is included with Sage, as well as a Lisp implementation.

The gnuplot package (which Maxima uses by default for plotting) is distributed as a Sage optional package. Among other things, Maxima does symbolic manipulation. Maxima can integrate and differentiate functions symbolically, solve first-order ODEs and most linear second-order ODEs, and has implemented the Laplace transform.


Data Security and Integrity: Software and Physical Restrictions

Introduction

Maintenance of data security and integrity in reference to:

Software Access Restrictions

These are put in place to protect computer software. A few forms of software access restrictions are as follows:

* Passwords

Definition: A string of characters that allows access to a computer, interface or system.

How does it assist in securing data and maintaining its integrity? When a person creates a password for access to a computer, folder, program etc., they are creating a code that must be entered every time they wish to access the software. This means that if any unknown or unauthorized person were to attempt to view the material without knowing the password, they would be unable to do so, thus securing the data.
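The password check just described can be sketched in Python. This is a minimal sketch, not any particular product's scheme; the function names are our own, and a real system would use a maintained authentication library rather than hand-rolled code.

```python
import hashlib
import hmac
import os

def make_record(password: str) -> tuple:
    # Store a random salt plus a slow salted hash, never the password itself.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def check_password(attempt: str, record: tuple) -> bool:
    salt, digest = record
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

record = make_record("s3cret!")
assert check_password("s3cret!", record)    # authorized user gains access
assert not check_password("guess", record)  # unknown party is denied
```

Storing only the salted hash means that even someone who steals the record cannot read the password back out of it.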

The integrity of the data is also highly protected when using a password, because if any unknown or unauthorized person attempts to access password-protected data without knowing the password, they will be denied access. Thus the data cannot be altered in any way and its trustworthiness remains the same.

* Data Encryption

Definition: The encoding of data for security purposes.

How does it assist in securing data and maintaining its integrity? By encrypting, we change the original plaintext version of data into ciphertext, an unreadable format that protects it against unauthorized parties.
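The plaintext-to-ciphertext transformation can be sketched with a toy cipher. This is illustrative only: the XOR scheme and key derivation below are our own and are not secure; real systems use vetted ciphers such as AES-256.

```python
import hashlib
from itertools import cycle

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    # Derive a fixed keystream from the key (toy construction, NOT secure).
    # XOR turns readable plaintext into unreadable ciphertext, and because
    # XOR is its own inverse, applying it again with the same key decrypts.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ k for b, k in zip(data, cycle(stream)))

secret = toy_encrypt(b"account balance: 1200", b"my key")
assert secret != b"account balance: 1200"                          # unreadable
assert toy_encrypt(secret, b"my key") == b"account balance: 1200"  # key decrypts
assert toy_encrypt(secret, b"wrong key") != b"account balance: 1200"
```

The same structure, with a real cipher and a properly managed key, is what keeps intercepted data unreadable.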

Only those who possess the key that triggers the algorithm to decrypt the data, making it readable again, can access it. Higher-bit encryption is more secure than lower-bit encryption; for example, 256-bit encryption is much more secure than 128-bit encryption because a hacker must try out far more possibilities when attempting to breach it. Once data is encrypted, its integrity is safeguarded as long as the encryption isn't breached by a hacker or accessed by an unauthorized party who somehow obtained the key and was able to decrypt the data.

* Virus Protection

Definition: The protecting of a system from a file that replicates itself without the consent of the user.

How does it assist in securing data and maintaining its integrity? Typical anti-virus software protects a computer system from viruses, Trojan horses, worms etc. by routinely or manually scanning files and programs to check for the aforementioned malware; if any malicious content is found, it either notifies the user of its presence and suggests steps that can be taken to remove it, or automatically starts doing so by itself.
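The scanning step described above can be sketched as signature matching. This is a simplification: the hash set and file contents below are invented for illustration, and real scanners also use heuristics and pattern matching, not just whole-file hashes.

```python
import hashlib

# Hypothetical signature database: hashes of known-malicious content.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"evil payload").hexdigest(),
}

def scan(content: bytes) -> bool:
    """Return True if the content matches a known-malware signature."""
    return hashlib.sha256(content).hexdigest() in KNOWN_MALWARE_HASHES

assert scan(b"evil payload")          # flagged, so it can be removed
assert not scan(b"quarterly report")  # clean file passes the scan
```

A routine scan is just this check applied to every file on the system, with the signature database kept up to date.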

Any malware that is located early enough by anti-virus software can usually be removed before it causes any irreversible damage to data. There are, however, viruses that take effect almost immediately and corrupt data very quickly, before the virus protection can act or even notice them; in such cases better virus protection software is necessary.

* Firewall

Definition: An integrated collection of security measures designed to prevent unauthorized electronic access to a networked computer system.

How does it assist in securing data and maintaining its integrity? A firewall protects a computer system or network from malicious activity originating from the internet, e.g. hackers, viruses and Trojan horses. It does so by filtering incoming packets of data to decide which will be let through the firewall and which will be discarded. This means that data already on the computer or network is better protected against hackers, viruses etc., and any incoming data will be 'clean', i.e. without any malicious software attached.
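The packet-filtering decision just described can be sketched as an ordered rule list. The rule format and field names here are illustrative, not those of any real firewall product.

```python
# First matching rule wins; the final catch-all rule makes the default "deny".
RULES = [
    {"action": "allow", "port": 443},  # permit incoming HTTPS traffic
    {"action": "allow", "port": 22},   # permit SSH (example policy)
    {"action": "deny",  "port": None}, # discard everything else
]

def filter_packet(packet: dict) -> bool:
    """Return True if the packet may pass the firewall."""
    for rule in RULES:
        if rule["port"] is None or rule["port"] == packet["dst_port"]:
            return rule["action"] == "allow"
    return False  # fail closed if no rule matches

assert filter_packet({"dst_port": 443})       # expected traffic is let through
assert not filter_packet({"dst_port": 6667})  # unsolicited traffic is discarded
```

Failing closed (deny anything unmatched) is the design choice that keeps unexpected traffic out.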

Firewalls assist in maintaining data integrity through their ability to filter data. As long as the firewall prevents malware from entering a computer system or network, the data will not be adversely affected, thus maintaining its trustworthiness.

Physical Access Restrictions

These consist of biometric systems as well as other forms of physical access restrictions that are used to protect data. A few forms of physical access restrictions are:

* Voice Recognition

Definition: A device used for identifying individuals by the sound of their voice.

How does it assist in securing data and maintaining its integrity? Voice recognition is a biometric system, i.e. it identifies individuals by a unique human characteristic: their voice. It aids in protecting data as follows: if someone wishes to gain access to something, in this case data, they must verify to a computer that they are permitted to view or manipulate the data by speaking. If they are not registered for access to the data, the computer denies them the ability to view or interact with it, thereby maintaining its integrity.

If the speaker is registered with the voice recognition system, then it grants the speaker access. This allows data to be viewed by authorized personnel only. Voice recognition is considered more secure than passwords because, instead of depending on a secret string of characters to grant entry to the data, it identifies people by their voice, hence removing the possibility of guessing.

* Retinal Scan

Definition: The biometric identification of individuals by scanning the retina of the eye.

How does it assist in securing data and maintaining its integrity? A retinal scan operates similarly to voice recognition, as both are biometric systems. It secures data as follows: the retina of the eye, which is unique to everyone, is scanned. The computer can identify people using camera technology that differentiates individuals from one another by the subtle differences in their retinas. If the person is recognized as authorized personnel, then they are allowed to view the data.

If, however, they are not authorized to view the data, they will be denied access to it, as well as the ability to manipulate it; this allows for the safekeeping of data and the maintenance of its integrity.

* Fingerprint Recognition

Definition: The automated method of verifying a match between two human fingerprints.

How does it assist in securing data and maintaining its integrity? This is another biometric system, and the most popular one at that. Fingerprint recognition is widely used for the protection of data because of its accuracy and dependability in identification.

Just as there are no two people who share the exact same voice or retina, there are no two people who share the exact same fingerprints. Because of this, fingerprint recognition can be used to grant someone access to data once the person places his or her finger onto the scanning device. If the fingerprints match those that are registered, then he/she is allowed access to the data; otherwise access is denied.

* Fireproof Cabinets

Definition: A fire-resistant cupboard/box which can house computer hardware that contains data.

How does it assist in securing data and maintaining its integrity?

Data lives in software, but software cannot function without hardware. Therefore, in order to protect the software which houses data, we must also protect the hardware which houses the software. Fireproof cabinets are an excellent way of doing so. They help secure the hardware by giving it a dedicated place so that it is not lost or misplaced. Also, if there is a fire, which could potentially be disastrous, the fire-resistant cabinets will protect the hardware from being destroyed, while also indirectly protecting the data.

Summary

Bibliography


Software Testing and School University Graduate

Resume/CV Template

Contact information: Name, Address, Telephone, Cell Phone, Email

Personal information: Date of Birth, Sex

Optional personal information: Marital Status, Spouse's Name, Children

Employment history (list in chronological order; include position details and dates): Work History, Academic Positions, Research and Training

Education (include dates, majors, and details of degrees, training and certification): High School, University, Graduate School, Post-Doctoral Training

Professional qualifications: Certifications and Accreditations, Computer Skills

Awards – if any
Publications – if any
Professional memberships – if any
Interests

Sample academic recommendation letter

DATE: 20th August 2010

From
Mr. Your professor name
Lecturer, Department of Science,
Your College,
Chennai – 600018, India.

| To whomsoever it may concern |

Mr. Your name was my student during his undergraduate program. He is an intelligent, hardworking and motivated student. His power of assimilation and his ability to grasp new concepts are good. His enthusiasm for work was conspicuous, and he has proved himself to be a natural leader.

He also has good English language skills and has taken part in many debates and other literary activities. He has won many prizes in state-level inter-collegiate contests. I am very sure that he will put forth his full effort in any task he confronts. His positive outlook, his capability to work with peers and his willingness to learn from experimental situations bear testimony that he will do very well in postgraduate studies at your renowned institution. I strongly recommend Mr. Your name for admission to the postgraduate program of your university.

Thanking you,
Yours sincerely,
(Mr. Your Lecturer name)

Sample work recommendation letter

DATE: 12th June 2010

From
Mr. Your professor name
Project Leader, Department of IT,
Your Employer,
Chennai – 600018, India.

| To whomsoever it may concern |

It is my great pleasure to write a letter of recommendation on behalf of Mr. Your name for admission into your postgraduate program. I have known Mr. Your name since July 2007 in my capacity as senior HR executive with Your company name Pvt. Ltd. Mr.

Your name has exceeded expectations and has outshone others in his work group. Mr. Your name's strong work ethic, his ability to outperform and his passion for service excellence will be a valuable addition to your program. I am confident that Mr. Your name will be a serious and enthusiastic candidate, and someday a quite successful senior-level manager or entrepreneur whom you would be proud to call an alumnus. If you need any additional information, please feel free to contact me over the phone or via email.

Sincerely,
(YOUR EMPLOYER NAME)

Statement of purpose guidelines

Review your essay by asking yourself the following questions: Are my goals well articulated? Do I explain why I have selected this school and/or program? Do I demonstrate knowledge of the program? Do I include interesting details that prove my claims about myself? Is my tone confident?

Do's & Don'ts

Don'ts
1. Give the SOP due importance; most applicants take it far too lightly.
2. Don't underestimate the length of time it will take to write your statement of purpose.
3.

Don’t have your SOP written by someone else.
4. Don’t include all your activities; only the most important ones are enough.
5. Don’t use the name of any particular university; always prepare a general SOP that can be used for many universities.
6. Avoid lengthy personal or philosophical discussions unless the instructions specifically ask for them.
7. Don’t exceed two pages.
8. Avoid grammatical or spelling mistakes; they show carelessness in writing the SOP.

Do’s
1. Always prepare a rough draft or outline of topics.

Typically the topics will include professional career goals, academic interests, research experience, practical experience, special skill sets and the reason for choosing the course.
2. Always find the course requirements from the university and stick to the points specified by the university.
3. Emphasize everything from a positive perspective and write in an active voice.
4. Demonstrate your skill sets through experience.
5. Your essay should be well organised, with everything linked with continuity and focus.
6. Pay extra attention to the first paragraph.
7.

Ask others to proofread your statement of purpose.

Sample Statement of Purpose

I am glad to introduce myself as Mr. YOUR NAME, a Software Engineer at present, with 5 years of experience in Software Testing, Quality Analysis and Management. My career is my passion, and it holds my fullest devotion, dedication and commitment. I belong to the field of IT Services Management, which had been my dream, my obsession and the long-term desire that had lingered in me for a long while. And finally, when I got into it, I could give nothing but my best.

To add more value to the same, I have decided to do my Masters Degree at a reputed institution which would give me not only a degree but also a new style of learning with international standards, innovative methods of self-development and the ability to survive among the fittest. Hence, guided by the suggestions of experts, I was left with no better choice than to join you, Bolton University, which I believe will shape me and make me fit for IT Services Management and for progress in the modern globalized culture, technology and era.

I hold an undergraduate degree, B.Sc. in Computer Science, which I completed in the period June 2001 to May 2004 from YOUR University, Tamil Nadu. I pursued my UG degree at YOUR College, Rajapalayam. The syllabus covered there gave me immense knowledge of Software Development, Software Building, Software Techniques, Hardware Configurations, Mathematical skills and Personality Development. I was elected Students’ Chairman in my final year, 2003-2004, which inspired me to learn more about management skills.

I was an ardent speaker, athlete and player at my college and finally passed out with prominence, winning the “Best Outgoing Student” award in my final year. As a starting point in my career, I got employed as a Software Test Engineer with YOUR EMPLOYER technologies, Chennai, in June 2004. It was here that my zeal took its initial contour. I was showered with surplus opportunities to learn, to explore, to build, to experiment, to renovate and to give a shape to myself. I learnt the concepts of IT services practically.

I gained buoyancy in myself. And with that hope I moved to Accenture YOUR EMPLOYER Pvt Ltd, Chennai, in August 2006. I was promoted to Senior Programmer. From that time until now, I keep renewing myself to the changes in technologies, to novel ways of working, to the exciting facets of Software Testing and many more. I got certified with the National Stock Exchange of India in Financial Services and Capital Markets. I got certified with the IBM Services in Rational Functional Tester tool.

I got certified with the HP Services in Quality Center – Defect Management tool and the Quick Test Professional tool. I am also certified with the International Software Testing Qualification Board as a certified Manual Software Tester. I have won the Celebrating Performance Award from Accenture thrice, for having achieved extreme satisfaction levels from the clients, for building my technical skills and for the professionalism I display in my job. But my journey towards success is still a few more miles away.

I need to sparkle in my career with a Masters Degree in my relevant field, Services Management, without which my career would not be fulfilled. For this to occur, I need your help, your support and your guidance. Bolton University gives its students a degree with a dignified knowledge of survival amidst global standards and also teaches the professional development skills of creative leadership. I wish to be a part of you to develop myself in many such areas. Therefore, I request you to accept my purpose and let me move ahead in my career with more confidence and venerable knowledge.


Why Software Should Be Free

by Richard Stallman (Version of April 24, 1992)

Introduction

The existence of software inevitably raises the question of how decisions about its use should be made. For example, suppose one individual who has a copy of a program meets another who would like a copy. It is possible for them to copy the program; who should decide whether this is done? The individuals involved? Or another party, called the “owner”? Software developers typically consider these questions on the assumption that the criterion for the answer is to maximize developers’ profits.

The political power of business has led to the government adoption of both this criterion and the answer proposed by the developers: that the program has an owner, typically a corporation associated with its development. I would like to consider the same question using a different criterion: the prosperity and freedom of the public in general. This question cannot be decided by current law–the law should conform to ethics, not the other way around. Nor does current practice decide this question, although it may suggest possible answers.

The only way to judge is to see who is helped and who is hurt by recognizing owners of software, why, and how much. In other words, we should perform a cost-benefit analysis on behalf of society as a whole, taking account of individual freedom as well as production of material goods. In this essay, I will describe the effects of having owners, and show that the results are detrimental. My conclusion is that programmers have the duty to encourage others to share, redistribute, study, and improve the software we write: in other words, to write “free” software.(1)

How Owners Justify Their Power

Those who benefit from the current system where programs are property offer two arguments in support of their claims to own programs: the emotional argument and the economic argument. The emotional argument goes like this: “I put my sweat, my heart, my soul into this program. It comes from me, it’s mine!” This argument does not require serious refutation. The feeling of attachment is one that programmers can cultivate when it suits them; it is not inevitable. Consider, for example, how willingly the same programmers usually sign over all rights to a large corporation for a salary; the emotional attachment mysteriously vanishes. By contrast, consider the great artists and artisans of medieval times, who didn’t even sign their names to their work. To them, the name of the artist was not important. What mattered was that the work was done–and the purpose it would serve. This view prevailed for hundreds of years. The economic argument goes like this: “I want to get rich (usually described inaccurately as ‘making a living’), and if you don’t allow me to get rich by programming, then I won’t program.

Everyone else is like me, so nobody will ever program. And then you’ll be stuck with no programs at all! ” This threat is usually veiled as friendly advice from the wise. I’ll explain later why this threat is a bluff. First I want to address an implicit assumption that is more visible in another formulation of the argument. This formulation starts by comparing the social utility of a proprietary program with that of no program, and then concludes that proprietary software development is, on the whole, beneficial, and should be encouraged.

The fallacy here is in comparing only two outcomes–proprietary software vs. no software–and assuming there are no other possibilities. Given a system of software copyright, software development is usually linked with the existence of an owner who controls the software’s use. As long as this linkage exists, we are often faced with the choice of proprietary software or none. However, this linkage is not inherent or inevitable; it is a consequence of the specific social/legal policy decision that we are questioning: the decision to have owners.

To formulate the choice as between proprietary software vs. no software is begging the question.

The Argument against Having Owners

The question at hand is, “Should development of software be linked with having owners to restrict the use of it?” In order to decide this, we have to judge the effect on society of each of those two activities independently: the effect of developing the software (regardless of its terms of distribution), and the effect of restricting its use (assuming the software has been developed).

If one of these activities is helpful and the other is harmful, we would be better off dropping the linkage and doing only the helpful one. To put it another way, if restricting the distribution of a program already developed is harmful to society overall, then an ethical software developer will reject the option of doing so. To determine the effect of restricting sharing, we need to compare the value to society of a restricted (i.e., proprietary) program with that of the same program, available to everyone. This means comparing two possible worlds.

This analysis also addresses the simple counterargument sometimes made that “the benefit to the neighbor of giving him or her a copy of a program is cancelled by the harm done to the owner. ” This counterargument assumes that the harm and the benefit are equal in magnitude. The analysis involves comparing these magnitudes, and shows that the benefit is much greater. To elucidate this argument, let’s apply it in another area: road construction. It would be possible to fund the construction of all roads with tolls.

This would entail having toll booths at all street corners. Such a system would provide a great incentive to improve roads. It would also have the virtue of causing the users of any given road to pay for that road. However, a toll booth is an artificial obstruction to smooth driving–artificial, because it is not a consequence of how roads or cars work. Comparing free roads and toll roads by their usefulness, we find that (all else being equal) roads without toll booths are cheaper to construct, cheaper to run, safer, and more efficient to use.(2) In a poor country, tolls may make the roads unavailable to many citizens. The roads without toll booths thus offer more benefit to society at less cost; they are preferable for society. Therefore, society should choose to fund roads in another way, not by means of toll booths. Use of roads, once built, should be free. When the advocates of toll booths propose them as merely a way of raising funds, they distort the choice that is available. Toll booths do raise funds, but they do something else as well: in effect, they degrade the road.

The toll road is not as good as the free road; giving us more or technically superior roads may not be an improvement if this means substituting toll roads for free roads. Of course, the construction of a free road does cost money, which the public must somehow pay. However, this does not imply the inevitability of toll booths. We who must in either case pay will get more value for our money by buying a free road. I am not saying that a toll road is worse than no road at all. That would be true if the toll were so great that hardly anyone used the road–but this is an unlikely policy for a toll collector.

However, as long as the toll booths cause significant waste and inconvenience, it is better to raise the funds in a less obstructive fashion. To apply the same argument to software development, I will now show that having “toll booths” for useful software programs costs society dearly: it makes the programs more expensive to construct, more expensive to distribute, and less satisfying and efficient to use. It will follow that program construction should be encouraged in some other way. Then I will go on to explain other methods of encouraging and (to the extent actually necessary) funding software development.

The Harm Done by Obstructing Software

Consider for a moment that a program has been developed, and any necessary payments for its development have been made; now society must choose either to make it proprietary or allow free sharing and use. Assume that the existence of the program and its availability is a desirable thing.(3) Restrictions on the distribution and modification of the program cannot facilitate its use. They can only interfere. So the effect can only be negative. But how much? And what kind? Three different levels of material harm come from such obstruction:

• Fewer people use the program.
• None of the users can adapt or fix the program.
• Other developers cannot learn from the program, or base new work on it.

Each level of material harm has a concomitant form of psychosocial harm. This refers to the effect that people’s decisions have on their subsequent feelings, attitudes, and predispositions. These changes in people’s ways of thinking will then have a further effect on their relationships with their fellow citizens, and can have material consequences. The three levels of material harm waste part of the value that the program could contribute, but they cannot reduce it to zero.

If they waste nearly all the value of the program, then writing the program harms society by at most the effort that went into writing the program. Arguably a program that is profitable to sell must provide some net direct material benefit. However, taking account of the concomitant psychosocial harm, there is no limit to the harm that proprietary software development can do.

Obstructing Use of Programs

The first level of harm impedes the simple use of a program. A copy of a program has nearly zero marginal cost (and you can pay this cost by doing the work yourself), so in a free market, it would have nearly zero price.

A license fee is a significant disincentive to use the program. If a widely-useful program is proprietary, far fewer people will use it. It is easy to show that the total contribution of a program to society is reduced by assigning an owner to it. Each potential user of the program, faced with the need to pay to use it, may choose to pay, or may forego use of the program. When a user chooses to pay, this is a zero-sum transfer of wealth between two parties. But each time someone chooses to forego use of the program, this harms that person without benefitting anyone. The sum of negative numbers and zeros must be negative.

But this does not reduce the amount of work it takes to develop the program. As a result, the efficiency of the whole process, in delivered user satisfaction per hour of work, is reduced. This reflects a crucial difference between copies of programs and cars, chairs, or sandwiches. There is no copying machine for material objects outside of science fiction. But programs are easy to copy; anyone can produce as many copies as are wanted, with very little effort. This isn’t true for material objects because matter is conserved: each new copy has to be built from raw materials in the same way that the first copy was built.

With material objects, a disincentive to use them makes sense, because fewer objects bought means less raw material and work needed to make them. It’s true that there is usually also a startup cost, a development cost, which is spread over the production run. But as long as the marginal cost of production is significant, adding a share of the development cost does not make a qualitative difference. And it does not require restrictions on the freedom of ordinary users. However, imposing a price on something that would otherwise be free is a qualitative change.

A centrally-imposed fee for software distribution becomes a powerful disincentive. What’s more, central production as now practiced is inefficient even as a means of delivering copies of software. This system involves enclosing physical disks or tapes in superfluous packaging, shipping large numbers of them around the world, and storing them for sale. This cost is presented as an expense of doing business; in truth, it is part of the waste caused by having owners.

Damaging Social Cohesion

Suppose that both you and your neighbor would find it useful to run a certain program.

In ethical concern for your neighbor, you should feel that proper handling of the situation will enable both of you to use it. A proposal to permit only one of you to use the program, while restraining the other, is divisive; neither you nor your neighbor should find it acceptable. Signing a typical software license agreement means betraying your neighbor: “I promise to deprive my neighbor of this program so that I can have a copy for myself. ” People who make such choices feel internal psychological pressure to justify them, by downgrading the importance of helping one’s neighbors–thus public spirit suffers.

This is psychosocial harm associated with the material harm of discouraging use of the program. Many users unconsciously recognize the wrong of refusing to share, so they decide to ignore the licenses and laws, and share programs anyway. But they often feel guilty about doing so. They know that they must break the laws in order to be good neighbors, but they still consider the laws authoritative, and they conclude that being a good neighbor (which they are) is naughty or shameful. That is also a kind of psychosocial harm, but one can escape it by deciding that these licenses and laws have no moral force.

Programmers also suffer psychosocial harm knowing that many users will not be allowed to use their work. This leads to an attitude of cynicism or denial. A programmer may describe enthusiastically the work that he finds technically exciting; then when asked, “Will I be permitted to use it? ”, his face falls, and he admits the answer is no. To avoid feeling discouraged, he either ignores this fact most of the time or adopts a cynical stance designed to minimize the importance of it. Since the age of Reagan, the greatest scarcity in the United States is not technical innovation, but rather the willingness to work together for the public good.

It makes no sense to encourage the former at the expense of the latter.

Obstructing Custom Adaptation of Programs

The second level of material harm is the inability to adapt programs. The ease of modification of software is one of its great advantages over older technology. But most commercially available software isn’t available for modification, even after you buy it. It’s available for you to take it or leave it, as a black box–that is all. A program that you can run consists of a series of numbers whose meaning is obscure. No one, not even a good programmer, can easily change the numbers to make the program do something different. Programmers normally work with the “source code” for a program, which is written in a programming language such as Fortran or C. It uses names to designate the data being used and the parts of the program, and it represents operations with symbols such as ‘+’ for addition and ‘-’ for subtraction. It is designed to help programmers read and change programs. Here is an example; a program to calculate the distance between two points in a plane:

float distance (p0, p1)
     struct point p0, p1;
{
  float xdist = p1.x - p0.x;
  float ydist = p1.y - p0.y;
  return sqrt (xdist * xdist + ydist * ydist);
}

Here is the same program in executable form, on the computer I normally use:

1314258944 1411907592 -234880989 1644167167 572518958
-232267772 -231844736 -234879837 -3214848 -803143692
-231844864 2159150 -234879966 1090581031 1314803317
1634862 1420296208 -232295424 1962942495

Source code is useful (at least potentially) to every user of a program. But most users are not allowed to have copies of the source code. Usually the source code for a proprietary program is kept secret by the owner, lest anybody else learn something from it.

Users receive only the files of incomprehensible numbers that the computer will execute. This means that only the program’s owner can change the program. A friend once told me of working as a programmer in a bank for about six months, writing a program similar to something that was commercially available. She believed that if she could have gotten source code for that commercially available program, it could easily have been adapted to their needs. The bank was willing to pay for this, but was not permitted to–the source code was a secret.

So she had to do six months of make-work, work that counts in the GNP but was actually waste. The MIT Artificial Intelligence Lab (AI Lab) received a graphics printer as a gift from Xerox around 1977. It was run by free software to which we added many convenient features. For example, the software would notify a user immediately on completion of a print job. Whenever the printer had trouble, such as a paper jam or running out of paper, the software would immediately notify all users who had print jobs queued. These features facilitated smooth operation.

Later Xerox gave the AI Lab a newer, faster printer, one of the first laser printers. It was driven by proprietary software that ran in a separate dedicated computer, so we couldn’t add any of our favorite features. We could arrange to send a notification when a print job was sent to the dedicated computer, but not when the job was actually printed (and the delay was usually considerable). There was no way to find out when the job was actually printed; you could only guess. And no one was informed when there was a paper jam, so the printer often went for an hour without being fixed.

The system programmers at the AI Lab were capable of fixing such problems, probably as capable as the original authors of the program. Xerox was uninterested in fixing them, and chose to prevent us, so we were forced to accept the problems. They were never fixed. Most good programmers have experienced this frustration. The bank could afford to solve the problem by writing a new program from scratch, but a typical user, no matter how skilled, can only give up. Giving up causes psychosocial harm–to the spirit of self-reliance. It is demoralizing to live in a house that you cannot rearrange to suit your needs.

It leads to resignation and discouragement, which can spread to affect other aspects of one’s life. People who feel this way are unhappy and do not do good work. Imagine what it would be like if recipes were hoarded in the same fashion as software. You might say, “How do I change this recipe to take out the salt?” and the great chef would respond, “How dare you insult my recipe, the child of my brain and my palate, by trying to tamper with it? You don’t have the judgment to change my recipe and make it work right!” “But my doctor says I’m not supposed to eat salt! What can I do? Will you take out the salt for me?” “I would be glad to do that; my fee is only $50,000.” Since the owner has a monopoly on changes, the fee tends to be large. “However, right now I don’t have time. I am busy with a commission to design a new recipe for ship’s biscuit for the Navy Department. I might get around to you in about two years.”

Obstructing Software Development

The third level of material harm affects software development. Software development used to be an evolutionary process, where a person would take an existing program and rewrite parts of it for one new feature, and then another person would rewrite parts to add another feature; in some cases, this continued over a period of twenty years. Meanwhile, parts of the program would be “cannibalized” to form the beginnings of other programs. The existence of owners prevents this kind of evolution, making it necessary to start from scratch when developing a program. It also prevents new practitioners from studying existing programs to learn useful techniques or even how large programs can be structured. Owners also obstruct education. I have met bright students in computer science who have never seen the source code of a large program.

They may be good at writing small programs, but they can’t begin to learn the different skills of writing large ones if they can’t see how others have done it. In any intellectual field, one can reach greater heights by standing on the shoulders of others. But that is no longer generally allowed in the software field–you can only stand on the shoulders of the other people in your own company. The associated psychosocial harm affects the spirit of scientific cooperation, which used to be so strong that scientists would cooperate even when their countries were at war.

In this spirit, Japanese oceanographers abandoning their lab on an island in the Pacific carefully preserved their work for the invading U. S. Marines, and left a note asking them to take good care of it. Conflict for profit has destroyed what international conflict spared. Nowadays scientists in many fields don’t publish enough in their papers to enable others to replicate the experiment. They publish only enough to let readers marvel at how much they were able to do. This is certainly true in computer science, where the source code for the programs reported on is usually secret.

It Does Not Matter How Sharing Is Restricted

I have been discussing the effects of preventing people from copying, changing, and building on a program. I have not specified how this obstruction is carried out, because that doesn’t affect the conclusion. Whether it is done by copy protection, or copyright, or licenses, or encryption, or ROM cards, or hardware serial numbers, if it succeeds in preventing use, it does harm. Users do consider some of these methods more obnoxious than others. I suggest that the methods most hated are those that accomplish their objective.

Software Should be Free

I have shown how ownership of a program–the power to restrict changing or copying it–is obstructive. Its negative effects are widespread and important. It follows that society shouldn’t have owners for programs. Another way to understand this is that what society needs is free software, and proprietary software is a poor substitute. Encouraging the substitute is not a rational way to get what we need. Vaclav Havel has advised us to “Work for something because it is good, not just because it stands a chance to succeed.” A business making proprietary software stands a chance of success in its own narrow terms, but it is not what is good for society.

Why People Will Develop Software

If we eliminate copyright as a means of encouraging people to develop software, at first less software will be developed, but that software will be more useful. It is not clear whether the overall delivered user satisfaction will be less; but if it is, or if we wish to increase it anyway, there are other ways to encourage development, just as there are ways besides toll booths to raise money for streets.

Before I talk about how that can be done, first I want to question how much artificial encouragement is truly necessary.

Programming is Fun

There are some lines of work that few will enter except for money; road construction, for example. There are other fields of study and art in which there is little chance to become rich, which people enter for their fascination or their perceived value to society. Examples include mathematical logic, classical music, and archaeology; and political organizing among working people.

People compete, more sadly than bitterly, for the few funded positions available, none of which is funded very well. They may even pay for the chance to work in the field, if they can afford to. Such a field can transform itself overnight if it begins to offer the possibility of getting rich. When one worker gets rich, others demand the same opportunity. Soon all may demand large sums of money for doing what they used to do for pleasure. When another couple of years go by, everyone connected with the field will deride the idea that work would be done in the field without large financial returns.

They will advise social planners to ensure that these returns are possible, prescribing special privileges, powers, and monopolies as necessary to do so. This change happened in the field of computer programming in the past decade. Fifteen years ago, there were articles on “computer addiction”: users were “onlining” and had hundred-dollar-a-week habits. It was generally understood that people frequently loved programming enough to break up their marriages. Today, it is generally understood that no one would program except for a high rate of pay.

People have forgotten what they knew fifteen years ago. When it is true at a given time that most people will work in a certain field only for high pay, it need not remain true. The dynamic of change can run in reverse, if society provides an impetus. If we take away the possibility of great wealth, then after a while, when the people have readjusted their attitudes, they will once again be eager to work in the field for the joy of accomplishment. The question, “How can we pay programmers? ” becomes an easier question when we realize that it’s not a matter of paying them a fortune.

A mere living is easier to raise.

Funding Free Software

Institutions that pay programmers do not have to be software houses. Many other institutions already exist that can do this. Hardware manufacturers find it essential to support software development even if they cannot control the use of the software. In 1970, much of their software was free because they did not consider restricting it. Today, their increasing willingness to join consortiums shows their realization that owning the software is not what is really important for them.

Universities conduct many programming projects. Today they often sell the results, but in the 1970s they did not. Is there any doubt that universities would develop free software if they were not allowed to sell software? These projects could be supported by the same government contracts and grants that now support proprietary software development. It is common today for university researchers to get grants to develop a system, develop it nearly to the point of completion and call that “finished”, and then start companies where they really finish the project and make it usable.

Sometimes they declare the unfinished version “free”; if they are thoroughly corrupt, they instead get an exclusive license from the university. This is not a secret; it is openly admitted by everyone concerned. Yet if the researchers were not exposed to the temptation to do these things, they would still do their research. Programmers writing free software can make their living by selling services related to the software. I have been hired to port the GNU C compiler to new hardware, and to make user-interface extensions to GNU Emacs. (I offer these improvements to the public once they are done.) I also teach classes for which I am paid. I am not alone in working this way; there is now a successful, growing corporation which does no other kind of work. Several other companies also provide commercial support for the free software of the GNU system. This is the beginning of the independent software support industry–an industry that could become quite large if free software becomes prevalent. It provides users with an option generally unavailable for proprietary software, except to the very wealthy. New institutions such as the Free Software Foundation can also fund programmers.

Most of the Foundation’s funds come from users buying tapes through the mail. The software on the tapes is free, which means that every user has the freedom to copy it and change it, but many nonetheless pay to get copies. (Recall that “free software” refers to freedom, not to price.) Some users who already have a copy order tapes as a way of making a contribution they feel we deserve. The Foundation also receives sizable donations from computer manufacturers. The Free Software Foundation is a charity, and its income is spent on hiring as many programmers as possible.

If it had been set up as a business, distributing the same free software to the public for the same fee, it would now provide a very good living for its founder. Because the Foundation is a charity, programmers often work for the Foundation for half of what they could make elsewhere. They do this because we are free of bureaucracy, and because they feel satisfaction in knowing that their work will not be obstructed from use. Most of all, they do it because programming is fun. In addition, volunteers have written many useful programs for us. (Even technical writers have begun to volunteer.) This confirms that programming is among the most fascinating of all fields, along with music and art. We don’t have to fear that no one will want to program.

What Do Users Owe to Developers?

There is a good reason for users of software to feel a moral obligation to contribute to its support. Developers of free software are contributing to the users’ activities, and it is both fair and in the long-term interest of the users to give them funds to continue. However, this does not apply to proprietary software developers, since obstructionism deserves a punishment rather than a reward. We thus have a paradox: the developer of useful software is entitled to the support of the users, but any attempt to turn this moral obligation into a requirement destroys the basis for the obligation. A developer can either deserve a reward or demand it, but not both. I believe that an ethical developer faced with this paradox must act so as to deserve the reward, but should also entreat the users for voluntary donations. Eventually the users will learn to support developers without coercion, just as they have learned to support public radio and television stations.

What Is Software Productivity?

If software were free, there would still be programmers, but perhaps fewer of them. Would this be bad for society? Not necessarily. Today the advanced nations have fewer farmers than in 1900, but we do not think this is bad for society, because the few deliver more food to the consumers than the many used to do. We call this improved productivity. Free software would require far fewer programmers to satisfy the demand, because of increased software productivity at all levels:

• Wider use of each program that is developed.
• The ability to adapt existing programs for customization instead of starting from scratch.
• Better education of programmers.
• The elimination of duplicate development effort.

Those who object to cooperation claiming it would result in the employment of fewer programmers are actually objecting to increased productivity. Yet these people usually accept the widely held belief that the software industry needs increased productivity. How is this? “Software productivity” can mean two different things: the overall productivity of all software development, or the productivity of individual projects.

Overall productivity is what society would like to improve, and the most straightforward way to do this is to eliminate the artificial obstacles to cooperation which reduce it. But researchers who study the field of “software productivity” focus only on the second, limited, sense of the term, where improvement requires difficult technological advances.

Is Competition Inevitable?

Is it inevitable that people will try to compete, to surpass their rivals in society? Perhaps it is. But competition itself is not harmful; the harmful thing is combat. There are many ways to compete.

Competition can consist of trying to achieve ever more, to outdo what others have done. For example, in the old days, there was competition among programming wizards–competition for who could make the computer do the most amazing thing, or for who could make the shortest or fastest program for a given task. This kind of competition can benefit everyone, as long as the spirit of good sportsmanship is maintained. Constructive competition is enough competition to motivate people to great efforts. A number of people are competing to be the first to have visited all the countries on Earth; some even spend fortunes trying to do this.

But they do not bribe ship captains to strand their rivals on desert islands. They are content to let the best person win. Competition becomes combat when the competitors begin trying to impede each other instead of advancing themselves, when “Let the best person win” gives way to “Let me win, best or not.” Proprietary software is harmful, not because it is a form of competition, but because it is a form of combat among the citizens of our society. Competition in business is not necessarily combat. For example, when two grocery stores compete, their entire effort is to improve their own operations, not to sabotage the rival.

But this does not demonstrate a special commitment to business ethics; rather, there is little scope for combat in this line of business short of physical violence. Not all areas of business share this characteristic. Withholding information that could help everyone advance is a form of combat. Business ideology does not prepare people to resist the temptation to combat the competition. Some forms of combat have been banned with anti-trust laws, truth in advertising laws, and so on, but rather than generalizing this to a principled rejection of combat in general, executives invent other forms of combat which are not specifically prohibited.

Society’s resources are squandered on the economic equivalent of factional civil war.

“Why Don’t You Move to Russia?”

In the United States, any advocate of other than the most extreme form of laissez-faire selfishness has often heard this accusation. For example, it is leveled against the supporters of a national health care system, such as is found in all the other industrialized nations of the free world. It is leveled against the advocates of public support for the arts, also universal in advanced nations. The idea that citizens have any obligation to the public good is identified in America with Communism.

But how similar are these ideas? Communism as was practiced in the Soviet Union was a system of central control where all activity was regimented, supposedly for the common good, but actually for the sake of the members of the Communist party. And where copying equipment was closely guarded to prevent illegal copying. The American system of software copyright exercises central control over distribution of a program, and guards copying equipment with automatic copying-protection schemes to prevent illegal copying.

By contrast, I am working to build a system where people are free to decide their own actions; in particular, free to help their neighbors, and free to alter and improve the tools which they use in their daily lives: a system based on voluntary cooperation and on decentralization. Thus, if we are to judge views by their resemblance to Russian Communism, it is the software owners who are the Communists.

The Question of Premises

I make the assumption in this paper that a user of software is no less important than an author, or even an author’s employer.

In other words, their interests and needs have equal weight, when we decide which course of action is best. This premise is not universally accepted. Many maintain that an author’s employer is fundamentally more important than anyone else. They say, for example, that the purpose of having owners of software is to give the author’s employer the advantage he deserves–regardless of how this may affect the public. It is no use trying to prove or disprove these premises. Proof requires shared premises. So most of what I have to say is addressed only to those who share the premises I use, or at least are interested in what their consequences are.

For those who believe that the owners are more important than everyone else, this paper is simply irrelevant. But why would a large number of Americans accept a premise that elevates certain people in importance above everyone else? Partly because of the belief that this premise is part of the legal traditions of American society. Some people feel that doubting the premise means challenging the basis of society. It is important for these people to know that this premise is not part of our legal tradition. It never has been. Thus, the Constitution says that the purpose of copyright is to “promote the progress of science and the useful arts.” The Supreme Court has elaborated on this, stating in Fox Film vs. Doyal that “The sole interest of the United States and the primary object in conferring the [copyright] monopoly lie in the general benefits derived by the public from the labors of authors.” We are not required to agree with the Constitution or the Supreme Court. (At one time, they both condoned slavery.) So their positions do not disprove the owner supremacy premise. But I hope that the awareness that this is a radical right-wing assumption rather than a traditionally recognized one will weaken its appeal.

Conclusion

We like to think that our society encourages helping your neighbor; but each time we reward someone for obstructionism, or admire them for the wealth they have gained in this way, we are sending the opposite message. Software hoarding is one form of our general willingness to disregard the welfare of society for personal gain. We can trace this disregard from Ronald Reagan to Jim Bakker, from Ivan Boesky to Exxon, from failing banks to failing schools. We can measure it with the size of the homeless population and the prison population.

The antisocial spirit feeds on itself, because the more we see that other people will not help us, the more it seems futile to help them. Thus society decays into a jungle. If we don’t want to live in a jungle, we must change our attitudes. We must start sending the message that a good citizen is one who cooperates when appropriate, not one who is successful at taking from others. I hope that the free software movement will contribute to this: at least in one area, we will replace the jungle with a more efficient system which encourages and runs on voluntary cooperation.

Footnotes

1. The word “free” in “free software” refers to freedom, not to price; the price paid for a copy of a free program may be zero, or small, or (rarely) quite large.

2. The issues of pollution and traffic congestion do not alter this conclusion. If we wish to make driving more expensive to discourage driving in general, it is disadvantageous to do this using toll booths, which contribute to both pollution and congestion. A tax on gasoline is much better. Likewise, a desire to enhance safety by limiting maximum speed is not relevant; a free-access road enhances the average speed by avoiding stops and delays, for any given speed limit.

3. One might regard a particular computer program as a harmful thing that should not be available at all, like the Lotus Marketplace database of personal information, which was withdrawn from sale due to public disapproval. Most of what I say does not apply to this case, but it makes little sense to argue for having an owner on the grounds that the owner will make the program less available. The owner will not make it completely unavailable, as one would wish in the case of a program whose use is considered destructive.


Measurement of Internal Consistency Software

Document analysis and fingerprint comparison are two of the most important tasks performed by forensic experts in investigating a case. Documents and fingerprints related to a case make up substantial evidence that can move the investigation forward. With our ever-advancing technology, new tools and equipment have been invented to make these tasks easier for forensic experts. These tools, such as computer software, can give examiners relevant information about documents, handwriting, and fingerprint samples as evidence in a case they are examining.

One company that specializes in this kind of tool and software is Limbic Systems, Inc. Limbic Systems’ technologies improve image-based identifications by way of advanced utilization of image intensity signals.1 Limbic Systems has released several products used for fingerprint identification, handwriting and document analysis, and other forensic or security applications. One of those products is the Measurement of Internal Consistency Software, or MICS.

The Measurement of Internal Consistency Software is an application that measures the intensity of the material (ink, for example) used and creates a three-dimensional model which can be likened to a topographic map complete with contour lines.2 This software had been under development by Limbic Systems, Inc. for six years until it was commercially released in 2003.

MICS features

1. Limbic Systems, Inc. (Forensic e-symposium). [Online] available from http://limbicsystems.forensic.e-symposium.com/it/index.html; accessed 25 Mar. 2006; Internet.

2. Emily J. Will. MICS Program Brings 3D Modeling and Mathematical Information to Handwriting Identification and Document Examination. [Online] available from http://www.qdewill.com/mics.htm; accessed 25 Mar. 2006; Internet.

Experts in handwriting identification know very well that handwriting is not merely measured by its length and width; it is also a three-dimensional product. What is visible to the human eye is just the length and width, while the third dimension is difficult to see, demonstrate, or even quantify. But with the help of MICS, examiners can now easily visualize and measure the color density and other important aspects of handwriting and document examination.

MICS can examine scanned or digitally photographed images of documents and handwritten names. In her article3, Emily Will showed how MICS works in determining the density of her handwriting sample. To the naked eye, it is just a simple signature made with a normal pen. But when it was put through the software’s thorough scanning features, the density of the pen stroke was revealed. The software also showed a gap somewhere in the handwriting sample, meaning there was a moment when the pen was lifted off the paper while she was writing her name. One could never have detected that without the software.

Beyond the gap, even more studies can be built on such observations to gather relevant information for the examiner. This kind of observation is definitely helpful to an examiner in identifying clues in an investigation. MICS makes it easier to closely examine the documents and handwriting samples in question.

Aside from handwriting and document analysis, MICS can also be used to identify and compare fingerprints. MICS is the platform on which other application-specific products of Limbic Systems are based. One of those applications, and the first extension of MICS, is a product called PrintIQ, a solution for identifying fingerprints.

Just as with documents, fingerprints are identified and compared with one another by measuring the intensity of the image between different points. MICS converts the fingerprint image into edge signals, which appear as elevations in a three-dimensional surface map.

With all these features, the Measurement of Internal Consistency Software can definitely be an indispensable tool for examiners and investigators. The software helps them gather more relevant information from documents and fingerprints than what can be seen with the naked eye. The results that MICS provides can give them important clues that may well advance the case they are investigating.
3. Emily J. Will. MICS Program Brings 3D Modeling and Mathematical Information to Handwriting Identification and Document Examination. [Online] available from http://www.qdewill.com/mics.htm; accessed 25 Mar. 2006; Internet.

General Recommendation

Measurement of Internal Consistency Software, or MICS, is indeed a valuable “invention” by Limbic Systems, Inc. It can prove to be a very useful tool that helps examiners and experts perform their tasks much faster. However, as with other applications and tools, this software can be utilized incorrectly by the user. Thus, it is essential that the user of the software understands the whole program: its theories, potentials, assumptions, and limitations. Knowing these things will give the user more reliable output data. Limbic Systems, Inc. has also been collaborating with current MICS users to formulate mathematical associations so that more reliable conclusions can be drawn from the information the software provides.

Bibliography

Limbic Systems, Inc. (Forensic e-symposium). [Online] available from http://limbicsystems.forensic.e-symposium.com/it/index.html; accessed 25 Mar. 2006; Internet.

Will, Emily J. MICS Program Brings 3D Modeling and Mathematical Information to Handwriting Identification and Document Examination. [Online] available from http://www.qdewill.com/mics.htm; accessed 25 Mar. 2006; Internet.


Management Information Systems Case Study

1) What problems did the company have upgrading from QuickBooks to the new accounting software package, and how could they have been avoided? These problems could have been avoided if, when the initial decision to replace QuickBooks was made, the company had consulted a finance person before the change, or had never made the change in the first place. QuickBooks was user-friendly for the staff, while the newly implemented accounting system was more sophisticated and complicated than what everyone was used to. Nobody knew how to extract financial or operational data to make critical business decisions. Developing reusable reports was also a problem and became too time-consuming.

2) Why did SAP’s Business One prove to be a better choice for Wolf Peak than the new accounting software? Give examples. SAP Business One was well suited to Wolf Peak’s business: it was affordable, promised and delivered a rapid return on investment, and provided an accurate, up-to-the-minute view of the business. SAP Business One was a simple environment, so the employees learned it quickly and used it effectively. SAP’s Journey team came to the business to implement the system and demonstrate how it worked. The benefits far outweighed the initial costs of the accounting software that had been purchased after QuickBooks. XL Reporter, a program that comes with SAP Business One, lets the company build custom reports, which proved extremely helpful. Wolf Peak is now expanding SAP into the warehouse for inventory management as well as CRM (customer relationship management). Overall, SAP Business One is supporting all aspects of Wolf Peak’s business.

3) Should most SMEs use an integrated business software suite like SAP Business One instead of specialized accounting and other business software packages? Why or why not? Reports that used to take months to create can now be produced quickly by Business One. Business One creates an environment where decision makers can get the information they want on a timely basis, in a format they understand and can actually use. The program delivers useful information for making solid business decisions. I believe that no individual brand or piece of software is universally superior. SAP Business One was obviously a perfect match for Wolf Peak, but in the end, whatever works and proves successful for a company’s employees and bottom line is the right software match for that company. Overall, an easy learning curve and straightforward information extraction are what serve businesses best.


Web Conferencing Programs Research Memo

In our meeting last week we discussed moving to different Web conferencing software in an effort to become more user-friendly to our remote users and to enable cost savings in our telecom and IT infrastructure. I undertook the assignment to research the available software solutions and have found one that I believe will allow our company to achieve the objectives set forth during our meeting. During my research I came upon four different programs that I thought would meet our criteria. Below I identify the one I believe is the superior choice and explain what led me to that conclusion.

I have included a table listing the top four on the reference page. The software that I believe is the best fit for us is Netviewer Meet 6.0. The criteria my decision was based on were: features, usability, security, support, price, and trial availability. I will explain my choice based on two of them, features and price. Feature-wise, all four programs under consideration had, for the most part, the same features.

The Online Meeting Tools Review (n.d.) website indicated that Netviewer Meet 6.0 had by far the best set of features that could be found in one program, based on the chart on that website’s page titled “Functions and Features of the 5 best web conferencing services”. Some of those features were that it allowed desktop and selected-application access, the ability to change presenters instantly and to transfer mouse and keyboard control, and user-friendliness, with “wide-ranging options that can be hidden and revealed using the profile manager” (Online Meeting Tools Review website, n.d.).

The price of the service was a key factor in my decision as well. It has a subscription fee of $49 per month, which allows for 100 participants; this was the best price per user from a cost standpoint. Also, there is no need to purchase additional hardware or reconfigure firewalls and proxy settings, as it supports most current configurations. By utilizing Netviewer Meet 6.0 our department can enable more efficient remote collaboration through more advanced web conferencing software.


Software Customization

Business software is built intentionally to make company processes easier, more efficient, more accurate, and more convenient for users. Today, such software is ready for modification to suit the business needs of a particular company. Software customization is, in effect, a social modification process (Clement) which affects many segments of people’s day-to-day activities.

The customization of business software is applicable only if the reasons for such modification can be justified. For example, it would be ideal to customize software if the company has found a more productive way of doing business, or if the company is protecting any of its intellectual property rights for a particular product.

The main benefit of software customization for a newly discovered mode of production is that the software can minimize possible delays and errors in the procedure, because its function directly supports the task. On the other hand, customized software can also provide a form of security for intellectual property rights, because the customized program will only be useful for a particular segment of the company’s production line.

Although software customization provides a wide range of benefits, there are also some related concerns. For one, customizing a pre-defined program may require the company to invest in hiring an expert programmer to initiate the customization. It is also necessary to train the in-house programmers on the customization so that it can be maintained. These factors may all involve additional financial investment for the company. Moreover, customization also carries a certain amount of risk, for it may not readily identify erroneous procedures that the software may induce in other, unidentified company processes.

On a personal note, it seems that the most fundamental advancement in personal computing is the introduction of the internet. Previously, anything that needed to be done with machine assistance was confined to a limited area of computing. However, today’s capacity of computers to transmit and receive data at split-second speeds has allowed many individuals, organizations, and industries to exchange information, which primarily drives today’s social development. Basically, the advanced capability of PCs and the internet has definitely improved how business, education, and communication are carried out.

References

Clement, A. N.D. Customization of Software Systems. University of Limerick. Retrieved February 27, 2008 from


Software Developer

R N S INSTITUTE OF TECHNOLOGY
CHANNASANDRA, BANGALORE – 61

UNIX SYSTEM PROGRAMMING
NOTES FOR 6TH SEMESTER INFORMATION SCIENCE
SUBJECT CODE: 06CS62

PREPARED BY
RAJKUMAR, Assistant Professor, Department of Information Science
DIVYA K, 1RN09IS016, 6th Semester Information Science and Engineering, [email protected]

Text Books:
1. Terrence Chan: Unix System Programming Using C++, Prentice Hall India, 1999.
2. W. Richard Stevens, Stephen A.

Rago: Advanced Programming in the UNIX Environment, 2nd Edition, Pearson Education / PHI, 2005.

These notes are circulated at the reader’s own risk; nobody can be held responsible if anything in them is wrong or if the information provided is improper or insufficient.

Contents: UNIT 1, UNIT 2, UNIT 3, UNIT 4, UNIT 5, UNIT 6, UNIT 7

UNIT 1: INTRODUCTION

UNIX AND ANSI STANDARDS

UNIX is a computer operating system originally developed in 1969 by a group of AT&T employees at Bell Labs, including Ken Thompson, Dennis Ritchie, Douglas McIlroy and Joe Ossanna.

Today UNIX systems are split into various branches, developed over time by AT&T as well as various commercial vendors and non-profit organizations.

The ANSI C Standard

In 1989, the American National Standards Institute (ANSI) proposed C programming language standard X3.159-1989 to standardise the language constructs and libraries. This is termed the ANSI C standard. It was an attempt to unify the implementation of the C language supported on all computer systems. The major differences between ANSI C and K&R C [Kernighan and Ritchie] are as follows:

• Function prototyping
• Support of the const and volatile data type qualifiers
• Support for wide characters and internationalization
• Permitting function pointers to be used without dereferencing

Function prototyping

ANSI C adopts the C++ function prototype technique, where function definitions and declarations include function names, argument data types, and return value data types. This enables ANSI C compilers to check for function calls in user programs that pass an invalid number of arguments or incompatible argument data types. These checks fix a major weakness of K&R C compilers: invalid function calls in user programs often pass compilation but cause programs to crash when they are executed.

For example:

    unsigned long foo(char *fmt, double data);   /* external declaration of foo */

    unsigned long foo(char *fmt, double data)
    {
        /* body of foo */
    }

    int printf(const char *fmt, ...);   /* the ellipsis specifies a variable number of arguments */

Support of the const and volatile data type qualifiers

The const keyword declares that some data cannot be changed. For example:

    int printf(const char *fmt, ...);

declares a fmt argument of type const char *, meaning that the function printf cannot modify data in any character array that is passed as the actual argument value for fmt.

The volatile keyword specifies that the values of some variables may change asynchronously, giving a hint to the compiler’s optimization algorithm not to remove any “redundant” statements that involve volatile objects. For example:

    char get_io()
    {
        volatile char *io_port = (char *) 0x7777;
        char ch = *io_port;   /* read first byte of data */
        ch = *io_port;        /* read second byte of data */
        return ch;
    }

If the io_port variable is not declared to be volatile when the program is compiled, the compiler may eliminate the second ch = *io_port statement, as it is considered redundant with respect to the previous statement. The const and volatile data type qualifiers are also supported in C++.

Support for wide characters and internationalisation

ANSI C supports internationalisation by allowing C programs to use wide characters. Wide characters use more than one byte of storage per character. ANSI C defines the setlocale function, which allows users to specify the format of date, monetary and real number representations. For example, most countries display the date in dd/mm/yyyy format, whereas the US displays it in mm/dd/yyyy format. The function prototype of setlocale is:

    #include <locale.h>
    char *setlocale(int category, const char *locale);

The setlocale function prototype and the possible values of the category argument are declared in the <locale.h> header. The category values specify which format class(es) are to be changed. Some of the possible values of the category argument are:

    category value    effect on standard C functions/macros
    LC_CTYPE          affects the behavior of the character classification macros
    LC_TIME           affects the date and time format
    LC_NUMERIC        affects the number representation format
    LC_MONETARY       affects the monetary values format
    LC_ALL            combines the effect of all of the above

Permitting function pointers without dereferencing

ANSI C specifies that a function pointer may be used like a function name.

No dereferencing is needed when calling a function whose address is contained in the pointer. For example, the following statements define a function pointer funptr, which contains the address of the function foo:

    extern void foo(double xyz, const int *ptr);
    void (*funptr)(double, const int *) = foo;

The function foo may be invoked either by calling foo directly or via funptr (here n is an int variable whose address is passed as the second argument):

    foo(12.78, &n);
    funptr(12.78, &n);

K&R C requires that funptr be dereferenced to call foo:

    (*funptr)(13.48, &n);

ANSI C also defines a set of C preprocessor (cpp) symbols, which may be used in user programs.

These symbols are assigned actual values at compilation time.

    cpp symbol    use
    __STDC__      Feature test macro. Value is 1 if a compiler is ANSI C, 0 otherwise
    __LINE__      Evaluates to the physical line number in a source file
    __FILE__      Value is the file name of the module that contains this symbol
    __DATE__      Value is the date on which a module containing this symbol is compiled
    __TIME__      Value is the time at which a module containing this symbol is compiled

The following test_ansi_c.c program illustrates the use of these symbols:

    #include <stdio.h>

    int main()
    {
    #if __STDC__ == 0
        printf("cc is not ANSI C compliant");
    #else
        printf("%s compiled at %s:%s. This statement is at line %d\n",
               __FILE__, __DATE__, __TIME__, __LINE__);
    #endif
        return 0;
    }

Finally, ANSI C defines a set of standard library functions and associated headers. These headers are a subset of the C libraries available on most systems that implement K&R C.

The ANSI/ISO C++ Standard

ANSI/ISO C++ compilers support C++ classes, derived classes, virtual functions and operator overloading. Furthermore, they should also support template classes, template functions, exception handling and the iostream library classes.

Differences between ANSI C and C++

ANSI C: Uses the K&R C default function declaration for any function that is referenced before its declaration in the program. ANSI C treats

    int foo();

as an old C function declaration and interprets it as

    int foo(...);

meaning that foo may be called with any number of arguments. ANSI C does not employ the type-safe linkage technique and does not catch user errors.

C++: Requires that all functions be declared or defined before they can be referenced. C++ treats

    int foo();

as

    int foo(void);

meaning that foo may not accept any arguments.

Encrypts external function names for type_safe linkage. Thus reports any user errors. The POSIX standards ? POSIX or “Portable Operating System Interface” is the name of a family of related standards specified by the IEEE to define the application-programming interface (API), along with shell and utilities interface for the software compatible with variants of the UNIX operating system. Because many versions of UNIX exist today and each of them provides its own set of API functions, it is difficult for system developers to create applications that can be easily ported to different versions of UNIX.

Some of the subgroups of POSIX concerned with developing standards for system developers are POSIX.1, POSIX.1b, and POSIX.1c.

POSIX.1
- This committee proposes a standard for a base operating system API; the standard specifies APIs for the manipulation of files and processes.
- It is formally known as IEEE standard 1003.1-1990, and it was also adopted by ISO as the international standard ISO/IEC 9945-1:1990.

POSIX.1b
- This committee proposes a set of standard APIs for a real-time OS interface, including IPC (interprocess communication).
- The standard is formally known as IEEE standard 1003.1b-1993.

POSIX.1c
- This standard specifies multithreaded programming interfaces. It is the newest of these POSIX standards.

These standards are proposed for a generic OS that need not be a UNIX system. For example, VMS from Digital Equipment Corporation, OS/2 from IBM, and Windows NT from Microsoft Corporation are POSIX-compliant, yet they are not UNIX systems.

To ensure that a user program conforms to the POSIX.1 standard, the user should either define the manifest constant _POSIX_SOURCE at the beginning of each source module of the program (before the inclusion of any header):

#define _POSIX_SOURCE

or specify the -D_POSIX_SOURCE option to a C++ compiler (CC) in a compilation:

% CC -D_POSIX_SOURCE *.C

POSIX.1b defines a different manifest constant to check conformance of user programs to that standard. The newer macro is _POSIX_C_SOURCE, and its value indicates the POSIX version to which a user program conforms. Its possible values are:

_POSIX_C_SOURCE VALUE   MEANING
198808L                 First version of POSIX.1 compliance
199009L                 Second version of POSIX.1 compliance
199309L                 POSIX.1 and POSIX.1b compliance

_POSIX_C_SOURCE may be used in place of _POSIX_SOURCE. However, some systems that support only POSIX.1 may not accept the _POSIX_C_SOURCE definition.
There is also a _POSIX_VERSION constant defined in the <unistd.h> header. It contains the POSIX version to which the system conforms. The following program checks and displays the _POSIX_VERSION constant of the system on which it is run:

#define _POSIX_SOURCE
#define _POSIX_C_SOURCE 199309L
#include <iostream>
#include <unistd.h>

int main()
{
#ifdef _POSIX_VERSION
    std::cout << "System conforms to POSIX version: " << _POSIX_VERSION << "\n";
#else
    std::cout << "_POSIX_VERSION is not defined on this system\n";
#endif
    return 0;
}


Questions on Computer Basics and Software

No. of Printed Pages: 4
BACHELOR IN COMPUTER APPLICATIONS (BCA Revised)
Term-End Examination, June 2012
BCS-011: COMPUTER BASICS AND PC SOFTWARE
Time: 3 hours    Maximum Marks: 100    Weightage: 75%

Note: Question number 1 is compulsory and carries 40 marks. Attempt any three questions from the rest.

1. (a) Convert the following hexadecimal numbers to their equivalent binary and decimal forms:
   (i) (51)16
   (ii) (DA)16
(b) How is the access time on a disk defined? Explain each component of access time with the help of an example.
(c) Explain the basic structure of a computer system with the help of a diagram.

A personal computer has a component called a motherboard. How is the motherboard related to the basic computer structure?
(d) List five facilities that are provided by an operating system to a user or to a program.
(e) Draw a flow chart to add the integers between 2 and (n+1), where n > 2.
(f) Explain the terms subroutine and function with the help of an example.
(g) Consider two IP addresses, 160.10.11.25 and 160.10.12.35. Do they belong to the same network if
   (i) the subnet mask is 255.255.0.0?
   (ii) the subnet mask is 255.255.255.0?
Justify your answer.
(h) What is a Wide Area Network (WAN)? What are the characteristics of a WAN?

How are they different from LANs? Is the Internet a WAN? Justify your answer.

2. (a) What is the need for a memory hierarchy in a computer system? Explain with the help of the various trade-offs involved, such as cost, speed, and size.
(b) What is perverse software? List various types of perverse software. Give four ways to counter perverse software.
(c) What are cookies in the context of browser software? Are cookies bad? Explain. List four precautions for safe browsing.

3. (a) Compare and contrast the characteristics of the following:
   (i) Dot matrix printers versus laser printers
   (ii) Cathode ray tube monitors versus liquid crystal display monitors
(b) "Latest word processors have text manipulation functions that extend beyond a basic ability to enter and change text." Explain any four of these advanced text manipulation functions.
(c) Explain the characteristics of the following data transmission channels:
   (i) Optic fibre cables
   (ii) Radio waves
   (iii) Infrared

4. (a) List six activities that should be part of an e-learning system. Explain the phases of content development in e-learning.
(b) Compare and contrast the following:
   (i) SRAM versus DRAM
   (ii) SIMM versus DIMM
   (iii) ROM versus PROM
   (iv) CD-ROM versus pen drive
(c) What is Open Source Software? What are the main features of the open source development model?

5. Explain any five of the following with the help of an example/diagram, if needed:
   (i) The uses of a WIKI in collaboration
   (ii) The activities/actions performed by a search engine
   (iii) The TCP/IP model
   (iv) Activities in project management software
   (v) Batch systems and time-sharing operating systems
   (vi) Different types of parts in a computer
   (vii) The concept of an instruction; and the motivation for the development of UNICODE


Determining Operating Systems and Software Applications

BIS/320

Amazon has made a business of selling a variety of media types while also making the reselling of the same media an attractive option. What better way to regain part of what you spent on media than to resell it and have money to put toward your next interest? As of 2004 Amazon began running the Linux operating system across the board, and it became one of the largest and best-known companies running Linux.

Amazon is one of the largest ecommerce-centered businesses, with a large global customer base and high expectations of constant expansion. Currently, Amazon is known to be running Linux servers: "Amazon's Elastic Compute Cloud (EC2), had close to half-a-million servers already running on a Red Hat Linux variant" (Vaughn, 2012). Although "Amazon has never officially said what it's running as EC2's base operating system, it's generally accepted that it's a customized version of Red Hat Enterprise Linux (RHEL)" (Vaughn, 2012). In addition, Amazon uses the Xen hypervisor to host the Linux system's virtual machines.

Solaris, OpenSolaris, FreeBSD, NetBSD, and Windows 2003 and 2008 are offered as additional virtual machine instances. The multiple operating systems that Amazon currently uses help meet the high demand of the users who browse and purchase from its sites. In using its cloud technology, EC2, it is also possible that not all information will be stored at any specific location, yet it remains easily accessible to anyone within the company. With Linux gaining popularity, this will ultimately benefit Amazon's continual global expansion goals.

Hardware consists of electronic components and related gadgetry, physically connected to a computer, that input, process, output, and store data according to instructions encoded in computer programs or software (Kroenke, 2012). The Amazon-to-buyer operating system is quite simple and uses a variety of input and output devices in comparison with various office-based businesses; one difference is how heavily each output or input device is actually used. The individual consumer at home initiates the process by registering as a user, then inputting their shipping and billing information, which is stored by the website's servers. The consumer's computer is the input device and the server is a storage device. Once a purchase occurs, the website uses the stored information to input the customer's credit card information into a card reader, which automatically debits the funds from the customer's account. Card readers and scanners are widely used input devices (Kroenke, 2012). Most output devices are located at the various individual merchants that use Amazon to sell their goods, each having a database that shows pending orders inputted through Amazon.

These merchants use their printers to document the order and locate the desired merchandise. Once the merchandise is located, the information is sent to the shipping department. Versatile shipping options such as UPS, FedEx, or the U.S. Postal Service are available, and output devices print things such as the bill of lading, the inventory of the packaged goods, and the shipping labels bearing the previously entered customer shipping information. Once delivered, the merchandise is scanned via another input device called a barcode scanner.

This information is then relayed to the merchant, who reports a successful delivery to Amazon. A confirmation email is sent to the customer confirming that their transaction is complete. If desired, the consumer can give their input on the Amazon experience via their home computer. Amazon's founder and Chief Executive Officer outlines the company's business objectives as: increase sales, promote the brand, create a loyal customer base, and maintain fiscal strength. Expanding on each operational goal gives a better understanding of how the operating systems contribute to Amazon's objectives.

Sales can be defined as making sure the customer gets what he wants, but also feeding into the psychology of impulse buying. Impulse purchases can be promoted through an application Amazon employs called the Dash. When conducting a search for a particular item, the results offer not only the item itself but also similar items. There is also a feature that shows the customer what other customers, who have ordered this particular item of interest, have also purchased. Brand promotion occurred when Amazon's Kindle was launched.

In 2005 Bezos believed that "every book ever written in any language will be available (to the end user) in less than sixty seconds" (Bezos, 2009). The edict issued was that the demarcation between Kindle the device and Kindle the service be seamless to the end user. In the four years that followed, sales exceeded budgetary expectations. The email feedback from customers is strongly positive, with 26% of customer emails containing the word "love". Amazon has positioned itself prominently on search engine sites, so a pattern match of only a few letters will bring Amazon to the forefront.

Amazon itself has become a search engine of sorts, which many people use for pricing items being considered for purchase. The brand has made Amazon not only a shopping site but also a reference guide for benchmarking other purchases. Bezos defines customer loyalty as encouraging his staff to be "obsessed over our customers". The computer applications used for tracking purchases as well as shipping allow customer service representatives to assist dissatisfied customers and get them to a satisfactory result.

References

Kroenke, D. M. (2012). MIS Essentials (2nd ed.). Pearson Education.
Thorp, J. (1999, February). The Information Paradox. Retrieved from http://www.amazon.com/Information-Paradox-Realizing-Business-Technology/dp
Vaughn, S. (2012, March 16). Amazon's EC2 cloud is made up of almost half-a-million servers. ZDNet. Retrieved from http://www.zdnet.com/blog/open-source/amazon-ec2-cloud-is-made-up-of-almost-half-a-million-linux-servers/10620

| | Operating Systems | Horizontal-Market Applications | Vertical-Market Applications | One-of-a-Kind Market Applications |
| Example | Linux, Eucalyptus (cloud), OpenStack (cloud), EC2, and Red Hat Linux, for starters | | | |
| Description of how it is used | | | | |
| Typical user | Amazon draws its users from anyone who can operate a computer and has an internet connection. | | | |
| Advantages | Easy to use; large amounts of information can be accessed without incorporating mass amounts of storage on a single server with cloud technology; accessibility to data from any location with cloud technology. | | | |
| Disadvantages | Even though Amazon continues to hire developers, bandwidth is still an issue. People lose data. With such a broad base of people with the ability to browse and purchase products, it poses a security issue regarding | | | |


Should Marty’s Company Embrace Open-Source Software?

ZAOZAO LIU, MIS500, FALL 2012

Should Marty's company embrace open-source software in its hit product? Marty Dirwey, CEO of Kalley Music Software, is facing a crucial question: whether she should open Amp Up's source code to users and developers. Undoubtedly, the new strategy of opening the source of KMS's hit product challenges the current highly successful strategy, which prioritizes holding the intellectual property of Amp Up. However, if I were Marty, I would support the new strategy. There are four parts in this paper: (1) analyze a basic but essential issue, namely why Marty hesitates to open Amp Up's source code; (2) further explain the reasons why the company should accept the open-source strategy; (3) give some recommendations to KMS; and (4) draw the conclusion.

The reasons why Marty hesitates to open Amp Up's source code

Essentially, there are three things Marty is worried about: the feelings of the team, the churn of the customers, and the profit of the company. As we can see from the case, Marty is in a dilemma.

She resists opening the source code because she is unwilling to give up code that is the fruit of the painstaking labor of the whole team, and she is worried about how to make money if the company shares the source code of the software that is currently the main source of its revenue. On the other hand, if she won't open the source code, she is likely to be seen as the enemy of the users, maybe not all users, but at least the fanatics, which would alienate the customers who play a significant role in the music game field.

The reasons why the company should accept the open-source strategy

Based on Marty's worries mentioned above, I will explain the reasons why the company should accept the open-source strategy from three perspectives. Considering the feelings of the team, especially the programmers, I believe the programmers would cheer for open-source software.

It is obvious that, confronting a situation in which inventing and executing dazzling upgrades is becoming harder and harder, the programmers Marty really cares about are fatigued, losing their passion for the software and exhausting their creativity. At this moment, open source is a savior for all the programmers. They can integrate ideas from different developers, and building on innovative ideas from outside developers, the programmers are more likely to create more stable and valuable upgrades than their opponents, because the programmers, the parents of Amp Up, are more familiar with every detail of the code.

Another fact we should recognize is that a new generation of programmers has grown up with open source software and is more skilled at finding what they need with OSS than with closed and proprietary tools and systems. What I mean by this is that with open-source software, the programmers would work more effectively and efficiently. As for customer churn, opening Amp Up's source code to external developers doesn't necessarily lead to customer churn, while closing the source code doesn't mean that similar and better software will never show up or that the customers will be loyal to the company forever.

Actually, infringers with strong competences have already shown up. Thus open source becomes a must-do thing. From my perspective, as long as the programmer team of Amp Up doesn't give up on innovating the software, the opponents can hardly take away the original customers. There are two reasons. One reason is that Amp Up has a sound brand which has been generally accepted. In my opinion, the code of software is similar to literature.

Famous literature may be recreated several times, but readers usually remember only the original writer and prefer the original work. Thus, Marty doesn't need to worry that KMS's position in the music game field will be challenged easily, leading to a large loss of customers. The other reason is that the team behind Amp Up, including the programmers, CEO, and COO, is professional and visionary and more familiar with the software and the mass market, so the team is more likely to have a better understanding of the customers' preferences and cater to the needs of the market.

The strategy of open-sourcing KMS's hit product has a positive impact on enforcing the business transformation from a technology-oriented company to a service-oriented company, which can bring KMS more opportunities for profit. If KMS wouldn't give away its proprietary IP and open the source, then to keep its technology advantages in the music game field it would have to invest more money in Amp Up, such as hardware maintenance fees, so the downward tendency of KMS's profit would be inevitable.

Recommendations for KMS

In the short run, KMS should open the source and then integrate and utilize the ideas from different developers to improve Amp Up's quality and attract potential customers to the maximum extent. That is, KMS should utilize Amp Up to capture the last bucket of gold of the music software market. After that, KMS should open the platform to third-party companies and provide technical support to those companies that still have the dream of surviving or even thriving in an increasingly competitive music game field. In the long run, a business transformation of KMS is a must. Besides, I think KMS should still prioritize innovation, because it has a potent technology team. However, the model of technology innovation should be changed.

Innovation within the ecosystem should be a long-term direction.

Conclusion

KMS should open the source, because Amp Up is already in the open-source community, and open source software can bring more potential customers and more profit to KMS.


Case: Supply Chain Management and Application Software Packages

Info from the case: total revenue for the last reporting period = $110 million. The CIO reviewed the following three implementation strategies:
- Classic disintermediation: removal of intermediaries in a supply chain; connects suppliers directly with customers.
- Remediation: working more closely with existing middleman partners; this strategy could be affected by high contracting risks.
- Network: building alliances and partnerships with both existing and new suppliers and distributors, involving a complex set of relationships. Networks tend to reduce search costs for obtaining information, products, and services.

The CIO selected remediation because it best fits the firm's goal of simplifying data sharing throughout the supply chain. The firm also had a long-term and positive relationship with its primary distributors, which would ameliorate the high contracting risk.

"The firm purchased stock woods from a number of producers and processed them to meet specific customer specifications. Approximately 60 percent of WoodSynergy sales were in high-end furniture."

Problem 1 – Choice of implementation plan is wrong (long term)
- "The CIO chose remediation because it best fit the firm's goal of simplifying data sharing throughout the supply chain; furthermore, the CIO noted that WoodSynergy had a long-term and positive relationship with its primary distributors which would ameliorate the high contracting risk issue."
- The best way to simplify data sharing is to eliminate any unnecessary party that the information needs to travel through.
- Remove the distributors and engage the customers directly.
- Who are we to decide how the existing distributors will feel after contracts are amended to include a new information system in the SCM that ultimately creates more overhead for them?
- The business model of WoodSynergy suggests that "the firm was committed to delivering information to the right people at the right time so that strategic and operational decisions were made properly and quickly."
- The benefit of going national is prevented by local distributors; if WoodSynergy engages its end users directly, it will promote better customer relationships as well as open potential national and international markets.

Causes:
- Long-term relationships with distributors
- Contracts with distributors
- The CIO's decision seems biased

Alternatives:
- Choose classic disintermediation
- Stay with remediation
- Choose networking

Solution: choose classic disintermediation.
- Removes the middleman.
- The middleman's share shifts to the suppliers, WoodSynergy, and the customer, making the company more profitable and increasing customer loyalty.
- Efficiency: instead of suppliers shipping first to WoodSynergy and then WoodSynergy shipping the products to the customer, the supplier can ship straight to the customer.

Implementation (find the need, develop the program, implement it, and then evaluate it):
- Business need
- System investigation
- System analysis
- System design
- Programming and testing
- Implementation
- Operation and maintenance

Problem 2 – Prototype built (short-term problem)
- "Due to budget and time constraints the project team chose to build a gateway prototype without addressing problems of integrity and timeliness with the system's data. The project team decided to improve the data quality at a future date." Customers' data needs to be secure. Period. For any duration, no matter how short.
- "Two of the key drivers included in gateway design were data standardization and real-time interface."
- It should be real-time interface and data integrity, as aligned with WoodSynergy's business goals.
-release data standardization at a later time instead of data integrity Causes -budget -time constraint -phase 1 of prototype does not directly correlate to business goals Alternatives -cloud system from 3rd party -key drivers in phase 1 = data integrity and real-time interface/data standardization at future date/release •Application software packages – off the shelves. ONE MORE alt Solution: •Application software packages – off the shelves. oPrewritten, pre-coded application software commercially available for sale oA lot of choices, with rating/reviews from its customers/users oOther companies are already using them oSome software companies even let you try them oQuicker solution, gives the it team to work on the bigger problem or new software oIt may be cheaper than labour and resources spent building prototype that may put company`s customer`s information at risk Implementation – . Identify potential vendors 2. Determine the evaluation criteria a. Functionality of the software b. Cost and financial terms c. Vendor`s reputation – success stories/customer reviews d. System flexibility e. Security f. Required training g. Data handling h. Ease of internet interface i. User friendly 3. Evaluate vendors and packages 4. Chose vendor and package 5. Negotiate a contract 6. Implement the software 7. Train the staff/users 3 – Project Team Questionable – Short term and Long term? 
*** Causes launched multiple it based supply chain management initiatives -researched how gateways are used in their business and understand the different of technology on the internet” in first few weeks – this should take a few days at most -phase 1 of prototype not aligned with business goals –decision criteria— this is what I think would be the criteria, we can discuss if you have others *** -budget – need better coaching on team goal and better planning -increase customer satisfaction -be consistent with corporate mission -Time constraint – implement fairly quickly -improve profits within acceptable risk parameters Solution – BE consistent with corporate mission Implementation •Be consistent with corporate mission oTrain and remind them in every morning huddles oBefore implementing the any new plan or developing new software or making the decision to devolve a new software, correlate it with the business strategy oDelegate effectively to team members oHold them accountable – stay on top of their performance oGive the team budget – quarterly yearly or project based – so there will not be any wastages Source: /http://plato. acadiau. ca/courses/Busi/IntroBus/CASEMETHOD. html/


Software Associates

Assignment 1: Variance Analysis Report

In order to prepare a variance analysis report, Jenkins calculated the actual revenues and expenses and found the actual profit to be $296,390. Then Jenkins did the same with the budgeted values and found the budgeted profit to be $606,350. The profit variance is therefore $309,960 under budget. The variance for revenues is $32,100; this is favorable, because the firm made more than it had budgeted for. On the contrary, the variance for expenses is $342,060, which is unfavorable, because the firm spent far more than it had budgeted for.

This information would not be sufficient to explain to Norton why their profit percentage is nearly half of what they budgeted. This variance analysis report shows only the raw numbers, without any details about why they spent more on expenses than they had budgeted. Jenkins would have a difficult time explaining the details of why they went over budget. She would need to show him a detailed expense report of the budgeted items and the actual amounts spent on them, and then clearly identify which items went over budget and why.

This variance analysis report alone would not help Jenkins in her 8 a.m. meeting; she would need to provide more information.

Assignment 2: Preparing the Budget: Variance Analysis Report

In order to provide more information to Norton, Jenkins will need to perform a more detailed variance analysis. Jenkins would use the numbers provided in Exhibit 2, working from the budgeted and actual income statements to identify revenue quantity, which is given in hours. She will then compare actual and expected quantities.

The actual number of consultant hours billed exceeded the expected number. Jenkins subtracted the expected hours from the actual hours and multiplied the difference by the expected billing rate of $90. She found that Software Associates earned an additional $278,100 by providing the extra billed hours. This is favorable for Software Associates if the billing rate was $90 as expected; however, the average rate per consultant hour amounted to only $83.69. Next, Jenkins determined the average billing rate variance by subtracting the expected rate from the actual rate.

She then multiplied the difference in rates by the quantity of hours billed. Jenkins found a deficit of $246,090. This is unfavorable, because Software Associates is losing money due to the actual rate dropping from $90 to $83.69. When Jenkins combined the variances for quantity of hours and hourly rate, she obtained the total revenue variance of $32,100, which is also the difference between actual revenue and expected revenue. Overall, it is favorable that Software Associates created more revenue.

Jenkins then determined whether or not the additional revenue would cover the additional costs incurred for the excess consultants. Jenkins used the same method for consultant expenses. By subtracting the budgeted number of hours supplied (47,250) from the actual number of hours supplied (50,850) and multiplying by the expected cost of $37 per hour, Jenkins found a cost of $133,200. This is the amount paid over the expected cost due to the increase in actual labor. Next, Jenkins took the actual cost of $39.90, subtracted the expected cost of $37, and multiplied the difference by the actual amount of labor hours, 50,850.

This amounted to $147,465, the extra amount Software Associates paid due to the change in labor cost. Together the two numbers, $133,200 and $147,465, account for the roughly $280,800 difference between actual and expected consultant salary cost (the small gap from their exact sum of $280,665 comes from rounding the actual hourly rate to $39.90). Overall operating expense is broken down into two categories, actual and expected. Subtracting the expected operating expense of $877,300 from the actual operating expense of $938,560 gives a variance of $61,260, which is unfavorable. Jenkins found the total expense variance by completing the same equation.

She subtracted the expected total expense from the actual total expense and found the total expense variance to be $342,060. The extra hours worked created more costs than the extra revenue they generated, which puts the company in an awful position. The budget was not planned out very well: the billed labor rate decreased while more labor was performed and proportionally less of it was billed. This is a recipe for disaster. More planning must go into figuring out a budget, and Software Associates must stick strictly to it; numbers can add up quickly.

Assignment 3: Expense Analysis: Spending and Volume Variance Analysis of Operating Expenses

Jenkins then needed to analyze the expenses. Many of the expenses for Software Associates were not entirely fixed costs or variable costs; rather, many were a combination of the two. Therefore, Jenkins evaluated the overhead of the company and prepared Exhibit 3, which shows her judgment about each expense's degree of variability. Due to the increased expense per consultant, it is also important to study how costs change with each additional consultant.

In order to examine the relationship between overhead costs and the number of consultants, Jenkins determined what portion of the budget was variable and what portion was fixed. The budgeted variable amount was obtained by multiplying each expense's budgeted amount by the percentage expected to be variable. She then subtracted the budgeted variable amount from the budgeted amount to find the budgeted fixed amount. These calculations are shown in Exhibit 3A. Next, Jenkins took these numbers and calculated the spending variance and the volume variance.

To compute the spending variance, she subtracted the budgeted amount from the actual amount spent. In this case the actual amount spent was $938,560 and the forecasted expenses totaled $877,300, so the spending variance was $61,260. This is an unfavorable outcome for the quarter and is mostly attributable to the eight extra consultants who were hired. The volume variance is determined by subtracting the budgeted quantity from the actual quantity and multiplying by the cost per unit.

In this case, the expected number of consultants was 105 but the actual number was 113. To determine the cost per consultant, she took the total variable cost [$525,000] and divided it by the actual number of consultants [113], getting $4,646. Multiplying $4,646 by the 8 extra consultants, Jenkins found a volume variance of $37,168. This is unfavorable, and when she compared it to the spending variance she determined that one of the major faults in Software Associates' expenditures for the quarter was hiring the extra eight consultants who were not budgeted for.
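The two variance calculations above can be reproduced directly from the figures the text quotes:

```python
# The spending and volume variances from the text, using the dollar
# figures the essay reports from the case exhibits.
actual_spent   = 938_560
budgeted_spent = 877_300
spending_variance = actual_spent - budgeted_spent   # unfavorable if positive

budgeted_consultants = 105
actual_consultants   = 113
total_variable_cost  = 525_000

# Cost per consultant, rounded to the whole dollar as in the essay.
cost_per_consultant = round(total_variable_cost / actual_consultants)
volume_variance = cost_per_consultant * (actual_consultants - budgeted_consultants)
```

This reproduces the essay's $61,260 spending variance and $37,168 volume variance.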

Assignment 4: Billing Percentage: Analysis of Revenue Change. After the expense analysis, Jenkins wanted to understand why the actual number of consultants was nearly 8% higher than budgeted when revenues had increased by only 1%. Jenkins knew that if she compared the budgeted hours allocated to consultants against the actual hours, she could determine whether the consultants were being less productive. First she examined the billing percentage, comparing how many hours the consultants billed against how many they were expected to bill.

The consultants billed 39,000 of the 50,850 hours they supplied, an actual billing percentage of 76.7%. The budget, however, projected billing 35,910 of 47,250 supplied hours, a 76% billing percentage. Jenkins noticed that 3,600 more hours were supplied than the budget had allocated. Each of these numbers was found by referring to Exhibit 4. Jenkins also noticed that the average billing rate per consultant hour decreased from $90 to $83.69.
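The billing percentages work out as follows, using the Exhibit 4 figures quoted above:

```python
# Billing percentage = hours billed / hours supplied, from the Exhibit 4
# figures quoted in the text.
actual_billed,   actual_supplied   = 39_000, 50_850
budgeted_billed, budgeted_supplied = 35_910, 47_250

actual_pct   = actual_billed / actual_supplied       # about 76.7%
budgeted_pct = budgeted_billed / budgeted_supplied   # exactly 76.0%
extra_supplied = actual_supplied - budgeted_supplied # hours over budget
```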

Overall, Jenkins saw that taking the actual hours supplied [50,850], multiplying by the actual billing percentage [76.7%], and multiplying again by the actual rate per consultant hour [$83.69] gave roughly $3,264,073.20 actually billed for her consultants. When she recreated the same equation with Software Associates' budgeted figures, taking the budgeted hours supplied [47,250] times the budgeted billing percentage [76%] times the budgeted rate per hour [$90.00], she found that only $3,231,900.00 had been budgeted for consultants. (Each of these numbers was found by referring to Exhibit 4.)

Comparing the actual against the budgeted amount allocated toward consultants, Jenkins found roughly a $32,173.20 increase in spending this quarter. The billing percentage increased while the rate per consultant decreased. Given the increase in consultants and the increase in salary and fringes per consultant, Jenkins realized she is paying more for consulting.

Their work does not appear to be more productive in the grand scheme of things. Software Associates is paying considerably more money for more consultants without receiving a high enough overall revenue increase. Jenkins further analyzed this spending by taking the increase in hours supplied by the consultants [3,600 hours = 50,850 - 47,250], multiplying by the expected billing percentage [76%], and multiplying by the expected rate per consultant hour [$90], giving a variance of $246,240.00. This $246,240.00 represents what the extra consultant hours would have been worth at budgeted rates. This is an unfavorable outcome for Software Associates because the company is spending a considerable amount of money without a high return on investment per consultant. The quantity of work is not benefiting the company enough to justify the cost of maintaining that number of consultants.
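The actual-versus-budget comparison and the extra-hours variance above, reproduced with the essay's rounded billing percentage and hourly rates:

```python
# Actual versus budgeted consultant billings, reproduced with the
# essay's rounded billing percentages and hourly rates.
actual   = 50_850 * 0.767 * 83.69   # supplied hours x billing % x rate/hour
budgeted = 47_250 * 0.76  * 90.00
difference = actual - budgeted      # spending above plan, about $32,173.20

# Value of the 3,600 extra supplied hours at budgeted utilization/rate.
extra_hours_value = 3_600 * 0.76 * 90.00
```

Note that the small discrepancy against a calculation from raw billed hours (39,000 x $83.69) comes from the essay's use of the rounded 76.7% billing percentage.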


Software Engineering

SOFTWARE ENGINEERING PROJECT – I

INTRODUCTION: The goal of this paper is to analyze three major software projects, namely:
• The London Ambulance System
• The Virtual Case File
• The Automatic Baggage System
By analyzing these software projects and the software engineering principles followed, the key factors responsible for their failure can be understood. Each of these projects failed miserably because proper software engineering principles were not followed. In this term paper the following projects have been studied and the reasons for their failures identified.

Finally, there is a comparison of all three software projects studied. The methodology followed in writing this term paper was to read the reference materials available on the internet and extract the key points behind the failures of the software projects. The papers referenced are:
1. H. Goldstein. Who Killed the Virtual Case File? IEEE Spectrum, Sept. 2005, pp. 24–35.
2. Statement of Glenn A. Fine, Inspector General, US Dept. of Justice, 27 July 2005.
3. A. Finkelstein and J. Dowell. A Comedy of Errors: the London Ambulance Service Case Study.
4. Report of the Inquiry into the London Ambulance Service (February 1993), by A. Finkelstein.
5. Richard de Neufville. "The Baggage System at Denver: Prospects and Lessons," Journal of Air Transport Management.
6. Barry Shore. "Systematic Biases and Culture in Project Failures," Project Management Journal.

CONCLUSION: After studying these three papers, the conclusion is that good software engineering principles should be followed for any software project:
• The software development process should be properly planned with achievable and realistic deadlines. All three projects had poor planning and unrealistic deadlines.
• Great importance should be given to the requirements-gathering phase, and requirements should not be changed in the middle of development.
• Developers should follow proper coding standards so that there are no issues during the integration of different modules.
• Time-critical projects require critical and solid reasoning, good anticipation of problems, and risk management.
• The project schedule should reserve a good portion of time for testing the developed software product.
• Finally, the complexity of the system should be kept at manageable levels, and the system should be tested effectively.

LONDON AMBULANCE SYSTEM

In October 1992 the Computer Aided Despatch (CAD) system developed by Systems Options was deployed for the London Ambulance Service (LAS). The goal of the software system was to automate the ambulance service process for the LAS in the city of London, United Kingdom.

The implemented project was a major failure due to a variety of factors. Every component of good, state-of-the-art practice was ignored: software engineering guidelines were disregarded by management, and the authorities neglected basic management principles. The working of the LAS can be summarized as follows: the system receives requests by phone call and dispatches an ambulance based on the nature of the emergency and the availability of resources. An automatic vehicle locating system (AVLS) and mobile data terminals (MDTs) were used for automatic communication with the ambulances.

Some of the major reasons for the failure of the London Ambulance System:
• The deadline given for completion of the project was six months. A project of such magnitude cannot be completed in so short a time.
• The software was incomplete and not fully developed. The individual modules were tested, but the software was not tested as an integrated system.
• The resilience of the hardware under full load had not been tested before deployment of the software.
• A flash-cutover strategy was used to implement the system, which was high risk, and there were no backup systems to revert to on failure.
• Inappropriate and unjustified assumptions were made during the specification process, among them:
  - complete accuracy and reliability of the hardware system;
  - perfect location and status information;
  - cooperation of all operators and ambulance crew members.
• Lack of consultation with prospective users of the system and subject matter experts. The software requirement specification was excessively prescriptive, incomplete, and not formally signed off.
• The London Ambulance Service underestimated the difficulties involved during the project blastoff phase.
• Inadequate staff training. Crew members were not fully trained on the operation of the new software, and their prior experience was not drawn upon in its design.

The Report of the Inquiry into the London Ambulance Service by Anthony Finkelstein gives further information about the failure of the system. Some of its findings are listed below:
• It states that "the CAD system implemented in 1992 was over ambitious and was developed and implemented against an impossible timetable".
• The LAS Committee was given the wrong impression that the software contractor had prior experience with emergency systems; this was misleading in awarding the contract to Systems Options.
• Project management throughout the development and implementation process was inadequate and at times ambiguous. A major project like this requires full-time, professional, experienced project management, which was lacking.
• The computer system did not fail in a technical sense; the increase in calls on October 26 and 27, 1992 was due to unidentified duplicate calls and call-backs from the public in response to ambulance delays.
• "On 4th November 1992 the system did fail. This was caused by a minor programming error that caused the system to crash."

VIRTUAL CASE FILE SYSTEM

The primary goal of the Virtual Case File (VCF) system was to automate the FBI's paper-based work environment, allow agents and intelligence analysts to share vital investigative information, and replace the obsolete Automated Case Support (ACS) system.

In ACS, tremendous time is spent processing paperwork, faxing, and FedExing standardized memos. The VCF system aimed to centralize IT operations and remove the redundancy present in the various databases across the FBI. In September 2000 the FBI's information technology upgrade project was under way. It was divided into three parts:
• the Information Presentation Component,
• the Transportation Network Component,
• the User Application Component.
The first part involved distribution of new Dell computers, scanners, printers, and servers.

The second part would provide secure wide area networks, allowing agents to share information with their supervisors and each other. The third part was the Virtual Case File. The VCF project was awarded to a US government contractor, Science Applications International Corporation (SAIC), under a cost-plus-award-fee contract. The project was of great importance because the FBI lacked the ability to know what it knew; there was no effective mechanism for capturing or sharing its institutional knowledge. The project was initially led by former IBM executive Bob E. Dies. On 3rd December 2003, SAIC delivered the VCF to the FBI, only to have it declared dead on arrival.

The major reasons for the failure of the VCF system can be summarized as:
• The project lacked clearly defined schedules and proper deadlines; there was no formal project schedule, and communication was poor between the development teams, which had been divided into eight groups to speed up completion.
• The software engineering principle of reusing existing components was ignored. SAIC was developing an e-mail-like system even though the FBI was already using an off-the-shelf software package.
• The deployment strategy was flash cutover. It is a risky way of deploying a system, as everything is changed in a single shot.
• The project violated the first rule of software planning: keep it simple. The requirements document was so exhaustive that rather than describing what functions the system should perform, it also dictated how those functions should be implemented.
• Developers coded their modules to make individual features work but were not concerned about integrating the whole system.

No coding standards were followed, and hence there was difficulty in the integration process. Further problems included:
• The design requirements were poorly defined and kept changing throughout the development phase. The high-level documents, including the system architecture and system requirements, were neither complete nor consistent.
• There was no plan to guide hardware purchases, network deployments, and software development.
• Appointing a person with no prior management experience to manage a critical project such as this (the appointment of Depew as VCF project manager) was a grave mistake.
• The project lacked transparency, both within SAIC and between SAIC and the FBI.
• The infrastructure, both hardware and network, was not in place to thoroughly test the developed Virtual Case File system, which was essential for a flash-cutover deployment.
• The requirements and design documentation were incomplete and imprecise, requirement and design tracings had gaps, and maintenance of the software was costly.
• According to the report by Harry Goldstein, "there was 17 'functional deficiencies' in the deployed Virtual Case File System".

For example, it didn't have the ability to search for individuals by specialty and job title. All of these factors contributed to the failure of the Virtual Case File system, which wasted a great deal of taxpayers' money.

AUTOMATIC BAGGAGE SYSTEM

The automatic baggage system designed for the Denver International Airport is a classic example of a software system failure of the 1990s. Wanting greater airport capacity, the city of Denver decided to construct a state-of-the-art automated baggage handling system. Covering a land area of 140 square kilometers, the Denver airport has 88 airport gates across 3 concourses.

The fully automated baggage system was unique in its complexity because of the massive size of the airport and its novel technology. The only other airports with such systems are San Francisco International Airport, the international airport in Frankfurt, and the Franz Josef Strauss Airport in Munich. This project was far more complex than the others: it had 12 times as many carts as existing comparable systems. The contract for the automatic baggage system was given to BAE Automated Systems. In 1995, after many delays, the baggage system was deployed and proved a major failure.

The baggage carts derailed, luggage was torn, and the system completely failed. The system was then redesigned with less complexity and opened 16 months later.

GOALS OF THE PROJECT: The system called for replacing the traditional slow conveyor belts with telecars that roll freely on underground tracks. It was designed to carry up to 70 bags per minute to and from baggage check-in and check-out at speeds up to 24 miles per hour. This would allow the airlines to receive checked baggage at their aircraft within 20 minutes. The automatic baggage system was critical because aircraft turnaround time was to be reduced to as little as 30 minutes.

Faster turnaround meant quicker operations and increased productivity. The installers are quoted as having planned "a design that will allow baggage to be transported anywhere within the terminal within 10 minutes".

PROJECT SCOPE: The Denver International Airport has three concourses, and initially the project aimed at automating all three. Later, only Concourse B was designed to be automated. The project was subsequently redefined to handle only outbound baggage; it does not deal with the transfer of bags.

STAKEHOLDERS:

The major stakeholders in the project were:
• the Denver International Airport management;
• BAE Automated Systems;
• the airline management.
According to Robertson & Robertson, during the project blastoff phase all stakeholders must be identified and asked for their input on the requirements. In the ABS project the airline management was not involved in the blastoff meetings and was excluded from the discussions. Risks should also have been analyzed properly during blastoff, which was another drawback of this system.

This was a perfect example of failure to perform risk management. The cost estimation of the project was incorrect, as costs exceeded the estimate during development. The aspects in which the project blastoff fell short can be summarized as follows:
• underestimation of complexity;
• poor stakeholder management;
• poor design;
• failure to perform risk management.
There were only three "intense" working sessions to discuss the scope of the project and the agreement between the airport management and BAE Automated Systems.

Although BAE Automated Systems had been working on the construction of the baggage system in Concourse B for United Airlines, three working sessions were not sufficient to collect all the requirements for the automated baggage system. This clearly reflects poor software engineering practice, because requirements are the foundation a project is built upon. Reports indicate that the two-year deadline for construction of the automatic baggage system was inadequate:
• "The complexity was too high for the system to be built successfully" (The Baggage System at Denver: Prospects and Lessons, Dr. R. de Neufville, Journal of Air Transport Management, Vol. 1, No. 4, Dec. 1994, pp. 229–236).
• None of the bidders offered to finish the project within two years.
• Experts from Munich airport advised that a much simpler system had taken two full years to complete and had been system-tested thoroughly for six months before the opening of the Munich airport.
Despite all this information, the decision to continue with the project was not based on sound engineering principles.

ABS REQUIREMENTS, DESIGN AND IMPLEMENTATION

The decision that the airport management would build the automatic baggage system was taken two years before the opening of the new Denver International Airport. Initially, Concourse B, meant for United Airlines, was to be constructed by BAE Automated Systems, and all other airlines were to construct their own baggage handling mechanisms. Later the Denver airport management took over responsibility for constructing the entire automatic baggage system.

The integrated nature of the ABS meant that the airport would look after its own facility and have central control. BAE's plan for Concourse B was expanded to the other concourses, a major change in the strategy of the airport construction. Moreover, the airport management believed that an automated baggage system would be more cost effective than a manual system, given the size of the massive airport. During the development phase the requirements kept changing, adding complexity to the project. Though the contract clearly stated that no changes in requirements would be accommodated, changes were accepted to meet stakeholder needs: for example, the addition of ski equipment racks, of a maintenance track to allow carts to be serviced without being removed from the rails, and of the ability to handle oversized baggage. The baggage system and the airport building shared physical space and services such as the electrical supply. Hence the designers of the physical building and the designers of the baggage system needed to work as one integrated team with a great deal of interdependency.

Since construction of the airport had already started, the building designers made general allowances in the places where they thought the baggage system would go. The designers of the automatic baggage system therefore had to work within constraints that were already in place. For example, sharp turns had to be made because of these constraints, and these were one of the major factors causing bags to be ejected from the carts. The design of the automatic baggage system, as described in "Systematic Biases and Culture in Project Failures" (Project Management Journal), was as follows:
• Luggage was first to be loaded onto conveyor belts, much as in a conventional baggage handling system.
• These conveyors would then deposit the luggage in carts controlled by computers.
• The luggage would travel at 17 miles per hour to its destination, as much as one mile away.
• The automatic baggage system would include around 4,000 baggage carts traveling throughout the airport under the control of 100 computers, with processing power up to 1,400 bags per minute.
However, the design with the above architecture failed, as it was not able to handle variable load.

The system also suffered from various other problems:
• The software was sending carts out at the wrong times, causing jams and in many cases sending carts to the wrong locations.
• The baggage system continued to unload bags even when they were jammed on the conveyor belt.
• The fully automated system might never be able to deliver bags consistently within the times and at the capacity originally promised.
• Bags can only be unloaded from the aircraft while the unloading conveyor belt is moving, and this belt moves only when there are empty carts. Empty carts arrive only after they have deposited their previous loads; this is a cascade of queues.
• Achieving high reliability also depended on the reliability of the mechanical systems and of the computers that controlled the baggage carts.
• Errors could occur while reading or transmitting information about destinations. Among the scenarios in which these errors could take place:
  1. The baggage handler may place a bag on the conveyor with the label hidden.
  2. The baggage may have two labels on it, one from a previous flight.
  3. The labels may be mutilated or dirty.
  4. The label may not lie in the direction of view of the laser reader.
  5. The laser may malfunction, or the laser guns may stop reading labels.
• Reading this information is vital, since the whole system depends on the information read from the labels, which must then be transmitted by radio to devices on each of the baggage carts.
• There is no available evidence of effective testing of the system's capability to provide reliable delivery to all destinations under variable patterns of load.

This sensitivity to variable demand is known as the line-balancing problem: it is crucial to control the capacity of the system so that all lines of flow receive balanced service. The problem is avoided by eliminating situations in which some lines get little or no service, so that no connection simply stops functioning; in other words, by controlling the emptiness. The failure also stemmed from the entire system being developed within a two-year deadline, so the automatic baggage system was never completely tested with variable loads.
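The line-balancing point can be made concrete with a toy fluid model (a sketch only, not the actual DIA control logic; all rates here are invented): a line whose allocated cart capacity falls below its arrival rate accumulates bags without bound, even when total capacity is sufficient.

```python
# A toy model of the line-balancing problem described above. Each flow
# line has an arrival rate (bags/tick) and a service rate (empty carts
# allocated to it per tick). All rates are invented for illustration.
def queue_growth(arrival_rate, service_rate, ticks):
    """Queue length after `ticks` steps; bags pile up whenever a line's
    service falls short of its arrivals."""
    q = 0.0
    for _ in range(ticks):
        q = max(0.0, q + arrival_rate - service_rate)
    return q

# Four carts/tick of total capacity in both scenarios.
# Balanced: every line's service matches its arrivals, so no queues form.
balanced = [queue_growth(a, s, 100) for a, s in [(2, 2), (1, 1), (1, 1)]]
# Unbalanced: same total capacity, but the third line gets no empty
# carts at all and its queue grows without bound.
starved = [queue_growth(a, s, 100) for a, s in [(2, 3), (1, 1), (1, 0)]]
```

With these rates, `balanced` stays at `[0.0, 0.0, 0.0]` while the starved line's queue reaches 100 bags after 100 ticks, which is the "little or no service" failure mode the inquiry describes.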

Lack of testing was thus a major reason for the failure. These are the major factors that led to the failure of the automatic baggage system at Denver International Airport. Subsequently, a much less complex system was designed and implemented sixteen months later. The newly designed system would:
• serve only one concourse, Concourse B for United Airlines;
• operate at half the planned capacity on each track;
• handle only outbound baggage at the start;
• not deal with transfer bags.

COMPARISON OF ABS, VCF AND LAS PROJECTS
• All three management teams wanted the software system built quickly without taking the system requirements into consideration.
• Hence all the systems had unrealistic deadlines to meet.
• Because of these unrealistic deadlines, the systems did not follow proper software engineering standards and principles.
• In all three projects the requirements-gathering activity during project blastoff was improper and incomplete, so the requirements kept changing during the development phase.
• There was a lack of consultation with stakeholders and prospective users. In all three projects the software requirement specification was excessively prescriptive, incomplete, and not formally signed off.
• None of the three systems was properly tested before deployment, owing to lack of time and tight schedules. The timeline was not reasonable for any of the projects.
• There was poor communication between the developers, customers, and clients in all the projects.
• The identification of stakeholders and the collection of requirements from stakeholders and subject matter experts were improper and incomplete.
ASPECT | ABS | VCF | LAS
Deployment strategy | Deployed in a single phase, with a major failure of the system | Flash cutover used to replace the ACS system | Flash cutover used to replace the existing system
Project schedule/deadline | Very tight schedule of two years to implement | Overambitious schedule | Very tight deadline of two years (1990–1992)
Project planning | Poor planning; development was decided on only two years before completion of the airport | Poor planning and constantly changing milestones | Good engineering practices were ignored
Software requirement specification | Kept changing to meet stakeholder needs | Slowly changing design requirements | On-the-fly code and requirement changes
Project blastoff | Only three intense requirements sessions, which was inadequate | Did not collect all the requirements properly | Left out the views of customers and subject matter experts
Reusability | No backup system to reuse | An existing e-mail-like system could have been reused, but a new mail system was written | Existing communication devices in the ambulance system
Coding/testing | Not tested with variable load | Followed the spiral development model; not tested as a whole | Backup dispatch system not tested; overall software not system-tested
System design | Too complex | Not baselined and kept changing | Incomplete
Bugs | System was unable to detect bugs | 59 issues and sub-issues identified | 81 known bugs in the deployed system
Assumptions/dependency | Dependent on the computers that controlled the baggage carts | No major assumptions made | Assumed perfect location information; dependent on MDT communications

PERSONAL REFLECTION:
• After reading about all three projects, I now understand that software development is not just about coding properly; there are various other aspects, such as requirements gathering, risk analysis, and testing.
• Requirements gathering plays a vital role in software development and must be done in consultation with all stakeholders and customers of the software.
• Understanding the complexity of the software being developed is essential.
• Proper planning and scheduling of development activities matter; deadlines should be realistic and achievable.
• An appropriate software engineering model should be used for development, such as the waterfall model, Boehm's spiral model, incremental development, or agile software development.
• Last but not least, the developed software should be thoroughly tested to find flaws and fix them.

REFERENCES:
1. H. Goldstein. Who Killed the Virtual Case File? IEEE Spectrum, Sept. 2005, pp. 24–35.
2. Statement of Glenn A. Fine, Inspector General, US Dept. of Justice, 27 July 2005.
3. A. Finkelstein and J. Dowell. A Comedy of Errors: the London Ambulance Service Case Study. Proc. 8th Int.

Workshop on Software Specification and Design (IWSSD96), pp. 2–4, Velen, Germany, 1996.
4. Report of the Inquiry into the London Ambulance Service (February 1993), International Workshop on Software Specification and Design Case Study. Electronic version prepared by A. Finkelstein, with kind permission from the Communications Directorate, South West Thames Regional Health Authority.
5. Richard de Neufville. "The Baggage System at Denver: Prospects and Lessons," Journal of Air Transport Management, Vol. 1, No. 4, Dec. 1994, pp. 229–236.
6. Barry Shore. "Systematic Biases and Culture in Project Failures," Project Management Journal, Vol. 39, No. 4, 2008, pp. 5–16.


Thesis on Software Project Risk Management

ABSTRACT

Title of Thesis: A SYSTEMS MODELING DESIGN UTILIZING AN OBJECT-ORIENTED APPROACH CONCERNING INFORMATION RISK MANAGEMENT
Degree candidate: Noriaki Suzuki
Degree and year: Master of Science in Systems Engineering, 2005 Fall
Thesis directed by: Nelson X. Liu, Assistant Research Scientist, Institute for Systems Research

Adopting advanced information technologies across today's broad application fields requires precise security. However, security problems involving information privacy have occurred frequently over the last five years despite the contributions of these technologies.

To respond to the need for securing information privacy, the Information Privacy Law was enacted on April 1, 2005 in Japan. One response to this law's enforcement is a demand for a higher level of information risk management and a search for more effective tools for identity protection and problem-solving. Two examples of such tools are RAPID and IRMP. However, there is no established system-development model for either of these tools. Further developments to improve RAPID and IRMP remain as new challenges.

In this thesis, a new approach to developing a system security model for information risk management is proposed. To demonstrate this approach, an object-oriented language is used.

A SYSTEMS MODELING DESIGN UTILIZING AN OBJECT-ORIENTED APPROACH CONCERNING INFORMATION RISK MANAGEMENT

By Noriaki Suzuki

Thesis submitted to the Faculty of the Graduate School of the University of Maryland, College Park, in partial fulfillment of the requirements for the degree of Master of Science, 2005 Fall

Advisory Committee:
Dr. Nelson X. Liu, Assistant Research Scientist, Institute for Systems Research
Professor Eyad Abed, Director of the Institute for Systems Research
Professor Michel Cukier, Assistant Professor, Reliability Engineering

© Copyright by Noriaki Suzuki 2005

ACKNOWLEDGEMENTS

I would like to express my sincere thanks to my advisor, Dr. Nelson X. Liu, for his strong direction and guidance throughout this work. I could not have completed the thesis without his constant advice and help. I would also like to thank my secondary advisor, Lee Strickland, who helped me analyze current security breaches and investigate the information risk assessment in Chapter 3.

I would also like to thank Prof. Eyad Abed and Prof. Michel Cukier for agreeing to serve on my committee. Furthermore, I am grateful to my friends Sivakumar Chellathurai, Suchee Nathan, and Brent Anderson for their help and valuable comments on the written form of this thesis. Many thanks to Jonathan Eser, Daniela Villar del Saz, Gokul Samy, and Pampa Mondal for their friendship and support throughout my studies in Maryland; without their support I probably would not be where I am now. Finally, I am grateful to the systems engineering program at the University of Maryland, College Park.

This study has equipped me with the skills I need to make a more significant contribution to the world and the tools I will need to overcome obstacles I may face in the future.

TABLE OF CONTENTS

1. INTRODUCTION
2. BACKGROUND
   2-1. Review of Existing Studies
   2-2. Information Risk Management
   2-3. Systems Modeling
      2-3-1. Meta Model
      2-3-2. UML
3. RISK ASSESSMENT USING THE CURRENT SECURITY ISSUES
   3-1. Risk Assessment Methodology
   3-2. Security Risk Assessment
   3-3. Suggestions for Preventing Security Breach
4. STATIC AND DYNAMIC INFORMATION POLICIES
   4-1. Definition
   4-2. Static Information Policy
   4-3. Dynamic Information Policy
      4-3-1. Sample Dynamic Policy 1 (Dynamic/ Confidentiality, Availability/ Access Control, Intrusion Detection/ [SB-7], [SB-12])
      4-3-2. Sample Dynamic Policy 2 (Dynamic/ Availability, Accountability/ Intermediate Control, Intrusion Detection/ [SB-12], [SB-13])
5. DEVELOPING THE SECURITY MODEL WITH UML
   5-1. Sample System Overview
      5-1-1. System Boundary
      5-1-2. Use Case
      5-1-3. Scenarios
   5-2. Structural System Model corresponding to the information policies
      5-2-1. Class Description
      5-2-2. Class Diagram
      5-2-3. Object Description
      5-2-4. Object Diagram
   5-3. Behavioral System Model corresponding to the information policies
      5-3-1. Activity Diagram
   5-4. Systems Verification
      5-4-1. Test Data
      5-4-2. Data Analysis
      5-4-3. Improvements
      5-4-4. Results
6. CONCLUSIONS
7. FUTURE EFFORT
APPENDIX A: SUMMARY OF SECURITY BREACH
REFERENCES

LIST OF TABLES

Table 1: Relative Cost to Correct Security Defects by Stage
Table 2: Security Defects by Category
Table 3: Comparison of the Approach of This Thesis to Other Approaches
Table 4: Terms for Risk Measurement
Table 5: Probability Levels of an Undesired Event
Table 6: Severity Levels of Undesired Event Consequences
Table 7: Risk Assessment Matrix
Table 8: Security Levels of Undesired Event for an Asset in Information Risk Assessment
Table 9: Rating for the Probability of Occurrence
Table 10: Rating for the Security Level
Table 11: Category Table for Security Breaches
Table 12: Each Assessment Rating
Table 13: Asset Assessment Worksheet
Table 14: Rating Table for Threat Assessment
Table 15: Threat Assessment Worksheet
Table 16: Rating Table for Vulnerability Assessment
Table 17: Vulnerability Assessment Worksheet
Table 18: Risk Assessment and Countermeasure Options Worksheet
Table 19: Security Properties
Table 20: Access Control Matrix
Table 21: Classification for A Sample Information Policy
Table 22: Classification for NTFS
Table 23: Classification for Information Policy 1 (Static)
Table 24: Classification for Information Policy 2 (Static)
Table 25: Rule for each Warning Level for Sample Dynamic Information Policy 1
Table 26: Statistical Analyzed Max Number of Accesses I
Table 27: Statistical Analyzed Max Number of Accesses II
Table 28: The Final Estimated Statistical Analyzed Max Number of Accesses
Table 29: Rule for each Audit Level for Sample Dynamic Information Policy 2
Table 30: Audit Data 1 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 31: Statistical Data 1 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 32: Audit Data 1 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 33: Statistical Data 1 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 34: Object Confidential Level
Table 35: Audit Data 2 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 36: Statistical Data 2 for the Sample Dynamic Information 2, Day 1 in the (dy) Period
Table 37: Audit Data 2 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 38: Statistical Data 2 for the Sample Dynamic Information 2, Day 2 in the (dy) Period
Table 39: Summary of the Statistical Variables in the Period
Table 40: Procedure Output Statistical Variable in the Period
Table 41: The Final Estimated Statistical Analyzed Audit Level
Table 42: Identified Objects and Class Table
Table 43: Class Description for the Security System Model
Table 44: Object Description for the Security System Model
Table 45: Load Access Control Matrix for the Security System Model
Table 46: Objects and Data for Security Mechanism without Policy for Countermeasure 1
Table 47: Objects and Data for Security Mechanism Static Policy for Countermeasure 1
Table 48: Objects and Data for Security Mechanism Dynamic Policy 1
Table 49: Objects and Data for Security Mechanism without Policy for Countermeasure 2
Table 50: Objects and Data for Security Mechanism Static Policy for Countermeasure 2
Table 51: Objects and Data for Security Mechanism Dynamic Policy 2
Table 52: Validation of Information Policies

LIST OF FIGURES

Figure 1: Depicting for Information Risk
Figure 2: Overview of Information Risk Management Model
Figure 3: Meta Model Architecture
Figure 4: UML 2.0 Architecture
Figure 5: Process Flow to Develop the System Model
Figure 6: Information Risk Profile
Figure 7: Structure of Workflow for Information Policy Setting
Figure 8: Graphical Representative of Probability of Risk Occurrence
Figure 9: Graphical Representative of Information Asset
Figure 10: Graphical Representative for Risk Level
Figure 11: Methodical Categorization of Information Security Policy
Figure 12: Access Control Model
Figure 13: Protection Rings
Figure 14: Audit Data for the Sample Dynamic Information 1, Day 1 in the (dy) Period
Figure 15: Audit Data for the Sample Dynamic Information 1, Day 2 in the (dy) Period
Figure 16: Audit Data for the Sample Dynamic Information 1, Day 3 in the (dy) Period
Figure 17: Cross Table of The Number of Accesses in the (dy) period
Figure 18: Sample Event Log from Apr 18 to Apr 19 in 2005
Figure 19: The Crucial Event Log on Apr 18, 2005
Figure 20: The Crucial Event Log on Apr 19, 2005
Figure 21: Sample Credit Card Online System
Figure 22: Use Case for the Secured System Model
Figure 23: Class Diagram for the Security System Model (Part I)
Figure 24: Class Diagram for the Security System Model (Part II)
Figure 25: Object Diagram :: UserPC Class
Figure 26: Object Diagram :: UserInf Class
Figure 27: Object Diagram :: ACM Class
Figure 28: Object Diagram :: Dynamic Policy Class
Figure 29: Object Diagram for the Security System Model
Figure 30: Activity Diagram for User Login (Part I)
Figure 31: Activity Diagram for User Login (Part II)
Figure 32: Activity Diagram for User Access (Part I)
Figure 33: Activity Diagram for User Access (Part II)
Figure 34: Activity Diagram for Dynamic Policy 1 (Part I)
Figure 35: Activity Diagram for Dynamic Policy 1 (Part II)
Figure 36: Activity Diagram for Dynamic Policy 2 (Part I)
Figure 37: Activity Diagram for Dynamic Policy 2 (Part II)
Figure 38: Required Material to Generate Test Data
Figure 39: Datasheet of Number of Accesses by User 1, 2, and 3
Figure 40: Datasheet of Totaled Number of Accesses each Day
Figure 41: Datasheet of Procedure Process for Dynamic Information Policy 1
Figure 42: Datasheet of Frequency of Access of User
Figure 43: Datasheet of Occupancy Rate of User
Figure 44: Datasheet of Frequency of Access to an Object
Figure 45: Datasheet of Totaled Number of Accesses and Maximum Occupancy each Day
Figure 46: Datasheet of Totaled Frequency of Access to an Object
Figure 47: Datasheet of Procedure Process for Dynamic Information Policy 2
Figure 48: Audit Level Matrix in Test Data
Figure 49: Datasheet of Number of Accesses on Feb. 12, 2003
Figure 50: Datasheet of Procedure Process for User 1
Figure 51: Datasheet of Procedure Process after Improvement
Figure 52: Example of Revision for Maximum Number of Accesses for each Warning Level
Figure 53: Audit Level Matrix in Test Data after Revision
Figure 54: SCD of Security Mechanism without Policy for Countermeasure 1
Figure 55: SCD of Result without Policy for Countermeasure 1
Figure 56: SCD of Security Mechanism with Static Policy 1
Figure 57: SCD of Result with Static Policy 1
Figure 58: SCD of Security Mechanism with Dynamic Policy 1
Figure 59: SCD of Result with Dynamic Policy 1
Figure 60: Datasheet of Objects and Data from Test Data
Figure 61: SCD of Security Mechanism without Policy for Countermeasure 2
Figure 62: SCD of Result without Policy for Countermeasure 2
Figure 63: SCD of Security Mechanism with Static Policy 2
Figure 64: SCD of Result with Static Policy 2
Figure 65: SCD of Security Mechanism with Dynamic Policy 2
Figure 66: SCD of Result with Dynamic Policy for Countermeasure 2

1. Introduction

With the advent of the digital age, a coherent information policy is required to support the rapid flow of information. Internet technologies have changed the face of both business and personal interaction. Introducing a system to the Internet results in immediate world-scale network exposure. Due to this exposure, information trade is easy and fast. However, this exposure also results in security breaches involving information leaks, which occur daily throughout the world. In Japan, the total number of such problems last year amounted to 2,297 instances.

Just from January to July of 2005, the number of Internet privacy losses had already reached 7,009 cases. To address this vulnerability, the Information Privacy Law was enacted in Japan on April 1, 2005. After the law was enacted, an onslaught of system security products was released, with many more on the way; however, these products will not be effective unless both system developers and users understand what kind of information policy a system requires and how to apply that policy when developing security systems.

According to the article "The Way to Develop the Secure Information Risk Management System", which analyzed 61 cases of privacy information security breach problems published on the Net Security site, less than 10% of problems are actually caused by hacker attacks. Nearly all problems originate at the design stage (41%) or the maintenance stage (52.5%). This demonstrates the lack of regard for information risk management at the design and maintenance stages.

(Sources: The present state of the privacy information breach problem, http://www.ahnlab.co.jp/virusinfo/security_view.asp?news_gu=03&seq=86&pageNo=4; Ministry of Internal Affairs and Communications, information privacy law site, http://www.soumu.go.jp/gyoukan/kanri/kenkyu.htm; Systemwalker Desktop Keeper, Fujitsu, http://systemwalker.fujitsu.com/jp/desktop_keeper/; IPLOCKS Information Risk Management Platform, IPLOCKS; Overreacted to the information privacy law, Yoshiro Tabuchi, NIKKEI BP, July 5, 2005, http://nikkeibp.jp/sj2005/column/c/01/index.html?cd=column_adw)

The same article concludes that the problems caused at the design stage are due to a lack of understanding on the part of the systems engineer, and that these security risks compound as systems near completion. The same poor understanding of security and information risk management on the part of operators is responsible for delayed responses during maintenance. In addition, the report "Information security: why the future belongs to the quants" indicates that developers should work on quality early in the process, where the cost is lower, as Table 1 shows. The cost to correct a defect at the design stage is the lowest of all stages.

On the other hand, the cost at the maintenance stage is the highest: roughly 100 times the cost at the design stage. In addition, security defects tend to occur at certain design stages more than others, as depicted in Table 2.

STAGE           RELATIVE COST
Design            1.0
Implementation    6.5
Testing          15.0
Maintenance     100.0

Table 1: Relative Cost to Correct Security Defects by Stage

CATEGORY                        ENGAGEMENTS WHERE OBSERVED   DESIGN RELATED   SERIOUS DESIGN FLAWS
Administrative interfaces       31%                          57%              36%
Authentication/access control   62%                          89%              64%
Configuration management        42%                          41%              16%
Cryptographic algorithms        33%                          93%              61%
Information gathering           47%                          51%              20%
Input validation                71%                          50%              32%
Parameter manipulation          33%                          81%              73%
Sensitive data handling         33%                          70%              41%
Session management              40%                          94%              79%
Total                           45%                          70%              47%

Table 2: Security Defects by Category

The categories printed in bold in the above table relate to information leak problems. As this evidence shows, it is crucial to address security issues at the design stage of system development. To overcome the abovementioned problems, several information risk management tools, such as RAPID and IRSP, have been developed. RAPID is used for defining necessary business processes and developing guidelines to develop a security system. IRSP provides information risk management programs to support and improve existing client security systems. However, these tools identify only the management steps needed to enforce an existing security system; they do not expand on the methodology and models necessary for developing new systems. In developing new systems that require a significant information policy, the methodology, approach, system modeling, and system model verification must be clearly outlined. Such an outline significantly helps a systems engineer work on quality early in system development. This thesis focuses on information risk assessment, a method of developing information policy, system modeling, and system model verification, all of which must be considered in the design stage.

(Sources: The way to develop the secure information risk management system, D add ninth Co., Ltd., Oct 1, 2002, http://www.dadd9.com/tech/sec4manager.html; NetSecurity, Livin' on the EDGE Co., Ltd. & Vagabond Co., Ltd., https://www.netsecurity.ne.jp/; Information security: why the future belongs to the quants, Security & Privacy Magazine, IEEE, July-Aug. 2003, Volume 1, Issue 4, pp. 24-32; Information Security Program Development Using RAPID, http://www.nmi.net/rapid.html; Information Risk Management Program, CSC CyberCare, http://www.csc.com/industries/government/knowledgelibrary/uploads/807_1.pdf)
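The cost multipliers in Table 1 make the economics of early quality work concrete. The sketch below scales a defect's correction cost by the stage at which it is caught; the $500 design-stage baseline is a hypothetical assumption for illustration, not a figure from the cited article.

```python
# Relative cost to correct a security defect, by the stage at which it is
# found (the multipliers from Table 1). All figures are relative to the
# design stage.
RELATIVE_COST = {
    "design": 1.0,
    "implementation": 6.5,
    "testing": 15.0,
    "maintenance": 100.0,
}

def correction_cost(stage: str, design_stage_cost: float = 500.0) -> float:
    """Estimated cost of fixing one defect found at `stage`, given an
    assumed (hypothetical) cost of fixing it at the design stage."""
    return RELATIVE_COST[stage] * design_stage_cost

# A defect that costs $500 to fix at design time costs $50,000 at maintenance.
print(correction_cost("design"))       # 500.0
print(correction_cost("maintenance"))  # 50000.0
```

The same defect deferred from design to maintenance becomes two orders of magnitude more expensive, which is the quantitative argument behind addressing security at the design stage.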

Newly developed systems must be forward compatible with new technologies to counter threats that may arise during the maintenance stage. The design is required to:

- be suitable for the addition of various security components;
- provide re-usable security components for various systems;
- provide a common understanding among system developers.

Object-oriented design is the design methodology best suited to meeting the aforementioned requirements, and the Unified Modeling Language (UML), the premier meta-modeling language for analysis and design in the software engineering community, will be used.

For example, modeling systems with UML creates a common understanding for both developers and user domain experts (OMG (Object Management Group) official site, Unified Modeling Language, http://www.uml.org/). A better understanding of how the user systems function facilitates the detection of security problems early in system development. The following are the benefits of using the system development approach and methodology involving the information risk management model discussed in this thesis.

- Prioritizes security risk solutions: Information risk assessment helps developers identify the most pressing security issues and provide risk solutions.
- Secures private information: A well-grounded countermeasure and its security solution, based on a risk assessment, make systems secure.
- Reduces errors: Information risk management, system modeling, and system model verification at the design stage reduce security defects efficiently and effectively.
- Facilitates updates for new security threats: System modeling with UML helps developers update and reuse components in systems, and provides high compatibility with other systems.

- Provides understandable system design documents: Information risk management with a simple concept, and design documents using UML, are readily understandable to developers and stakeholders.

In this thesis, information risk management is introduced along with current security studies and a new model, which is developed herein. Next, following the presentation of this new model for information risk management, 20 current security breach issues will be analyzed and assessed using the risk management assessment method. Then, the three worst-case security breach issues will be addressed.

Third, information policies will be developed using countermeasures in response to the worst-case security breaches, and security mechanisms based on these information policies will be shown. Fourth, system modeling will be performed with UML. Fifth, the security mechanisms based on the dynamic information policies developed in this thesis will be verified. In addition, validation of the system model is shown: a state chart diagram for a case without policy, with a static policy, and with a dynamic policy is developed, and the cases are then validated with UPPAAL.
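The flavor of this verification step can be illustrated with a deliberately simplified sketch. UPPAAL itself checks networks of timed automata, but even with clocks set aside, the core question — can the mechanism ever take a transition the model forbids? — reduces to checking observed traces against an allowed-transition relation. The states and events below are hypothetical illustrations, not the thesis's actual state chart.

```python
# Minimal illustration (no clocks, hypothetical states/events) of the kind
# of check a model checker such as UPPAAL automates: does every observable
# trace of the security mechanism stay within the modeled transitions?
TRANSITIONS = {
    ("LoggedOut", "login_ok"): "LoggedIn",
    ("LoggedIn", "access_granted"): "LoggedIn",
    ("LoggedIn", "policy_violation"): "Locked",
    ("LoggedIn", "logout"): "LoggedOut",
}

def trace_is_valid(trace, start="LoggedOut"):
    """Replay a trace of events against the model; reject it as soon as an
    event is not permitted in the current state."""
    state = start
    for event in trace:
        key = (state, event)
        if key not in TRANSITIONS:
            return False  # the model forbids this event in this state
        state = TRANSITIONS[key]
    return True

print(trace_is_valid(["login_ok", "access_granted", "logout"]))            # True
print(trace_is_valid(["login_ok", "policy_violation", "access_granted"]))  # False
```

A real model checker explores all reachable states exhaustively rather than replaying single traces, but the pass/fail verdict it produces has exactly this character.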

UPPAAL is an integrated tool environment for modeling, validating, and verifying real-time systems modeled as networks of timed automata (UPPAAL homepage, http://www.uppaal.com/). Finally, this thesis will conclude by demonstrating how to develop the secured system model discussed herein. This thesis contributes to systems engineering by providing a framework for handling security issues faced by enterprises managing secure information.

2. Background

2-1. Review of Existing Studies

In this thesis, the main themes are outlining a methodology and modeling for developing systems that prevent information leaks.

Such an effort is significant for systems engineers, and similar studies are available. In this section, three similar existing studies are reviewed, their approaches are analyzed, and an outstanding methodology with model-based risk assessment for developing systems is identified.

1. Information Flow Analysis of Component-Structured Applications, Peter Herrmann, 2001

The diversity and complexity of information flow between components pose the threat of leaking information. Security analysis must be performed in order to provide suitable security solutions.

Systems are audited for vulnerabilities, threats, and risks. Based on the audit, effective safeguards are selected, designed, and configured. However, since information flow analysis tends to be expensive and error-prone, object-oriented security analysis and modeling are utilized. The approach employs UML-based object-oriented modeling techniques and graph rewriting in order to make the analysis understandable and to assure its accuracy even for large systems. Information flow is modeled based on Myers' and Liskov's decentralized label model, combining label-based read access policy models and declassification of information with static analysis. (Source: Computer Security Applications Conference, ACSAC 2001, Proceedings 17th Annual, Dec. 10-14, 2001, pp. 45-54.)

2. Developing Secure Networked Web-Based Systems Using Model-based Risk Assessment and UMLsec, Siv Hilde Houmb / Jan Jürjens, 2003

Despite a growing awareness of security issues in networked computing systems, most development processes used today still do not take security aspects into account. This paper shows a process for developing secure networked systems based on the CORAS framework, whose concept is model-based risk assessment using UMLsec.

UMLsec is an extension of the Unified Modeling Language (UML) for secures systems development. Enterprise information such as security policies, business goals, policies and processes are supported through activities in a model-based integrated development process. Security requirements at a more technical level can be expressed using UMLsec. Additionally, a support-tool for a mechanical analysis of such requirements is provided. 15 Decentralized Model for Information Control Flow In Proc. 16th ACM Symposium on Operating Systems Principles, A. C. Myers and B.

Liskov. A, Saint-Malo, France, 1997 Software Engineering Conference, Tenth Asia-Pacific 2003, 2003, Page 488 – 497 16 17 Towards a UML pro? le for model-based risk assessment, S. -H. Houmb, F. den Braber, M. S. Lund, and K. Stolen, In J? urjens et al. Business Component-Based Software Engineering, chapter Modelbased Risk Assessment in a Component-Based Software Engineering Process, K. Stolen, F. den Braber, T. Dimitrakos, R. Fredriksen, B. Gran, S. Houmb, Y. Stamatiou, and J. Aagedal, The CORAS Approach to Identify Security Risks, Kluwer, 2002, Pages 189–207 8 8 3. Model-based Risk Assessment to Improve Enterprise Security, Jan Oyvind Aagedal / Folker den Braber / Theo Dimitrakos§ / Bjorn Axel Gran / Dimitris Raptis‡ / Ketil Stolen, 200219 This paper attempts to define the required models for a model-based approach to risk assessment. CORAS is applied to provide methods and tools for precise, unambiguous, and efficient risk assessment of security critical systems since traditional risk assessment is performed without any formal description of the target of evaluation or results of the risk assessment.

CORAS provides a set of models to describe the target of assessment at the right level of abstraction, and a medium for communication between the different groups of stakeholders involved in a risk assessment. In one step of the risk treatment, a strengthening of the security requirements is suggested to handle identified security problems. In addition, several risk assessment methodologies are presented, such as HazOP and FMEA. HazOP is applied to address security threats involved in a system, and FMEA is applied to identify potential failures in the system's structure.

All components in the system's structure are expressed in UML. Many approaches can be taken in developing systems; the following table briefly compares the approach of this thesis to the three aforementioned approaches. (Sources: Model-based Risk Assessment to Improve Enterprise Security, Enterprise Distributed Object Computing Conference, EDOC '02, Proceedings, Sixth International, Sept. 2002, pp. 51-62; Security Assessments of Safety Critical Systems Using HAZOPs, R. Winther, O. A. Johnsen, and B. A. Gran, 20th International Conference on Computer Safety, Reliability and Security, SAFECOMP 2001, Hungary, 2001; FMEA Risk Assessment, http://www.tangram.co.uk/TI-HSE-FMEA-Risk_Assessment.html.)

#             Survey   Risk Analysis           Security Solution                                              Design   Tool
1             No       Common Criteria         Information Flow                                               UML      Java Beans-based components, MDR
2             No       CORAS framework         UMLsec                                                         UML      UML CASE tool Poseidon
3             No       CORAS, HazOp, FMEA      (does not apply)                                               UML      N/A
This Thesis   Yes      Risk Assessment (new)   Information policy extending the DOD standard and its procedure (new)   UML      UPPAAL

Table 3: Comparison of the Approach of This Thesis to Other Approaches

All of these approaches are useful and powerful for developing secure systems; however, the approach in this thesis may be more widely and easily applicable to recent systems, since its concept is very simple and its security mechanisms are based on countermeasures drawn from current information leak problems.

2-2. Information Risk Management

What is information risk management? According to an article on information risk management provided by the Nomura Research Institute,

information risk management comprises those policies which reduce the risk inherent in information processing. As enterprises invest in information-oriented systems, databases such as customer information databases have resulted in enterprises maintaining and using greater quantities of information. Poor information management may result in information leaks and hacker attacks, causing considerable loss and damage to the enterprise. For example, the information of 900,000 clients of Yahoo! BB in Softbank, one of the largest high-speed Internet connection services, was leaked through an ex-employee's misuse of its database system. The leaked information nearly spread to the Internet. The company paid $10 to each client as a self-imposed penalty for this lapse in security; total financial losses reached $9 million. While insider culprits must pay for their crimes, the responsible company must create an environment where such events are defended against. The following figure represents information risk. The upper left graph in the figure shows the probability of risk occurrence in each given process; the bottom left graph shows the magnitude of the assets involved in the information for each given process.

(Sources: Common Criteria for Information Technology Security Evaluation, International Standard ISO/IEC, 1998; Meta-Data Repository, MDR homepage, http://mdr.netbeans.org; UML CASE tool Poseidon, Gentleware homepage, http://www.gentleware.com; Department of Defense home page, http://www.defenselink.mil/; UPPAAL homepage, http://www.uppaal.com/; Understanding Information Risk Management, Nomura Research Institute, 2002, http://www.nri.co.jp/opinion/chitekishisan/2002/pdf/cs20020910.pdf; Nomura Research Institute home page, http://www.nri.co.jp/english/index.html)

The information risk level is determined by the combination of probability and assets. For instance, a combination of a high probability of risk exposure and a high asset value yields the highest risk level; conversely, a combination of a low probability of risk exposure and a low asset value yields the lowest risk level. In the case of the aforementioned security breach, the probability of risk exposure is once every 3 years, which is low. However, the value of the information assets is extremely high. Thus, the risk level will be middle or high.
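The qualitative rule just described can be sketched as a lookup over (probability of occurrence, asset value). The 3x3 matrix below is an illustrative assumption, not the thesis's exact assessment matrix.

```python
# Hedged sketch of risk-level determination: risk level as a function of
# the probability of risk occurrence and the value of the information
# assets. The specific matrix entries are assumptions for illustration.
RISK_MATRIX = {
    # (probability, asset value) -> risk level
    ("low", "low"): "low",        ("low", "medium"): "medium",    ("low", "high"): "high",
    ("medium", "low"): "medium",  ("medium", "medium"): "medium", ("medium", "high"): "high",
    ("high", "low"): "medium",    ("high", "medium"): "high",     ("high", "high"): "high",
}

def risk_level(probability: str, asset_value: str) -> str:
    return RISK_MATRIX[(probability, asset_value)]

# The breach discussed above: a low probability (once every 3 years) but an
# extremely high asset value still yields a middle-to-high rating (here: high).
print(risk_level("low", "high"))  # high
print(risk_level("low", "low"))   # low
```

The point of the lookup form is that asset value alone can dominate the rating, exactly as in the Yahoo! BB example where a rare event still carried an unacceptable level of risk.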

The risk level may fall between unacceptable risk and undesirable risk. (Source: The Japan Times, "Softbank leak extortionist won't serve time," July 10, 2004, http://search.japantimes.co.jp/print/news/nn07-2004/nn20040710a4.htm)

[Figure 1: Depicting for Information Risk — the probability of risk occurrence and the treated information assets per business process combine into a risk level; unacceptable risk is reduced through countermeasures.]

In response to emergent security problems, many government agencies in Japan and the United States now require information security management certification.

In Japan, the ISMS (Information Security Management System) certification was issued last year and has become a standard certification for information risk management. In the United States, Congress passed the Federal Information Security Management Act of 2002 (FISMA), which provides the overall information risk management framework for ensuring the effectiveness of information security controls that support federal operations and assets. (Sources: Understanding Information Risk Management, Nomura Research Institute, 2002, p. 94, http://www.nri.co.jp/opinion/chitekishisan/2002/pdf/cs20020910.pdf; Information Security Management System (ISMS) home page, http://www.isms.jipdec.jp/; Federal Information Security Management Act of 2002 (Title III of E-Gov), Computer Security Resource Center, http://csrc.nist.gov/policies/; Improving Oversight of Access to Federal Systems and Data by Contractors Can Reduce Risk, Wanja Eric Naef, GAO, April 2005.)

Information risk management is required in developing systems, and it has become the center of public attention in the last few years. As a result of this demand, many information risk management tools have been introduced.

Among these tools, RAPID and IRSP are notable. RAPID is useful for defining necessary business processes and developing guidelines to improve existing security systems, fitting security measures into existing systems. Its processes are 1) risk assessment, 2) security problem identification and awareness review, and 3) security program creation and support. IRSP provides an information risk management program to support and improve client security systems. The program defines certain client security policies and provides static and dynamic protection. The protection schemes are as follows:

Static Protection: Rules and definitions of security standards, security architecture, security services, recovery interfaces, and other additional security needs, based on countermeasures drawn from the personnel, physical, administrative, communications, and technology domains.

Dynamic Protection: A security protection program involving vulnerability alert processes, vulnerability assessment processes, monitoring services, and anti-virus program plans, based on information security best practices.

After analyzing both types of protection, the program shows certain security compliance standards and specifications based on the static protection.

In addition, the program assembles best protection practices and creates the proper program based on the dynamic protection. However, these tools do not provide any strict methodology or approach for developing new systems with information risk management policies. Certain system models or system development methodologies and approaches must be formally provided to remedy this deficiency. This deficiency in current information risk management tools is the motivation for this thesis. In conjunction with the concepts and ideas from the abovementioned tools and three existing methodologies, a new system development approach and methodology for information risk management will be introduced in this thesis. An overview of the system development model is as follows.

[Figure 2: Overview of Information Risk Management Model. Risk assessment (Chapter 3) classifies risks as unacceptable, undesirable, acceptable with review by management, or acceptable without review; the target risk to solve is determined, information policies 1 through n are determined (Chapter 4), and the information policy procedure maps them to security mechanisms 1 through m.]

Risk management is used to identify the risks involved in security breaches and to prioritize security solutions for those risks in any field. This will be described in detail in Chapter 3. Information policy consists of the rules developed for the target system in order to reduce or prevent the risk. A security mechanism is the procedure used to accomplish the information policy. Security mechanisms will be invoked in the subject system. This process will be shown in Chapter 4. In this thesis, some examples following this cycle will be shown in detail.

2-3. Systems Modeling

Modeling is a powerful technique for developing a system effectively and efficiently, and it offers many benefits to every participant in system development, such as stakeholders, system developers, and users. According to Mark Austin's lecture notes for the University of Maryland systems engineering program34, the benefits of using modeling are as follows:

Assistance in Communication
Assistance in Coordination of Activities
Ease of Manipulation
Efficient Trial-and-Error Experiments
Reduction of Development Time
Reduced Cost
Risk Management

34 ENSE 622 Lecture notes, Mark Austin, University of Maryland, 2004, Page 78 – 79

The system model provides experiments, rules, and useful information for designing, developing, and implementing the system. By applying the model, cost, time, and risk will be dramatically reduced. For this reason, the modeling process will be the main concern regarding information risk management in systems engineering.

2-3-1. Meta Model

What is the system model? How is it developed? The Meta Model Architecture will be introduced in order to answer these questions. Meta modeling is generally described using a four-layer architecture.

These layers represent different levels of data and metadata. Figure 3 shows an example of the layers used for modeling a target system.

[Figure 3: Meta Model Architecture35. The four layers are: M3, the meta-meta model (meta-meta-metadata, meta-meta-meta objects), e.g. the MOF Model with Package, Class, and Association; M2, the metamodel (meta-metadata, meta-meta objects), e.g. UML, IDL, and XML with Attribute, Object, and Method; M1, the application model (metadata, meta objects), e.g. a Person model with Teacher and Student; and M0, the data modeled (information, data objects), e.g. teachers Jose and Milton and students Nori, Jon, Mari, and Carn. Each layer is described by the layer above it.]

35 Using Metamodels to Promote Data Integration in an e-Government Application Scenario, Adriana Figueiredo, Aqueo Kamada, IEEE, 2003, Page 4

The four layers are:

Information: The information layer refers to actual instances of information. For example, there are two instances of data representing "Jose" and "Milton" as teachers, and four instances of data representing "Nori", "Jon", "Mari", and "Carn" as students.

Model: The model layer (also known as the metadata layer) defines the information layer, describing the format and semantics of the objects, and the relationships among the objects.

For example, the metadata specifies the “Person” class, and its instances, which are “Teacher” and “Student”. Relationships between objects are defined such as “Teach (Teacher Student)” and “Learn (Student Teacher)”. Metamodel: The metamodel layer (also known as the meta-metadata layer) defines the model layer, describing the structure and semantics of the model. For example, the meta-metadata specifies a system design that describes its structure and data-flow. The metamodel can also be thought of as a modeling language used to describe different kinds of systems.
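The relationship between the M1 model layer and the M0 information layer can be sketched in code. The sketch below is an illustrative Python rendering of the figure's Person/Teacher/Student example (the class and method names are chosen here for illustration, not taken from the source): the class definitions play the role of metadata, and the instances are the actual data of the information layer.

```python
# M1 (model / metadata layer): the classes describe the format and
# semantics of the objects and the relationships among them.
class Person:
    def __init__(self, name):
        self.name = name

class Student(Person):
    def __init__(self, name):
        super().__init__(name)
        self.teachers = []        # Learn(Student, Teacher) relationship

class Teacher(Person):
    def teach(self, student):     # Teach(Teacher, Student) relationship
        student.teachers.append(self)

# M0 (information layer): actual instances of data.
teachers = [Teacher("Jose"), Teacher("Milton")]
students = [Student(n) for n in ("Nori", "Jon", "Mari", "Carn")]
teachers[0].teach(students[0])    # Jose now teaches Nori
```

Moving up a layer, the grammar that defines what a "class" and an "inheritance relationship" are corresponds to the M2 metamodel, and the language defining that grammar corresponds to M3.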

Meta-metamodel: The meta-metamodel layer defines the metamodel layer, describing the structure and semantics of the meta-metadata. It is the modeling language that is used to define different kinds of metamodels. Typically, the meta-metamodel is defined by the system that supports the metamodeling environment. This thesis will focus on developing the M1 layer, which is a model involving metadata and meta objects, to develop the system model targeting secured systems with information policies to prevent threats of security breach.

2-3-2. UML

UML is one of the standard metadata models and modeling languages. A wide variety of object-modeling methodologies were developed during the 1980s, such as OMT, Booch, and OOSE. Although these modeling methodologies were similar, the languages and notations used to represent them were different. Moreover, the visual modeling tools that implemented these methodologies were not interoperable. UML quickly became the standard modeling language; it lets modeling take place at a higher level of abstraction, so that a model can be updated easily and re-used for other systems.

UML is a standardized modeling language consisting of an integrated set of diagrams, developed to help system and software developers accomplish the following tasks36:

Specification
Visualization
Architecture design
Construction
Simulation and Testing
Documentation

UML was originally developed with the idea of promoting communication and productivity among the developers of object-oriented systems. Currently, UML has been updated to UML 2.037. UML 2.0 has resolved many of the shortcomings of the previous version of UML, such as the lack of diagram interchange capability, inadequate semantics definition, and alignment with MOF. UML 2.0 has the following features38:

Improved semantics in the class diagrams
Support for large systems and business modeling
Diagram interchange capabilities
Aligned with MOF (Meta Object Facility) and MDA (Model Driven Architecture)

The UML 2.0 architecture39 is as follows.

[Figure 4: UML 2.0 Architecture. Diagrams divide into Structural Diagrams (Class, Object, Component, Package, Deployment, and Composite Structure Diagrams) and Behavioral Diagrams (Activity, Use-Case, and State-Machine Diagrams, Protocol State Machines, and the Interaction Diagrams: Sequence, Timing, Communication, and Interaction Overview Diagrams).]

UML 2.0 will be used since this meta-language is the most extensible and compatible with any system.

36 Excerpted from UML 2 for Dummies, Michael Jesse Chonoles, James A. Schardt, July 2, 2003, Page 14 – 15
37 UML 2.0, The Current Official Version, http://www.uml.org/#Articles
38 Excerpted from UML 2.0 in a Nutshell, Dan Pilone, Neil Pitman, Page 10 – 12
39 Excerpted from UML 2.0 in a Nutshell, Dan Pilone, Neil Pitman, Page 19

The system model development process flow using UML is presented as follows.

[Figure 5: Process Flow to Develop the System Model. Outline Requirements, then Develop Models, then Verification and Validation.]

First, outline requirements and rules for the system to prevent security breaches are described. Second, the system model is developed with the use of UML. UML is the most powerful meta-language for designing a system model; the language is understandable, easy to handle, and re-usable for other similar system models. Finally, the system model should be verified.

Many verification technologies have been presented recently. UPPAAL is one of the most powerful verification tools for a system. One scenario from the design will be selected and verified using UPPAAL. Complete verification of the system model will not be performed, since the purpose of this thesis is to introduce the system model development process concerning information risk management, not to complete the development of the system. Some sample designs and verifications of the particular scenario shown throughout this thesis will suffice.

The focus here will be on analyzing current security breaches and risk assessment, and on developing information policies for preventing unacceptable security breaches. Completing the system development will be taken into account for future work.

3. Risk Assessment Using the Current Security Issues

Risk assessment is one of the main components of the information risk management model. In this chapter, common risk assessment concepts will be used for the model, and then improved to suit information risk management in this thesis.

Finally, the risk level of current security breaches will be determined and security breaches involving unacceptable risk will be addressed; in addition, some suggestions for preventing security breaches will be presented.

3-1. Risk Assessment Methodology

[Figure 6: Information Risk Profile40. Threat, asset, and vulnerability together form the risk; countermeasures yield an adjusted risk.]

How are information risk levels measured for each security breach? How is the risk assessed? According to "The Executive Guide to Information Security"41, the risk level is determined by the set of assets in the organization and system, the threats to the assets, and the organization or system vulnerabilities. In this book, the risk level is assessed using a summary matrix involving a brief description of the risk assessment measurements, which are the assets, threats, and vulnerabilities. The assessment is deployed for each set of the three risk assessment measurements.

40 Excerpted from The Executive Guide to Information Security, Mark Egan, Symantec Press, Nov 2004, figure 5-1, Page 109
41 The Executive Guide to Information Security, Mark Egan, Symantec Press, Nov 2004, Page 104 – 110

However, it is not a quantitative method; this method results in imprecise assessment. The method should be more quantitative and provide more precise assessment so as to apply the information risk model in systems engineering. As the figure on the previous page shows, each risk assessment measurement may have certain assessment value determined by the system developer and stakeholder; the risk is a function of assets, threats, and vulnerabilities. The threat of the security problem, the vulnerability involved in the system and organization, and the assets in the system and organization should be assessed as opposed to only described.

The risk assessment process should be analytical, since information risk management must be a systematic process by which an organization identifies, reduces, and controls its potential risks and losses. At this point, the definition42 of each risk measurement is shown in the following table.

Threat: The capacity and intention of an adversary to undertake actions that are detrimental to an organization's interests. It cannot be controlled by the owner or user. The threat may be encouraged by vulnerability in an asset or discouraged by an owner's countermeasures.

Vulnerability: Any weakness in an asset or countermeasure that can be exploited by an adversary or competitor to cause damage to an organization's interests.

Asset: Anything of value (people, information, hardware, software, facilities, reputation, activities, and operations). The more critical the asset is to an organization accomplishing its mission, the greater the effect of its damage or destruction.

Table 4: Terms for Risk Measurement

42 National Infrastructure Protection Center; Risk Management: An Essential Guide to Protecting Critical Assets; November 2002, Page 8 – 9

A new risk assessment process suited to information risk management in systems engineering will be shown in this thesis. Each risk assessment measurement is determined as follows:

- The asset assessment: the magnitude and effect of the potential loss in systems and organization. (What is the likely effect if an identified asset is lost or harmed by one of the identified unwanted events?)
- The threat assessment: the probability of loss in systems and organization. (How likely is it that an adversary can and will attack those identified assets?)
- The vulnerability assessment: the magnitude of the exploitable situations. (What are the most likely vulnerabilities that the adversary will use to target the identified assets?)

Developers, including systems engineers, analysts, and security managers, should identify and evaluate the value of each risk assessment measurement. The magnitude is measured by verbal ratings such as high, middle, and low. The risk assessment steps are shown here:

Step 1. Asset Assessment: Identify and focus on confidential information involved in the organization and system process. The assets include customer information, business and technology know-how, government secret information, and home security information.

For each individual asset, identify undesirable events and the effect that the loss, damage, or destruction of that asset would have on the organization and system process.

Step 2. Threat Assessment: Focus on the adversaries or events that can affect the identified assets. Common types of adversaries include criminals, business competitors, hackers, and foreign intelligence services. Certain natural disasters and accidents are taken into account even though they are not intentional.

Step 3. Vulnerability Assessment: Identify and characterize vulnerabilities related to specific assets or undesirable events.

Look for exploitable situations created by lack of adequate security, personal behavior, lack of information management, maltreated privilege documents, and insufficient security procedures. Typical vulnerabilities include the absence of guards, poor access controls, lack of stringent process and software, and unscreened visitors in secure areas. Step 4. Risk Assessment: Combine and evaluate the former assessments in order to give a complete picture of the risk to an asset of confidential information in organization and system process.

The risk is assessed in terms of how each of these ratings (high, middle, low) interacts to arrive at a level of risk for each asset. The terms used in the rating may be imprecise. In situations where more precision is desired, a numerical rating on a 1 to 10 scale can be used. The numerical scale is easier for systems analysts and developers to replicate and combine in an assessment with other scales. How each risk assessment is evaluated has already been presented. For the next procedure, risk level for an asset will be required.

How can the risk level be assessed? For this question, the DOD43 (Department of Defense) standard definitions44 for the probability that an undesired event will occur and for the severity level are used, since these definitions have been adopted by many companies; moreover, the definition is the United States government standard and may be required for any government information system. The definitions are shown in the following tables.

A: Frequent - Likely to occur frequently
B: Probable - Will occur several times
C: Occasional - Likely to occur sometime
D: Remote - Unlikely but possible to occur
E: Improbable - So unlikely it can be assumed occurrence may not be experienced

Table 5: Probability Levels of an Undesired Event

I: Catastrophic - Death, system loss or severe environmental damage
II: Critical - Severe injury, severe occupational illness, major system or environmental damage
III: Marginal - Minor injury, minor occupational illness, or minor system or environmental damage
IV: Negligible - Less than minor injury, occupational illness, or less than minor system or environmental damage

Table 6: Severity Levels of Undesired Event Consequences

This process results in a matrix that pairs and ranks the most important assets with the threat scenarios most likely to occur. The risk level will be determined by the following matrix.

Severity level \ Probability of occurrence: A. Frequent, B. Probable, C. Occasional, D. Remote, E. Improbable
I. Catastrophic: IA, IB, IC, ID, IE
II. Critical: IIA, IIB, IIC, IID, IIE
III. Marginal: IIIA, IIIB, IIIC, IIID, IIIE
IV. Negligible: IVA, IVB, IVC, IVD, IVE

Risk Level 1: Unacceptable (reduce risk through countermeasures)
Risk Level 2: Undesirable (management decision required)
Risk Level 3: Acceptable with review by management
Risk Level 4: Acceptable without review

Table 7: Risk Assessment Matrix45

This is the risk assessment definition of the DOD, widely used by many companies.

43 Department of Defense home page, http://www.defenselink.mil/
44 Combating Terrorism: Threat and Risk Assessment Can Help Prioritize and Target Program Investments, GAO, April 1998, Page 7
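In code, the Table 7 lookup is a simple two-key mapping. The sketch below is a hypothetical Python rendering; the assignment of individual cells (IA through IVE) to the four risk levels is an assumed example for illustration, since the exact cell boundaries are defined by the DOD matrix itself.

```python
# Risk levels per Table 7:
#   1 = unacceptable, 2 = undesirable (management decision required),
#   3 = acceptable with review, 4 = acceptable without review.
# NOTE: the cell-to-level assignment below is assumed for illustration.
RISK_MATRIX = {
    "I":   {"A": 1, "B": 1, "C": 1, "D": 2, "E": 3},
    "II":  {"A": 1, "B": 1, "C": 2, "D": 3, "E": 4},
    "III": {"A": 2, "B": 3, "C": 3, "D": 4, "E": 4},
    "IV":  {"A": 3, "B": 4, "C": 4, "D": 4, "E": 4},
}

def risk_level(severity: str, probability: str) -> int:
    """Look up the risk level for a severity (I-IV) / probability (A-E) cell."""
    return RISK_MATRIX[severity][probability]
```

Under this assumed assignment, a frequent catastrophic event (cell IA) is unacceptable, while an improbable negligible one (cell IVE) is acceptable without review.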

The definition should be modified to suit information risk assessment. The probability of occurrence is useful for the information risk assessment as it stands. However, the definition of the severity level should be modified as follows, since the information risk assessment deals only with information risks such as leaking confidential information and privileged documents, and misuse of technical know-how and home security information.

I: Catastrophic - An enormous amount of secret information; severe potential for misuse, resulting in severe environmental damage
II: Critical - Secret information for a particular area or field; high potential for misuse in only a limited area; major system or environmental damage
III: Marginal - Confidential information; low potential for misuse; minor system or environmental damage
IV: Negligible - Less than minor and unclassified information; less than minor system or environmental damage

Table 8: Security Levels of Undesired Event for an Asset in Information Risk Assessment

45 Combating Terrorism: Threat and Risk Assessment Can Help Prioritize and Target Program Investments, GAO, April 1998, Page 8

To assess information risk, the security level will be used instead of the severity level. The risk assessment matrix can still be used for the information risk assessment, since it has been accepted by many companies and remains useful for this purpose. The probability of the unwanted event occurrence clearly increases with increasing threat and increasing vulnerability.

In this thesis, the simple formula for the probability over a given time interval is:

Threat * Vulnerability

Each assessment measurement of the threat and vulnerability is given a numerical rating (1 to 10). The threat and vulnerability ratings will be shown in the section on each assessment. The following matrix is used to determine the probability of the unwanted event occurrence from the numerical rating.

A. Frequent - 81 or more
B. Probable - 61 – 80
C. Occasional - 41 – 60
D. Remote - 21 – 40
E. Improbable - 20 or less

Table 9: Rating for the Probability of Occurrence (numerical rating for threat * vulnerability)

A security level rating corresponds to an asset rating for the confidential information in the organization and system process, based on the following matrix.

I: Catastrophic - 10
II: Critical - 7 – 9
III: Marginal - 4 – 6
IV: Negligible - 1 – 3

Table 10: Rating for the Security Level (numerical rating for asset)

Step 5. Identification of Countermeasure Options: Provide the risk acceptance authority with countermeasures, or groups of countermeasures, which will lower the overall risk to the asset to an acceptable level.
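Tables 9 and 10 above translate directly into a small scoring routine. The sketch below is a minimal Python rendering (the function names are chosen here, not taken from the source): the threat and vulnerability ratings are multiplied and bucketed into a probability level, and the asset rating is bucketed into a security level, giving the pair of coordinates used in the risk assessment matrix (Table 7).

```python
def probability_of_occurrence(threat: int, vulnerability: int) -> str:
    """Bucket threat * vulnerability (each rated 1-10) per Table 9."""
    score = threat * vulnerability
    if score >= 81:
        return "A"   # Frequent
    elif score >= 61:
        return "B"   # Probable
    elif score >= 41:
        return "C"   # Occasional
    elif score >= 21:
        return "D"   # Remote
    return "E"       # Improbable

def security_level(asset: int) -> str:
    """Bucket an asset rating (1-10) per Table 10."""
    if asset >= 10:
        return "I"    # Catastrophic
    elif asset >= 7:
        return "II"   # Critical
    elif asset >= 4:
        return "III"  # Marginal
    return "IV"       # Negligible

# Example: a highly exploitable asset of moderate value.
cell = (security_level(5), probability_of_occurrence(9, 10))
```

For threat 9, vulnerability 10, and asset 5, the cell is (III, A): the product 90 falls in the "81 or more" band of Table 9, and an asset rating of 5 falls in the 4 – 6 band of Table 10.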

By evaluating the effectiveness of possible countermeasures against specific adversaries, the systems engineer can determine the most cost-effective options. In presenting countermeasures to the risk acceptance authority, the systems engineer or security analyst should provide at least two countermeasure packages as options. Each option should also include the expected costs and the amount of risk that the decision-maker would accept by selecting that particular option. The graphical representation of the information risk assessment is shown as follows.

[Figure 7: Structure of Workflow for Information Policy Setting. The asset, threat, and vulnerability assessments feed the information risk assessment matrix; the information risk assessment then produces the countermeasure options and the information policy.]

In this report, information policy will be used for the security requirement of the case study. The information policy will be formatted based on the assessment of asset, threat, and vulnerability.


Ethical Issues with the Software Piracy Issue

Computer ethics deals with the moral responsibility of deciding what is right and wrong. According to the "Importance of Computer Ethics and Software Piracy" article, software piracy is copying, distributing, and using software or games without paying. Software piracy is a form of ethical issue that is hard to solve in society, especially among students of the Faculty of Computer Science, University of Indonesia. Based on the writer's observations on campus, students still use pirated software, including using, duplicating, and distributing it to their friends.

This attitude of course violates developers' intellectual property rights. The article also mentions intellectual property and the penalties for those who violate computer ethics laws. Intellectual property includes images, patents, procedures, videos, audio, and drawings. Those who violate someone's intellectual property face penalties ranging from hefty fines to extensive prison time. But even so, it seems the penalties remain unclear to students. Nowadays, information technology has grown widely and is used by almost everyone.

Computer technology, both hardware and software, has been widely recognized as intellectual property. The fast growth of technological innovation, especially software, is open to the public and can be easily accessed via the internet. The same holds for software piracy: serial keys, hacked versions, and many other workarounds are easily accessible and widely available. According to the "Ethical Issues in Software Piracy" article, a person should have a moral responsibility in using software. So users themselves should be aware of others' intellectual property.

W. D. Ross's "The Right and the Good" can serve as our guideline for demonstrating our moral responsibility toward software and its developers. Software piracy causes loss of revenue for developers and thus decreases their motivation to design new software. The impacts of software piracy explained above occur mostly because of human and economic factors. According to the article, software piracy occurs mostly in developing countries; because of their weaker economies (measured by GDP per capita), their people find it harder to purchase software.

Indonesia is a developing country, so it can be roughly concluded that Indonesian people find it hard to pay for software. On a smaller scale, Indonesian social classes have a representative number of technology users. Social standing is roughly divided into three levels: low, medium, and high class. In the Faculty of Computers and Society, students also vary in social or economic level. Some students have iPhones, Windows Phones, or tablets, but some do not. In general, there is no difference among those levels: every student uses technology. But the majority of students do not pay much attention to software piracy issues.

Those who have laptops may prefer using an unlicensed operating system to using an open source one. That is a form of software piracy: using without paying. In addition, the current state of our community is still far from the word "ethical". We do not yet respect others' property as well as we do our own. Point two of the "Kode Etik Mahasiswa Fasilkom" (the faculty's student code of ethics) states "…including appreciating intellectual property". Students of the Faculty of Computer Science already know about this rule. In practice, however, implementation falls short, and ignorance is a common habit.

Ignorance regarding unlicensed software causes software piracy. The majority tend to exercise neither attention nor self-control in using unlicensed software. Some may not know that it is unethical. Others may already know that what they are doing is wrong, but keep doing it simply because everybody in the community is doing it. Our community affects us. A student may be an example for his or her friends or community: he or she may use unlicensed software and be imitated by others. This habit of ignorance can damage the personal ethics embedded in our hearts.

Furthermore, we start believing that our wrongdoing is right. In analyzing software piracy, the writer thinks students should have awareness and moral responsibility. A developer may not know that his or her intellectual property is being used irresponsibly. Students of the Faculty of Computer Science should know how hard it is to make software. They should be aware of how much time is needed and how many resources are sacrificed by a developer to develop software. From the students' point of view, they need the software but do not want to give more when other people are not giving anything.

For example, an antivirus should be bought at some price, but some students find that there is a forever-renewable trial of the antivirus, so they do not have to pay. Following the economic principle of "gaining more with less effort", we do not want to sacrifice more than others. This has become a serious problem. As a conclusion, how can we overcome this issue? One answer is a professional standard: according to the article, the Association for Computing Machinery (ACM) states that any person who wants to join the ACM should accept the "Code of Ethics and Professional Conduct", which covers the ethical issues surrounding software piracy. The writer thinks we can do as the ACM does. The article entitled "The Rules" also states that a computing artifact, both software and hardware, has rules for both its developers and users, so that they will act ethically in developing or using software. It has seven rules governing what both developers and users may and may not do with the computing artifact. These rules should be well applied as a solution to the software piracy issue. Above all these solutions, the human factor is the main factor to which we should pay more attention.

References:
1. K. W. Miller, Moral Responsibility for Computing Artifacts: "The Rules". Illinois: IEEE, 2011.
2. Unknown. (2011). Kode Etik Mahasiswa Fakultas Ilmu Komputer Universitas Indonesia [Online]. Available: http://scele.cs.ui.ac.id/file.php/1434/Kode_Etik_Mhsw_Fasilkom.pdf
3. Thurlow, Max. Ethical Issues in Software Piracy [Online]. Available: http://www.ehow.com/list_6669954_ethical-issues-software-piracy.html
4. Boone, Kevin. Importance of Computer Ethics and Software Piracy [Online]. Available: http://www.ehow.com/facts_5766300_importance-computer-ethics-software-piracy.html


Open Source Software

The open source software: filled with innovation and vitality

1. Introduction

With the development of computers, software has become more powerful. Software can be divided into free and fee-based, and can also be classified as closed software or open source software. We want to understand the features of open source software, and to know why it can be filled with vitality and innovation, attracting many technologists to devote themselves to it.

This paper gives a deep analysis of open source software at nearly all levels. The essay mainly includes four parts. Firstly, it introduces the definitions of open innovation and the open source movement, explains what Linux is, and reviews the history and development of Linux. Secondly, it enumerates some widespread uses of open source software, taking Linux as an example to analyze the strengths of open and innovative source software. Thirdly, the author discusses the challenges and the future of open source and innovative software.

Finally, the author concludes on the value of open source and open innovation. After reading this essay, you should have a better and deeper understanding of the concepts of open source and open innovation, be ready to try open source software such as the Linux operating system, realize the great value of open source and open innovation, and also be aware of some of their challenges as well as their future.

2. Open innovation, open source, and the history of Unix and Linux

Henry Chesbrough, a professor and executive director, coined the term open innovation in his book Open Innovation: The New Imperative for Creating and Profiting from Technology, though the idea, and discussion of some of its consequences (especially inter-firm cooperation in R&D), date as far back as the 1960s (Chesbrough, 2003). With the development of technology and knowledge, the creation of new products has begun to face challenges. In order to create new value, we must establish extensive connections with the outside world and realize complementary advantages in knowledge dissemination and sharing to speed up internal innovation. In software, for example, companies such as SAP and Microsoft have started to build research labs at universities all over the world to improve the integration of outside-in innovation and create new commercial benefit. Even Apple, strong as it is in every way, has had to open up its proprietary technology to appeal to high-tech users.

There are some outstanding examples in the electronics industry: Philips' open innovation park, Xerox's Palo Alto Research Center, Siemens' open innovation program, and IBM's open source initiatives. Today, open innovation is being driven by many computer software suppliers at a strategic level. Nowadays, open source software, which calls for more creativity, can appeal to people. Thanks to open innovation, we can concentrate internal and external forces on developing creations and innovations.

The open source movement is a profound movement of individuals who support the use of open source licenses for some or all software. Open source software is made available for anybody to use or modify, as its source code is made available. Some open source software is based on a share-alike principle, whereby users are free to pass on the software subject to the rule that any enhancements or changes are just as freely available to the public, while other open source projects may be freely incorporated into any derivative work, open source or proprietary (Eu.conecta, 2011).

Open source software allows users to use some or even all of the software by granting them authorization. What is more, the source code is often also available to users, which makes it possible for them to read and modify it. Usually, any individual can change and modify the code and make it available to the public, and other users can download the code, read it, discuss it with the writer, and enhance it. By doing so, the source code can be optimized, and consequently the software becomes more powerful and more stable thanks to some users' creative ideas and critical thinking.

Open source is a profound revolution that takes advantage of users' participation (Eu.conecta, 2011). Nowadays the security of software is an increasing concern, and through open source, security problems can be tackled by a mass of programmers. As open innovation and the open source movement have developed, a vast number of professional, polished programs have appeared. Linux and Unix are examples: the Unix operating system, to which many cooperating programmers contributed in the 1970s, was the most successful program of its time able to run on many different computer devices.

In 1986 developers set out to build a free version of the Unix operating system. The project, called GNU (which stands for "GNU's Not Unix"), allowed programmers to contribute to the development effort regardless of individual or commercial interests. Most importantly, the resulting operating system is free for users. GNU is famous for its copyleft agreement, which includes four points. The first is that the software can be copied and distributed under the GNU license. The second is that products obtained and distributed under this license may be sold.

The third is that users can alter the source code, but if they distribute or publish their modified version, they must release it under the GNU license as well; without the GNU license, the modified source code may not be distributed or published, even though an individual is free to modify it. The fourth is that assistant technology may be developed for the open source software without itself being licensed under the GNU license, as long as it does not include the licensed core. The Linux kernel, created by a young student named Linus Torvalds, was not published until 1991.
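In practice, these copyleft terms are announced with a short notice at the top of each source file, which is how the license travels with the code. The sketch below follows the standard wording the Free Software Foundation recommends; the program name, year, and author are placeholders:

```python
# example-program -- a one-line description of what the program does
# Copyright (C) 2013  Example Author
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
```

Anyone who redistributes or modifies the file must keep a notice like this, which is what makes the third point above enforceable.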

He gave programmers his code so that they could revise and develop it. Linux grew into an advanced and powerful operating system because a mass of programmers analyzed the code and wrote improvements that Linus incorporated into Linux. As Linux improved rapidly, a series of versions was delivered to meet different needs.

3. The Strengths of Open Source and Innovation Software

In this section, the author discusses the wide use of open source and innovation software, its strengths, and its profound impact on modern society and on some traditional fee-based software.

It is hard to appreciate how different open source software is until we stop for a while and compare it with the software we are used to. Open source and innovation software have incomparable advantages that conventional paid software cannot match. The author takes Linux as an example to discuss these strengths and their impact. First, the collaborative nature of the open source movement allows smaller companies to take part in the global economy: smaller companies and individuals alike have the right to create, organize, and distribute the software.

It’s an equal opportunities for people around the world to participate in the movement. So the movement has attracted more and more involved. There are over 120,000 programmers all over the world who are distributed internationally and support Linux as a means of reducing the large companies’ technical domination (Ceraso, A. , & Pruchnic, J,2007) . It is computed that only 5-10 percent of code of the Linux kernel remains compiled by Linus Torvalds. The collaborative nature create the culture of sharing, which is pervasive in the programming project.

Programmers in these projects help each other and make progress together to complete the programming. Second, because the creation of open source software is not the work of a single organization, costs are reduced. The research and development of the Linux operating system was performed by volunteer labor worth about two billion dollars (Kusnetsky and Weiss, 1999), whereas companies like Microsoft spend about $80-100 million per year developing the Windows operating system. Technologists devote themselves to Linux programming code out of hobby or personal interest.

These programmers do not mind the money and time involved, so they can devote themselves to the work with professional responsibility. Individuals with a keen interest in coding and in software creation and distribution drive the development of open source software, which differs from proprietary software motivated by monetary gain. Some developers contribute to open source code to gain satisfaction and a reputation among other programmers; others hope for rewards such as good job offers or shares in programming ventures.

Third, a system administrator in open source development retains control over the risk of deploying the tool. Much like a corporate organization, Linux has a leadership structure. Linus Torvalds heads the Linux community as a respected manager who controls the progress of the programming, and his decisions are considered final. Torvalds appoints programmers to be responsible for managing specific parts of the project, and those programmers in turn guide other coordinators.

However, this leadership structure is suited only to the Linux kernel; it does not apply to programs such as system utilities. Fourth, a major advantage of open source code is the ability of a mass of diverse people to edit it and fix the problems and errors that occur. Programmers who improve open source software give meaningful feedback to the original programmer, and that feedback benefits the entire project. Because of it, open source software becomes ever more powerful, reliable, and high-quality.

Fifth, open source programs are divided among small teams of programmers that work independently to settle specific problems. Such parallel development makes it possible for 435 Linux projects to be under way at once (Sullivan, 2011). Parallel debugging improves the efficiency of individuals working on the project and feeds back fixes more quickly than traditional development: for example, when Linux was attacked through the TearDrop IP bug, Linux programmers repaired it in less than 24 hours (Sullivan, 2011). Sixth, open source software has the feature of long-term sustainability.

Unlike proprietary software, open source software cannot be driven out of business in the short term. It will continue to be developed as long as programmers keep sufficient interest and skill, and the user always has the option to bring the work in house, keep the software running, and continue its programming. The strengths of open source and innovation software are thus evident: better quality, higher reliability, more flexibility, lower cost, and an end to predatory vendor lock-in are its goals.

It is important to maintain the open source definition, which creates a trusted group connecting all users and developers. Because of these strengths, besides the Linux and Unix operating systems there are other excellent programs: Apache, a successful web server with scripting support on the web; Mozilla, an excellent web browser comparable to IE; and MySQL, a very popular database management system. These give people different experiences, meet a diversity of requirements, and have also been extremely successful.

4. Challenges of Open Source and Innovation Software

Even though open source and innovation software has achieved great success in many areas, challenges still lie ahead. One challenge is the quality of the software. Previous research has shown that the size of a software module bears a certain relationship to software defects. Some scholars hold, for example, that there is a U-shaped relationship between module size and defects; module size should therefore be kept moderate, since too large or too small a scale leads to more defects.

Despite criticism of these views, most scholars agree that module size should be controlled during development to ensure software quality. However, the above conclusions are based on non-open-source software and are not directly suitable for quality control of open source software, because in open source development modules are constantly changing: old modules are modified, new modules are added, and others are deleted as the software evolves.

Koru, Zhang, and Liu took Mozilla as an example to show that there is a relationship between module size and product defects in open source software, and their results also show that software quality is directly related to the scale of the software. Although on the surface a large number of experts audit open source software quality, in practice only a small, relatively fixed group of experts audits it periodically, and some software has no quality audit at all, which is one reason the quality of open source software is doubted.
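The size-defect relationship these studies examine can be made concrete with a simple defect-density calculation; the module names and figures below are invented purely for illustration:

```python
# Hypothetical modules: name -> (lines of code, defects found in an audit).
modules = {
    "parser":    (1200, 9),
    "scheduler": (300, 4),
    "ui":        (5000, 55),
}

# Defect density normalizes defect counts by size (defects per thousand
# lines, KLOC), which is what makes modules of different scales comparable.
densities = {name: defects / loc * 1000
             for name, (loc, defects) in modules.items()}
for name, d in sorted(densities.items()):
    print(f"{name}: {d:.1f} defects/KLOC")
```

Plotting such densities against module size is how a U-shaped (or monotone) relationship would be detected in practice.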

Another risk is the legal risk of commercial development using open source. First, the copyright status of open source software is often unclear. Because of its special development pattern, open source software often has a very complex origin: it draws on a massive pool of programming expertise all over the world. A given program may have a few, dozens, or hundreds of contributors, and large projects such as the Linux operating system need over one thousand (Langley, 2007).

But the participants come from a variety of backgrounds, so it is difficult to ensure that their code is free of problems. This confusion over intellectual property rights and the complexity of the situation leave much open source software at constant risk of infringement. For example, Linux was suspected of violating the copyright on Unix programming code belonging to the SCO company. As for infringement, open source licenses cannot provide any special terms or other promises ensuring that the programmers who contribute to a project do not violate the intellectual property of others.

Because these licenses provide no liability protection, commercial development of open source software carries risk. Second, there are other problems, such as infringement of patent and trademark rights, which also deserve attention.

5. Commercial Value and Trends of Open Source and Innovation Software

In this part, we discuss the economic value of open source software and predict its future. On the one hand, many commercial software companies pay close attention to the rapid development of open source software.

Thanks to the massive body of excellent open source projects and high-quality code, a business can save a great deal of cost if those resources can be used directly. A commercial software company can develop products based on open source code and flexibly decide whether to charge a fee when the software is used for commercial purposes. For individuals, the software remains free (commercial purposes excepted), while the company provides technical service and support.

This flexible business model is welcomed by more and more commercial software companies. On the other hand, the launch of Android has filled people with expectation for open source software. Because of the portability of open source software, the custom pattern of Linux is a breakthrough in the industry model: since the current Linux operating system has deficiencies in performance, stability, and its grasp of user needs, customized Linux can meet users' requirements. Having discussed the commercial value of open source software, we can now predict its trends.

In the future, the suppliers of open source software will cooperate more closely with one another. As the industry develops, the self-enclosed development model clearly no longer conforms to the trend of the times; the high requirements of future software push open source manufacturers to seek deeper cooperation, which gives open source suppliers an effective means to compete with large proprietary software vendors. Overall open source solutions are also easier than traditional solutions to deploy and maintain.

From hardware to software infrastructure, enterprise customers will come to know and experience the cost-effectiveness of overall open source solutions. Those who grasp this good opportunity in open source software will have a better future.

6. Conclusion

Software features are now more complex and the work of programming larger than ever. Open source software holds many advantages: its collaborative nature offers customizability and thereby promotes the adoption of its products, and its open innovation model is not proprietary, resulting in lower costs, among other benefits.

Because open source software is not proprietary, taking advantage of collaboration and customizability can lower costs and make the software system stronger. From the introduction above, the definitions of open innovation and the open source movement and the history of the Unix and Linux operating systems should now be clear. Deeper analysis shows that the reason open source software becomes more and more popular is its powerful strengths.

Just because of this, there are excellent programs such as Linux, Apache, and MySQL that give people different experiences and meet a diversity of requirements, and they have been extremely successful. For now the use of open source software in some areas is limited, but its value will be unearthed as the underlying technology and ideas mature. However, there are also challenges. One difficulty is guaranteeing the quality of open source software; another is the legal risk of commercial development using open source.

Notwithstanding the challenges, the quality of mass open source software could prove invaluable over time. From the author's perspective, driven by its portability, open source software will have far greater room for development, play a more important role in developed areas, and show unimagined value in undeveloped areas as well.

References

A. Carleton et al. (1992), "Software Measurement for DoD Systems: Recommendations for Initial Core Measures," Software Engineering Institute, CMU/SEI-92-TR-19.
B. Curtis, H. Krasner, and N. Iscoe (1988), "A Field Study of the Software Design Process for Large Systems," Communications of the ACM, vol. 31, no. 11, pp. 1268-1287.
B. Littlewood and D. Miller (1989), "Conceptual Modeling of Coincident Failures in Multi-Version Software," IEEE Transactions on Software Engineering, vol. 15, no. 12, pp. 1596-1614.
B. Perens (1999), "The Open Source Definition," in Open Sources: Voices from the Open Source Revolution, C. DiBona, S. Ockman, and M. Stone, Eds. Sebastopol, CA: O'Reilly, pp. 171-188.
Chesbrough, H. W. (2003). Open Innovation: The New Imperative for Creating and Profiting from Technology. Boston: Harvard Business School Press.
Definition of Open Source, Open Source Initiative. Retrieved.
Kusnetsky, Dan (IDC), and Greg Weiss (D. H. Brown) (1999), Linux E-Seminar.
M. Krochmal (1999), "Linux Interest Expanding," TechWeb, at http://www.techweb.com/wire/story/TWB19990521S0021
Norman Fenton (1994), "Software Measurement: A Necessary Scientific Basis," IEEE Transactions on Software Engineering, vol. 20, no. 3, pp. 199-206.
Pearce, J. M. (2012). "The Case for Open Source Appropriate Technology." Environment, Development and Sustainability 14 (3): pp. 425-431.
P. Vixie (1999), "Software Engineering," in Open Sources: Voices from the Open Source Revolution, C. DiBona, S. Ockman, and M. Stone, Eds. Sebastopol, CA: O'Reilly, pp. 91-100.
R. T. Fielding (1999), "Shared Leadership in the Apache Project," Communications of the ACM, vol. 42, no. 4, pp. 42-43.
Valloppillil, Vinod, and Josh Cohen (1998), Microsoft, "Linux OS Competitive Analysis," Halloween 2.
White, Walker (2000), "Observations, Considerations, and Directions," Oracle.
Frederick Brooks, The Mythical Man-Month.
Zhao, L., & Deek, F. P. (2004). "User Collaboration in Open Source Software Development." Electronic Markets 14 (2): p. 89.


Report on Two Software Programs with Business Application

Recommendation Report

In this report, you are going to find two software programs with business applications and compare them according to four well-defined criteria. You will then recommend one program over another on the basis of your comparison. Your report will be written to me – assume I'm your boss, we work together in a small company, and I've asked you to find the best program for our purposes. The choice of programs is up to you, but you must choose two programs that do roughly the same thing. Don't choose a tax program and a spreadsheet, for example.

You then need to think of the criteria you are going to use to make the comparison. Your choice of criteria is very important: this forms the basis for your comparison, and if you don't choose concrete, specific, and relevant criteria that allow you to make a detailed comparison of the two programs, your comparison is not going to be informative or meaningful. Also, one of the criteria you choose must be the cost of the program. This will, obviously, be a very simple point of comparison.

Your recommendation report will have three sections:

- The Introduction will give a short introduction to the two programs you've chosen to discuss – tell us the name of the software, who manufactures it, etc. You are also going to inform me as to the criteria you chose to use to make the comparison, and why you chose those criteria.
- The Findings section will compare the two programs according to cost and the three other criteria you've chosen. The format you choose for this section is up to you, but the comparison should be easy to process visually. This will be the longest section of your report.
- The Recommendation section will describe why one of the programs you have discussed in the Findings section is better than the other one. You need to make clear reference to what you've discovered in the Findings section in order to fully justify your recommendation.

Grading rubric:

| Section | Unsatisfactory | Needs Work | Satisfactory | Exemplary | Mark |
| --- | --- | --- | --- | --- | --- |
| Introduction | Introduction absent, or one component absent or completely inadequate. | Outline of programs too general; introduction of four criteria lacks specifics, no justification. | Clear and concise outline of two programs; clear, concise introduction of four criteria. | Clear, concise and detailed outline of two programs; clear, concise and detailed introduction of four criteria. | /10 |
| Findings | Many details absent or vague; criteria make meaningful comparison impossible; document is a mess. | Details are clearly absent or vague; criteria are flawed in a way that renders the comparison somewhat ineffective; organization detracts from ease of visual processing, parallel form mistakes. | Programs compared with a reasonable amount of detail; criteria relatively well-chosen; organization does not interfere with visual processing, parallel form used. | Programs compared fully according to cost and three other criteria; criteria are well-chosen and fully illuminate the two programs; organization allows for ease of visual processing, parallel form used. | /25 |
| Recommendation | Reasons not clear or detailed, section too short. | Gives general, somewhat vague reasons why one program was chosen and the other was not. | Gives clear and detailed reasons why one program was chosen and the other was not. | Gives specific, clear and detailed reasons why one program was chosen and the other was not. | /10 |
| Grammar and Style | Errors, major and minor, pervasive; subheadings not used. | Two or three major errors; more than five minor errors; subheadings used. | One or two major errors; three to five minor errors; subheadings used. | No major errors; one or two minor errors; subheadings used. | /15 |


Two Types of Computer Software

COMPUTER SOFTWARE

INTRODUCTION TO COMPUTER SOFTWARE

Computer software, or simply software, is any set of machine-readable instructions that directs a computer's processor to perform specific operations. One common way of describing hardware and software is to say that software can be thought of as the variable part of a computer and hardware as the invariable part. Hardware and software require each other; neither has any value without the other. Software is a general term: it can refer to all computer instructions in general or to any specific set of computer instructions.

It includes both machine instructions (binary code, which few humans can read) and source code (more human-understandable instructions that must be rendered into machine code by compilers or interpreters before being executed). On most computer platforms, software can be grouped into two broad categories: system software, the basic software needed for a computer to operate, and application software, all the software that uses the computer system to perform useful work beyond the operation of the computer itself.
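The division between the two categories can be seen even in a one-line job. In the sketch below (the file name is invented), the Python statements are application software doing the "useful work," while the buffering, file system, and disk driver behind the open() call are supplied by system software:

```python
import os
import tempfile

# Application layer: decide what to write and where.
path = os.path.join(tempfile.gettempdir(), "demo_app.txt")

# The operating system (system software) turns these calls into actual
# disk operations; the application never touches the hardware directly.
with open(path, "w") as f:
    f.write("application data\n")
with open(path) as f:
    print(f.read().strip())
```

The same separation holds for every application, from word processors to web browsers: they all request services from the OS rather than driving the hardware themselves.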

Software refers to one or more computer programs and the data held in the storage of the computer. In other words, software is a set of programs, procedures, and algorithms, together with their documentation, concerned with the operation of the function it implements, either by directly providing instructions to the digital electronics or by serving as input to another piece of software. The term was coined to contrast with the term hardware: unlike hardware, software cannot be touched. Software is also sometimes used in a narrower sense, meaning application software only.

Sometimes the term includes data that has not traditionally been associated with computers, such as film, tapes, and records. The main categories are system software (the operating system, often referred to simply as the OS), application software, and programming languages. Usually most of us interact with a computer using application software.

TYPES OF SOFTWARE

APPLICATION SOFTWARE

Application software includes a variety of programs that can be subdivided into general-purpose and function-specific application categories. A normal user rarely gets to see the operating system or work with it.

But all of us are familiar with application software, which we must use to interact with a computer. Application software is used to improve our ability to work, and different application and system software is used in daily life: productivity software, content software, assessment software, online software, drill-and-practice software, problem-solving software, tutorials, multimedia software, simulations, games, groupware, shareware, spyware, freeware, and so on. Some software is used to produce and create documents and presentations. Application software comes in four types: general-purpose software, custom software, commercial off-the-shelf (COTS) software, and open source software. General-purpose application programs perform common information processing jobs for end users. For example, word processing, spreadsheet, database management, and graphics programs are popular with microcomputer users for home, education, business, scientific, and many other purposes. Because they significantly increase the productivity of end users, they are sometimes known as productivity packages.

Other examples include web browsers, e-mail, and groupware, which help support communication among workgroups and teams. Custom software reflects an additional common way of classifying software: by how it was developed. The term identifies software applications that are developed within an organization for use by that organization. In other words, the organization that writes the program code is also the organization that uses the final software.

Software that is developed for a specific user or organization is custom software. Since it is built for a specific user, its specifications and features accord with that user's needs. Commercial off-the-shelf (COTS) software, by contrast, is developed with the intention of selling the software in multiple copies, usually for a profit. In this case, the organization that writes the software is not the intended target audience for its use. Several characteristics are important when describing COTS software.

As opposed to custom software, off-the-shelf software is standard software bought off the shelf. It has predefined specifications that may or may not cater to any specific user's requirements, and when you buy it, you agree to its license agreement. First, as stated in our definition, COTS software products are sold in many copies with minimal changes beyond scheduled upgrade releases. Purchasers of COTS software generally have no control over its specification, schedule, or evolution, and no access to either the source code or the internal documentation.

A COTS product is sold, leased, or licensed to the general public, but in virtually all cases the vendor of the product retains the intellectual property rights to the software. Custom software, in contrast, is generally owned by the organization that developed it, and the specifications, functionality, and ownership of the final product are controlled or retained by the developing organization. Open source software is the newest innovation in software development. In this approach, developers collaborate on the development of an application using programming standards that allow everyone to contribute to the software.

Furthermore, as each developer completes his or her project, the code for the application becomes available, free, to anyone else who wishes to use it. Open source software is available in source code form, and the rights to change, improve, and sometimes distribute its code are granted under a software license. Software developed by an individual or an organization where the source code is closed to the public (not openly available) is referred to as closed source software.

SYSTEM SOFTWARE

System software consists of programs that manage and support a computer system and its information processing activities.

For example, operating systems and network management programs serve as a vital software interface between computer networks and hardware on one side and the application programs of end users on the other. System software is the backbone of any computer: it consists of all the files and programs that make your computer operate as a computer. System software is provided automatically when you purchase a computer and is installed along with the operating system. Providers of system software include Microsoft (Windows) and Apple (Mac).

These offer regular updates, which can be installed for free as they become available. Examples of system software include assemblers, system utilities, tools, and debuggers. We can group system software into two major categories. The first is system management programs: programs that manage the hardware, software, network, and data resources of a computer system during the execution of users' various information processing jobs. Important examples are operating systems, network management programs, database management systems, and system utilities.

The second is system development programs: programs that help users develop information system programs and procedures and prepare user programs for computer processing. Major software development programs are programming language translators and editors, and a variety of CASE and other programming tools.

(Figure: types of software)

REFERENCES

O'Brien, J. A., & Marakas, G. M. (2011). Management Information Systems. New York: McGraw-Hill.
Wikipedia, the free encyclopedia.