Free Essays

Design and Development for a Charity Website

1. Specification of requirements for the web-site

1.1. Introduction

The charity organization has asked the manager to project-manage the design, development, and launch of the web-site; since the charity has no web specialists in-house, the specialist work will be outsourced. It is therefore the project manager’s sole responsibility to specify the functionality of the web-site so that it fulfils the requirements of the charity organization, event organizers, and donors. The manager must also draft a process schedule, or plan of the work lifecycle, showing the critical points at which the charity’s organizers will need to be involved in the web-development process. Further, the project manager is responsible for ensuring that a viable back-up solution is in place so that financial and other data can be recovered in case of loss or damage. In short, the task is to plan the initial stages of the project before any technical implementation, so that potential problems are foreseen early and mitigated with minimal losses and expenditure.

1.2. Research on identical web-projects

The project manager has researched similar charity web-sites, and the following strengths and useful features common to them stand out:

All charity websites must be visually engaging while staying only moderately interactive, so that people with visual or hearing impairments and learning difficulties can easily access and enjoy the content. Overly sophisticated and distracting features, such as sound, should therefore be omitted.
The homepage is a crucial part of the whole web-site: this single page can persuade an individual user or an entire funding organization to donate to the cause, allow fast navigation, and introduce the objective of the project. It is therefore clear from the outset that the precise range of target users must be identified first, so that the homepage is oriented towards them.
Social media support must be leveraged through seamless integration between the website and social media channels, so that users can share their experience on social networks (Facebook, LinkedIn); a forum could be implemented directly on the website, but this would involve extra financial and time costs.
The content management system might be adjusted to the needs of volunteers/supporters/beneficiaries through an intranet, for example by giving volunteers logins and personal profiles for sharing comments. A further feature would be to let volunteers take part in creating, updating, and complementing the web-site content by making suggestions and recommendations and telling their stories: this helps attract more volunteers and members.
Clients must also be catered for: they might take part in forum discussions and idea-sharing on acute topics, such as orphaned children or the homeless; users could also be allowed to publish their own artwork, drawings, or essays on the website, which, although costly in time and budget, may encourage more engagement and participation in the charity’s cause.
The website must follow the rule of cost-effectiveness: in other words, the website design and implementation investment will have to be paid off by the returns in the form of donations.
There must be an option for online donations and online fundraising through direct mail or special appeals.
A properly constructed funding application, especially for medium and large donors, is present on most sites.
Some websites even offer paid subscription memberships; sites can also allow all members, including users/volunteers/donors, to purchase the charity’s own merchandise and book training sessions for a fee (related to the charity’s causes, of course).
Search engine optimization is concerned with adjusting site content for search-engine (e.g. Google) indexation; the content must therefore be of good quality and easily found by relevance.
Some charities are advertised on Google platforms for free: those charity web-sites gain Google grants by submitting properly constructed grant applications to Google.
The content of the website must be lightweight and communicate the objective of the web-site precisely; moreover, it must be updated periodically with news and other notices.

Almost all of these features are considered important for a charity web-site and therefore can be implemented in the current project.

1.3. Identifying stakeholders

Web-project stakeholders are identified as follows:

Charity organization: the primary customer and stakeholder; the final decision is made by the charity management
Event organizers: volunteers and members of the web-site who will arrange fundraising events and post them on the web-site
Donors: users who will donate money either online or by filling out the application form
Web-designer and developer: the parties responsible for implementing the web-site

1.4. Setting up web-site requirements

Taking into account all stakeholders in the web-site build-up and the commonest features of existing charity web-sites, the following requirements have been set for this particular web-site:

Purpose

The primary purpose of the web-site is to communicate to all potential users the importance of helping fellow humans who suffer various hardships; this can be done through collaboration between the charity organization and event organizers, and by posting different events on the site’s homepage.

Look and feel

The homepage design must be visually engaging but clean and neat, without extra distractions: it is possible to include one rich Flash movie at the top of the page to add dynamics and greater interactivity. The homepage might be organized into four sections: a banner with the logo, navigation bar, login, sign-up, and search options; a dynamic main content section with updatable content and news; a side-bar section for advertisements of membership and merchandise and new and upcoming events; and a footer with information about the charity, contact details, and a site map.

Performance

For large funding institutions it will most likely be essential to be able to fill in a properly constructed funding application form and either submit it online, by uploading it back to the website, or send it by post. The navigation bar will offer donors the option to donate either online, using PayPal or credit/debit card services, or by submitting an application form. Individual donors are likely to pay straight away online; however, the site is mostly oriented towards large funding institutions.
Some event organizers will be able to post their ads and offer training or charity-related merchandise to donors in the side-bar of the pages through the CMS and personal logins.
Users should be able to navigate easily through the website and, if they wish to donate online, they should be able to specify the sum that they donate.
The charity organization can update or amend the website content through the CMS, with its own administrator login.

Functionality

The web project should have a content management system so that the administrator or the charity organization itself can constantly update and amend the content of existing pages, rearrange the site structure and reassemble the menu, monitor commenting in forums, control user registration, and administer the online shop: this can be done by means of an Extranet/Intranet and an administrator login.
The Extranet will also allow members (basic membership free; premium membership for a fee) to log in to their personal profiles and make comments, take part in the forum, or post their own works of art or writing to the website for public use or for small fees; the money from premium membership will go to charity causes.
All users who want to take part in the charity’s active social life or organize events for charity causes will first have to register with the website, submitting the following details:
Full name
Country of residence
Date of birth
Current address/post code
E-mail and telephone number
A particular charity cause they are interested in

This volunteer-type access will ensure that volunteers can also make minor amendments to their posted ads and events, and will regularly receive newsletters or alerts from the web-master.

Security

Security issues relate to web-site hacking and malicious software that may block the content of the website from showing, trigger alerts for users trying to access the site, suddenly decrease traffic, make malicious modifications to the web-site files, code, and root folder, and compromise the web-site content, even locking the administrator out and damaging or deleting business data, thereby leading to loss of business and of the site’s reputation. Web-sites featuring embedded blogs, forums, CMSs, or image galleries are particularly vulnerable to injections of hidden illicit content that is not always noticed at first sight. Different website security monitoring systems, such as WebDefender, guard against such incidents on websites and blogs. Moreover, many web-design and development agencies now offer hosting services that include technical support packages and security features already embedded in the system. In principle, security will be implemented through appropriate software that the hosting organization provides, alongside preventive measures the web-administrator takes to monitor the content flow and the files uploaded through the CMS.
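The preventive monitoring mentioned above can also be scripted in-house. As a rough sketch (the functions and file layout are illustrative, not part of WebDefender or any hosting package), the administrator could hash every file under the site root and compare snapshots to spot injected, modified, or deleted files:

```python
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(old: dict[str, str], new: dict[str, str]) -> dict[str, list[str]]:
    """Report files added, removed, or modified since the last snapshot."""
    return {
        "added": sorted(new.keys() - old.keys()),
        "removed": sorted(old.keys() - new.keys()),
        "modified": sorted(f for f in old.keys() & new.keys() if old[f] != new[f]),
    }
```

Run nightly, a non-empty "added" or "modified" list for files the CMS did not change is exactly the kind of hidden illicit content injection described above.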

2. Web-project lifecycle

2.1. Specifying the project execution phases

The web-site project will follow an iterative lifecycle, depicted in Figure 2.1. The advantages of an iterative lifecycle include greater interactivity and process control by the customer: one full cycle is completed first, and the customer then decides whether its product satisfies the requirements; if the charity management is dissatisfied with the finished product of one cycle, the entire cycle starts again until the web-site complies with all the wishes and requirements of the committee.

1. First meeting and analysis of the prerequisites: discussion of the site requirements and purpose with the charity management as well as with the web-designer and developer; arrangement of kick-off meetings and the means of communication throughout the project.

2. Preparing the proposal: specify the site requirements, together with the costs involved, in the project proposal, which is presented first to the managers and, once their agreement has been obtained, goes to the web-designer and developer.

3. Design: the web-designer produces front-end template(s) for the web-project, in collaboration with the prototype functionality of the website generated by the developer.

4. Content: the content is developed in collaboration with web-designers, managers, and interested event-organizers.

5. Design and content approval: the combined design and content are presented to the charity management/committee and passed on to the next stage upon approval.

6. Coding/developing phase: once the design and content are approved by the charity management and several important event-organizers, the developer builds design-consistent back-end of the site, using appropriate platform and commercially-viable framework. As a result the coding phase produces the dynamic content of the web-project.

7. Heavy web-application testing: different types of testing should take place after the completion of the design and coding processes, so as to ensure user-compatibility and loading/traffic resilience. Testing will most probably be done by software testing specialists, who will generate a report and sign off the web-site if it contains no bugs and complies with the above-mentioned requirements.

8. Final web-site approval meeting and presentation: charity organizers will have to approve of the final product and sign off the actual web-site completion phase.

9. Web-site promotion: official domain and hosting-service registration, together with search-engine submission; applying for an advertisement-space grant on Google.

10. Maintenance and updating: a continuous process that will have to be carried out systematically for web-site technical support and content management.

2.2. Gantt chart and schedule

For convenience, the project manager can construct his own schedule and Gantt chart of his contributions to the project. His schedule will not include project execution details and technical implementations, but a very broad picture of the basic project stages. The schedule and the Gantt chart are shown in the table and figure respectively. The working assumption is that the project starts on 1 March and proceeds until 20 April of the next year, thus taking roughly 14 months. Table 2.2 and Figure 2.2 give a visual representation of the time allocated to the web-project.

2.3. Approximate cost of the project

The costs presumably involved in the process are described in the following table (unpredicted costs and contingency expenditures are not taken into account):

3. Web-site back-up systems

3.1. Introduction

It is essential that the web-site project be backed up not only on the computer’s hard drive, which is prone to sudden damage, but also on other reliable media and in a remote location, so that if one location suffers flood, fire, or another emergency, the data is still safe. As the website contents will be dynamic, the updated contents should be backed up regularly as well.

3.2. Backing up online

There are different ways to back up the system, not least resorting to external parties to back up data online, so-called cloud or remote back-up services. Cloud servers are best exploited when there are few in-house computing resources to maintain the site regularly; companies such as Backup Technologies, Mozy, Trend Micro SafeSync, Norton, and M4 Systems run special software on their remote servers for the recovery and back-up of files, e-mails, and databases. However, there are security concerns with online back-up: although the risk is slight, the servers could be hacked over the network and the data damaged.
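Whichever provider is chosen, the job that prepares data for off-site storage can be simple. The sketch below (the directory layout and function name are hypothetical) archives the site content and records a checksum so that a restored copy can later be verified against the original:

```python
import hashlib
import tarfile
from pathlib import Path

def back_up(site_dir: str, backup_dir: str) -> tuple[Path, str]:
    """Compress site_dir into a .tar.gz in backup_dir and return the
    archive path with its SHA-256 checksum for later verification."""
    dest = Path(backup_dir)
    dest.mkdir(parents=True, exist_ok=True)
    archive = dest / (Path(site_dir).name + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(site_dir, arcname=Path(site_dir).name)
    checksum = hashlib.sha256(archive.read_bytes()).hexdigest()
    return archive, checksum
```

The checksum is stored alongside the archive; before restoring, the same hash is recomputed to confirm the copy was not corrupted in transit or in storage.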

3.3. Physical onsite backup measures

Another option is physical back-up such as tape drives. The only concern about tape back-up is its cost: tape drives are the most reliable media for backing up large volumes of data and can therefore cost up to £700 per drive.

Redundant Array of Inexpensive Disks (RAID) is another popular option for storing and backing up the web-site in-house, on internal servers. RAID systems nowadays can come already embedded in end-user hardware, although purchasing RAID externally for a charity office server allows a wider and more relevant choice. RAID has three substantial advantages over other back-up systems: redundancy (if one drive in the array gets damaged, it can easily be replaced without affecting the other disks, usually using a mirroring technique), increased performance (dependent on the RAID version used and the number of drives, usually RAID 0+1), and lower cost compared to tape drives (for the charity, the RAID chosen has 4 TB of storage capacity at a moderate cost of up to £500).
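The mirroring technique mentioned above can be illustrated with a toy sketch (in-memory "drives", not a real storage driver): every write goes to both drives, so data survives the failure of either one, and a replacement drive is rebuilt from its healthy mirror:

```python
class MirroredPair:
    """Toy RAID 1: every block is written to two drives; reads fall
    back to the surviving drive if one has failed."""

    def __init__(self):
        self.drives = [dict(), dict()]  # block number -> data
        self.failed = [False, False]

    def write(self, block: int, data: bytes) -> None:
        # Mirror the write to every healthy drive.
        for i, drive in enumerate(self.drives):
            if not self.failed[i]:
                drive[block] = data

    def read(self, block: int) -> bytes:
        # Serve the block from any healthy drive that holds it.
        for i, drive in enumerate(self.drives):
            if not self.failed[i] and block in drive:
                return drive[block]
        raise IOError("block lost: both mirrors unavailable")

    def fail_drive(self, i: int) -> None:
        self.failed[i] = True
        self.drives[i].clear()

    def replace_drive(self, i: int) -> None:
        # Rebuild the new drive from its healthy mirror.
        self.failed[i] = False
        self.drives[i] = dict(self.drives[1 - i])
```

Real RAID controllers do this at the block-device level in hardware or in the operating system, but the recovery logic is the same: replace the failed disk and resynchronize it from its mirror.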

3.4. Recommendation

For the current web-project it has been decided to use a RAID backup system, which will cost £500, together with online back-up at £30 a year. This solution is the most viable as it guards against data damage and loss both online and offline, creating a double fortification of the invaluable business content.




Network Design


I have been asked to research and compare two of the most widely used internet security protocols, Transport Layer Security (TLS) and Secure Shell (SSH). In this report I research both protocols and then compare the two, listing similarities and differences in how they operate as security protocols. I examine the features of both, giving advantages and disadvantages; examples are given for both security protocols along with any infrastructure needs.

As instructed, I will use varied sources for my research, including books, magazines and the internet; as with any report, I reference all of my sources of information.

Transport Layer Security

Today the need for network security is of the utmost importance. We would all like to think that data is transmitted securely, but what if it were not? Credit card crime, for example, would be a lot easier if there were no network security. This is one of many reasons why we need network security, and to achieve it we need protocols to secure the end-to-end transmission of data.

An earlier protocol widely used in the early 1990s was the Secure Sockets Layer (SSL) protocol. SSL was developed by Netscape but had some security flaws: it used weak algorithms and did not encrypt all of the information. Three versions of SSL were developed by Netscape, and after the third the Internet Engineering Task Force (IETF) was called in to develop an Internet-standard protocol. This protocol was called Transport Layer Security (TLS). Its main goal was to supply a means of securing connections over networks, including the internet.

How it works

The Transport Layer Security protocol uses cryptographic algorithms to encrypt information as it is sent over the network. The protocol comprises two main layers: the TLS Record protocol and the TLS Handshake protocol.

TLS Handshake Protocol

The TLS Handshake protocol is used, in principle, to agree a secret between the two applications before any data is sent. It works above the TLS Record protocol and exchanges its messages in a defined order. The most important feature here is that no application data is sent while the connection is being secured: the first transmission merely starts the handshake, and only once a secure connection has been achieved is data sent over the network.
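The idea of agreeing a secret without ever transmitting it is usually achieved with a key-exchange algorithm. A toy Diffie-Hellman sketch (with tiny numbers for readability, nothing like the parameter sizes real TLS negotiates) shows how both sides arrive at the same secret independently:

```python
# Publicly agreed parameters (toy sizes; real TLS uses 2048-bit
# groups or elliptic curves).
P = 23   # prime modulus
G = 5    # generator

def public_key(private: int) -> int:
    """Value that is safe to send over the open network."""
    return pow(G, private, P)

def shared_secret(their_public: int, my_private: int) -> int:
    """Secret each side computes locally; it never crosses the wire."""
    return pow(their_public, my_private, P)

# Each side picks a private value and sends only its public value.
a_private, b_private = 6, 15
a_public = public_key(a_private)
b_public = public_key(b_private)

# Both sides now compute the same secret independently.
secret_a = shared_secret(b_public, a_private)
secret_b = shared_secret(a_public, b_private)
```

An eavesdropper sees only `a_public` and `b_public`; recovering the private values from them is the discrete-logarithm problem, which is infeasible at real parameter sizes.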

TLS Record Protocol

The TLS Record protocol encrypts the data using cryptography, with a unique key for each connection received from the Handshake protocol. The Record protocol may be used with or without encryption. The encrypted data is then passed down to the Transmission Control Protocol (TCP) layer for transport. The Record protocol also adds a Message Authentication Code (MAC) to outgoing data and verifies incoming data using the MAC. I have used the image below to show how this is achieved.
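The MAC step can be sketched with Python's standard hmac module (the key shown is an illustrative placeholder; real TLS derives its MAC keys during the handshake): the sender appends a tag to the outgoing data, and the receiver recomputes it to detect tampering:

```python
import hashlib
import hmac

KEY = b"key-derived-during-handshake"  # illustrative placeholder

def protect(message: bytes) -> tuple[bytes, bytes]:
    """Return the message with its authentication tag."""
    tag = hmac.new(KEY, message, hashlib.sha256).digest()
    return message, tag

def verify(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

If an attacker modifies the message in transit, the recomputed tag no longer matches and the record is rejected, which is exactly the integrity guarantee the Record protocol provides.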

Where TLS is used

The Transport Layer Security protocol normally runs above the transport layer, that is, above Open Systems Interconnection (OSI) layer 4. It most commonly sits on top of the Transmission Control Protocol (TCP) and secures application protocols such as the Hypertext Transfer Protocol (HTTP) and the File Transfer Protocol (FTP).

Its main area of use is the internet, in applications that need end-to-end security. Such data is usually carried by HTTP, which with TLS becomes HTTPS. TLS is therefore used to secure connections with e-commerce sites; VoIP also uses TLS to secure its data transmissions. “TLS and SSL are most widely recognized as the protocols that provide secure HTTP (HTTPS) for Internet transactions between Web browsers and Web servers.” (Microsoft, 2011)

The Transport Layer Security protocol is also used in setting up Virtual Private Networks (VPNs), where end-to-end security is a must, but again it is used alongside other protocols.

How Secure Is It?

Secure Shell

The Secure Shell (SSH) is used for safe remote access to machines across an untrusted network, and is widely used in network security. The need for such protocols is paramount in today’s technology-based world. In the modern office, for example, employees may wish to transfer files to their home computers for completion; this would be unwise were it not for security protocols, as a man-in-the-middle attack could take place by listening to traffic on the network and picking up company or personal secrets.

How it works

The Secure Shell establishes a channel for executing a shell on a remote machine, with encryption at both ends of the connection. The most important aspects of SSH are that it authenticates the connection, encrypts the data, and ensures that the data received is the data that was sent.
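Two of those guarantees, confidentiality and the assurance that the received data matches what was sent, can be illustrated with a toy channel (a hash-derived keystream plus an HMAC tag; real SSH negotiates vetted ciphers such as AES and its own MAC algorithms, so this is purely conceptual):

```python
import hashlib
import hmac

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt with the keystream, then tag the ciphertext (encrypt-then-MAC)."""
    cipher = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))
    tag = hmac.new(key, cipher, hashlib.sha256).digest()
    return cipher, tag

def open_sealed(key: bytes, cipher: bytes, tag: bytes) -> bytes:
    """Check the tag first; only then decrypt."""
    if not hmac.compare_digest(hmac.new(key, cipher, hashlib.sha256).digest(), tag):
        raise ValueError("message was tampered with in transit")
    return bytes(c ^ k for c, k in zip(cipher, keystream(key, len(cipher))))
```

An eavesdropper on the network sees only the ciphertext and tag, and any modification of the ciphertext is rejected before decryption, which mirrors how an SSH channel protects the office-to-home file transfer described above.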


TLS protocol (2011, 23 March). Retrieved 23 March 2011, from Wikipedia.

Microsoft (2011, 23 March). What is TLS. Retrieved 23 March 2011, from Microsoft TechNet.


Investigate the various types of assessment, and how they impact the Design & Technology classroom


In this essay I explore the contribution that assessment makes towards learning. I investigate the various types of assessment, and what impact they have on the learner, drawing upon my own experience in the Design Technology classroom. In particular, I review the summative and formative ways of assessing and conclude that formative assessment is more beneficial to the learner, as the feedback given through this process helps them gain new knowledge and skills to inform their learning. Conversely, summative assessment can sometimes cause problems within the classroom as children try to ‘be the best’. To bring the essay to a close, I discuss ideas for the future regarding assessment in Design Technology and what I think should happen.

The term ‘assessment’ “is how pupils recognise achievement and make progress, and how teachers shape and personalise their teaching.” (QCA, 2009) In the past assessment was “seen as something distinct from learning;” (Chater, 1984, p4) contrasting this view in a recent review on assessment Daugherty (2002) found it to be:

One of the most powerful educational tools for promoting effective learning… the focus needs to be on helping teachers use assessment, as part of teaching and learning, in ways that will raise pupils’ achievement. (Daugherty, 2002)

Daugherty, a member of the Assessment Reform Group, raises a well-founded point: he is well researched in assessment, informs government policy, and also works closely with teachers and local education authority staff to advance understanding of the roles, purposes and impacts of assessment. Teachers’ planning should include strategies to ensure that learners understand the goals they are pursuing and the criteria that will be applied in assessing their work.

OFSTED reports can often be seen as biased and their independence questioned, OFSTED having been dubbed the “Government’s ‘poodle’ during a Commons committee hearing” (Stewart, 2009) and its inspections seen as an “instrument of state control” forcing teachers to follow politicians’ agendas (Shaw, 2009). Nevertheless, this report raises good points to be considered by teachers who strive to use assessment in their teaching, hence the citation.

This type of on-going assessment described in the report is known as formative assessment. It is common for assessment to be divided into either formative or summative categories for the purpose of considering different objectives for assessment practices, although they can overlap. Summative assessment is generally carried out at the end of a course or project. In Design Technology, summative assessments are typically used to assign students an end of topic grade. Formative assessment is generally carried out throughout a course or project and is used to aid learning.

Summative assessment is the assessment of learning; in Design Technology it provides evidence of student achievement for reporting and accountability purposes. Its main purpose is to make judgements about performance. An example is the norm-referenced test (NRT), which classifies students. NRTs draw attention to achievement differences among students to produce a dependable rank order across a continuum from high achievers to low achievers (Stiggins, 1994). Schools use this system to place pupils in ability groups, including Gifted and Talented. However, it is argued that “Assessment should be a powerful tool for learning, not merely a political solution to perceived problems over standards and accountability.” (ATL, 1996) This reinforces Daugherty’s idea, as it perceives assessment as a tool, a work in progress (formative assessment), not an end product (summative assessment).

Formative assessment is assessment for learning; in Design Technology it helps to inform the teaching and learning process by identifying students’ strengths and weaknesses. Its main purpose is to gather information.

Diagnostic assessment, which helps to identify specific learning strengths and needs, can fall into both categories. It determines learning targets and appropriate teaching and learning strategies to achieve them. This is important because:

Many learners have higher-level skills in some areas than in others. Diagnostic assessment happens initially at the beginning of a learning programme and subsequently when the need arises. (QIA, 2008)

Therefore it can be summative, as it results in a grade and the student is placed in an ability group based on what they already know. However, this “information is used to make links to progression routes and prepare for the next steps” (QIA, 2008); it thus becomes formative, as students discover the gaps in their knowledge and learn how to fill them.

A type of formative assessment is a criterion-referenced test which determines, “what test takers can do and what they know, not how they compare to others.” (Anastasi, 1988, p102) Assessment for Learning ensures that pupils understand what they can do, but are also informed how to improve on what they find difficult, and what type of learning process they must take to achieve this.

This formative assessment:

Forms the direction of future learning and so the requirement of formative assessment is that the feedback given back to the learner helps the learner improve, but more importantly that the learner actually uses that information to improve. (Marshall, 2002, p48)

Feedback for learning in Design Technology is vital. The teacher will take pleasure in rewarding students with praise; however, there is more valuable feedback that students should receive, as Black & Wiliam found:

Pupils look for the ways to obtain the best marks rather than at the needs of their learning which these marks ought to reflect… They spend time and energy looking for clues to the ‘right answer’. (Black & Wiliam, 1998)

In Design Technology, a subject in which there is seldom a ‘right answer’, it is essential that “we focus on promoting learning instead of encouraging students to seek the easiest way to get the best results.” (Branson, 2005, p76) This indicates that summative assessment can prevent students from reaching their full potential through learning: wanting to be the best in the class, they rote-learn and are ‘taught to the test’ to achieve a top grade. This could mean that the student is not learning but memorising facts for the test, and once the test is over they will not retain much of the knowledge. Nevertheless, summative results could be used as part of a formative assessment (Black & Wiliam, 1998) if the correct feedback were given instead of just a grade.

This feedback will only be effective if the quality of teacher-pupil interaction is high and provides, “the stimulus and help for pupils to take active responsibility for their own learning.”(Black & Wiliam, 1998) To create effective feedback we must “teach less and talk about learning more.” (Branson, 2005, p77) This is known as meta-learning which draws upon goals, strategies, effects, feelings and context of learning, each of which has significant personal and social dimensions:

Those who are advanced in meta-learning realise that what is learned (the outcome or the result) and how it is learned (the act or the process) are two inseparable aspects of learning. (Watkins, 2001)

If students practise these skills they will be able to evaluate work successfully and apply their assessment criteria to their own and their peers’ work. Through this greater understanding of their own learning, students will have the “ability of the performance” (Marshall, 2002, p57) and be able to apply the knowledge and strategies they have acquired to various contexts, transferring their skills to suit the situation.

Good day-to-day indications of students’ progress are tasks and questions that prompt learners to show their knowledge, skills and understanding. What learners say and do is then observed and interpreted, by teacher and peers, and judgements are made about how learning can be improved. These assessment processes are an important part of everyday classroom practice and involve both teachers and learners in reflection when talking about new targets. The questions posed should be open-ended, allowing the student to fully express themselves and ensuring that they will not ‘lose face,’ as there is not a right or wrong answer. If a student finds answering a question difficult, a peer can step in and help, which can have a positive effect on the class as there are “things that students will take from each other that they won’t take from a teacher.” (Marshall, 2002, p48) In turn, peer assessment helps develop self-assessment which promotes independent learning, helping children to take increasing responsibility for their own progress.

An example of good practice I have seen in a Design Technology classroom is ‘PEN marking’ (Positive, Error, Next Time), in which students pen-mark their own work and assess each other’s work, looking for two good aspects of the piece and one improvement. Because the students are praising each other, they are not scared to suggest an improvement; and through assessing their peers’ work, they also find ways to improve their own. This is subjective, as it is my own opinion, but it does relate to Marshall’s theory that students will take from each other what they would not from a teacher: several ‘wishes’ from the students sounded harsh, but I found that in their next piece of work they had tried harder. However, the work might also have improved had the teacher said it, so this theory is not infallible.

The OFSTED report states that:

Many pupils were still not clear about what their strengths and weaknesses were or how they might improve. (OFSTED, 2009, p14)

Assessment for Learning states that for effective learning to take place students need to understand what they are trying to achieve, and want to achieve it. Understanding and commitment follow when they have a part in deciding goals and identifying criteria for assessing progress. Communicating assessment criteria involves discussing them with the students in terms they can understand, providing examples of how the criteria can be met in practice, and engaging learners in peer and self-assessment.

I think the problem of pupils not being clear about their strengths and weaknesses can be solved with the introduction of Assessing Pupils’ Progress (APP) into schools. The school where I am doing my placement is using the APP process for the first time this year and so far is finding it successful. APP is a ‘systematic approach to periodic assessment that provides diagnostic information about individual pupils’ progress and management information about the attainment and progress of groups’ (DfCSF, 2008). A key purpose of APP is to inform and strengthen planning, teaching and learning. This aspect of APP can have a direct and positive impact on raising standards, and can assist in the personalisation of learning.

Based on the assessment focuses (AFs) that underpin National Curriculum assessment, the APP approach improves the quality and reliability of teacher assessment. My school has simplified the APP focuses and levels into 'pupil speak' so the students can fully understand the concept and purpose. All students in KS3 are now fully aware that they will have an APP assessment in Design Technology at the end of every half term, based on the scheme of work studied over that half term. For example, the last assessment was to write a character description, the scheme studied being fiction. The Design Technology teacher expects every child to attain two sub-levels a year, and the students are aware of this. Before completing the final assessment, the students assessed a character description supplied by the teacher, using the same AFs on which they were going to be assessed. This allowed them to see exactly what they had to do to achieve a Level 5; as one pupil pointed out, "Even though they've put their ideas together in order Miss, they haven't used paragraphs so they can't get a Level 5 for AF3". This process of evaluation helps the students progress, as they can see clearly what they have to do to improve.

Ultimately, I think that assessment has a huge impact on pupils' learning; with well-focused feedback, including thorough marking that identifies clear targets, students can progress and become independent learners, a foundation for their independent life. I think that APP alongside Assessment for Learning is a good way for the student and the teacher to gauge progress, as the objectives are clear and the ways to achieve them are made obvious through 'pupil speak'. This does not mean that I think summative assessment is incorrect; I echo the thoughts of Black & Wiliam (1998) that if a summative assessment is used to inform the student for progression then it can have a positive effect. When I start my NQT year, I hope to be employed in a school that uses APP, and if not I will try to implement it, as I think it benefits students as much as it does the teacher.


Anastasi, A. (1988). Psychological Testing. New York, New York: MacMillan Publishing Company

Association of Teachers and Lecturers. (1996). Doing our Level Best.

Black, P. and Wiliam, D. (1998) Inside the Black Box: Raising Standards through Classroom Assessment. King's College London. [Online] Available from: [Accessed 20th October 2009]

Branson, J. (2005) ‘Assessment, recording and reporting’. In: Goodwyn, A & Branson, J. (eds). Teaching English: A Handbook for Primary and Secondary School Teachers. London: Routledge.

Chater, P. (1984) Marking & Assessment in English. London: Methuen & Co Ltd.

Daugherty R. (2002) Assessing for learning insides. [Online] 2002. Available from: [Accessed 21st October 2009]

DfCSF. (2008) Assessing Pupils' Progress (APP) in English. [Online] Aug 2008. Available from: [Accessed 21st October 2009]

Marshall, B. (2002) 'Thinking through Assessment: An Interview with Dylan Wiliam'. English in Education, 36 (3) pp. 47-60.

OFSTED. (2009) English at the crossroads. London: Her Majesty's Stationery Office.

QCA. (2005) A national conversation on the future of English. [Online]. 2005. Available from: [Accessed 21st October 2009]

QCA. (2009) Assessment key principles- National Curriculum. [Online]. June 2009. Available from: [Accessed: 20th October 2009]

QIA. (2008) Initial and diagnostic assessment: a learner- centred process. [Online] 2008. Available from [Accessed 21st October 2009]

Scriven, M. (1991). Evaluation thesaurus. 4th ed. Newbury Park, CA: Sage Publications.

Shaw, M. (2009) 'Ofsted inspections are means of state control'. Times Educational Supplement, 15 March. p.7

Stiggins, R.J. (1994). Student-Centered Classroom Assessment. New York: Merrill.

Watkins, C. (2001) ‘Learning about Learning Enhances Performance’ in National School Improvement Network Research Matters 13, London: Institute of Education.

William, S. (2009) ‘Ofsted accused of being ministerial ‘poodle’ over school report cards’. Times Educational Supplement, 10 July. p.33


Managing the successful design process of HVAC systems


A good HVAC system design plays a critical role in creating an optimal building environment. The design process of an HVAC system is a complex one, involving the client's needs, building regulation compliance, energy efficiency, environmental impact and sustainability. Many professionals from distinct disciplines, such as architects and structural and services engineers, are involved in a building construction project alongside the client. The design process involves constant communication and clarification between the different team members. By working together at key points in the design process, participants can often identify highly attractive solutions to design needs that would otherwise not be found (1). The effectiveness of the design process in the building industry has a great influence on the success of subsequent processes in the construction of projects and also on the quality of the environment (2). Several studies have also pointed out that a large percentage of defects in buildings arise through decisions or actions taken in the design stages (3). It is also said that poor design has a very strong impact on the level of efficiency during the production stage (4). In recent years, the increasing complexity of modern buildings in a very competitive marketplace has significantly increased the pressure to improve the performance of the design process in terms of time and quality. Despite its importance, relatively little research has been done on the management of the design process, in contrast to the research time and effort devoted to production and project management (5). This essay concentrates on various issues related to the management of a successful HVAC system design process and puts forward arguments to reflect the above.

History of HVAC System

HVAC is an acronym for Heating, Ventilation and Air Conditioning. HVAC is based on the principles of thermodynamics and heat transfer, and the functions of heating, ventilation and air conditioning are interrelated. HVAC systems provide thermal comfort and acceptable indoor air quality. Like many great innovations, the earliest heating and plumbing systems originated with the Romans. A hypocaust (6) was an ancient Roman system of central underfloor heating, used to heat public baths and private houses. The English historian Edward Gibbon mentions "stupendous aqueducts" when describing the building of public baths in The History of the Decline and Fall of the Roman Empire (7). The Romans built aqueducts that carried water for many miles in order to provide a crowded urban population with relatively safe, potable water. In modern buildings the design, installation and control systems for these functions are integrated into HVAC systems.

Design Process and Management

Throughout the history of mankind, people have always designed things; it is human nature. A new system may take years to design, yet the thing itself can be made in a matter of hours. When designing something, drawing is widely used as the most readily understood form of communication. Designers sit down and brainstorm a lot of ideas, discarding most of them until a suitable one is found for investigation at a more detailed level, enabling the best to be chosen (8).

In the past the HVAC system was given low priority in terms of design, on the basis of sub-optimal considerations such as preferences for certain types of system, equipment budget, or space constraints imposed by architects (9). Design and construction were carried out by two different parties: designers used to design the system and walk away, while contractors carried out the HVAC installation and commissioned the systems. Poorly designed HVAC systems pose health hazards and discomfort to building occupants. The emergence of "sick building syndrome" led to the realisation that the HVAC system itself acts as a breeding and concentration site for pathogens and allergens (9). The duct network of a central air handling system also poses a fire hazard, as ducts are ideal paths for fire, smoke and explosive gases. They are also a prime target for terrorists seeking to release chemical or biological agents into a building.

Nowadays, traditional professional practices are being replaced by multi-disciplinary practices. A fresh approach is needed to the planning and co-ordination of multi-disciplinary buildings and their system designs, to facilitate integration and communication across the disciplines (10). It has been pointed out that poor communication, a lack of documentation, missing input information, a lack of co-ordination between disciplines and erratic decision-making are the main problems in design management (3). Designers try to achieve satisfactory or appropriate solutions. Design tends to take place through a series of stages during which design components are continually trialled, tested, evaluated and refined; therefore, most design processes involve much feedback between the different individuals employed to design the system. Defects in a finished building often reflect a failure to communicate, during the design process, technological factors that have been known and accepted for many years. The problems caused seem to be due more to deficiencies in managing communication during the design process than to technology failure (3).

HVAC systems for modern buildings must be fully optimised. The comfort, health and safety functions required for each area in the facility must be executed perfectly. A well-performing HVAC system also makes economic sense: optimised HVAC systems reduce capital cost and equipment space, and they provide the comfort and health that increase productivity (9). Life-cycle cost is greatly reduced because optimised systems operate with the least possible energy.
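The life-cycle-cost argument above can be made concrete with a simple, undiscounted comparison. The figures below are invented for illustration only; a real appraisal would also discount future energy costs and include maintenance.

```python
# Illustrative life-cycle cost comparison (all figures are invented):
# a system with a higher capital cost but lower annual energy use can
# still cost less over its life, which is the argument made above.

def life_cycle_cost(capital: float, annual_energy: float, years: int) -> float:
    """Simple undiscounted life-cycle cost: capital plus energy over the life."""
    return capital + annual_energy * years

standard = life_cycle_cost(capital=50_000, annual_energy=12_000, years=20)
optimised = life_cycle_cost(capital=60_000, annual_energy=8_000, years=20)
# standard: 290,000; optimised: 220,000 -> the optimised system wins
# over 20 years despite its higher capital cost
```

A fuller model would replace `annual_energy * years` with a net-present-value sum, but the comparison direction is the same.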

In recent years, the HVAC industry has been under immense pressure to reduce the energy consumed by HVAC plant and increase energy efficiency, to conserve fossil fuels and reduce carbon emissions. HVAC systems in typical commercial buildings are responsible for more than 40 percent of total energy consumption (11). Low and zero carbon technologies can be integrated into HVAC systems to achieve sustainability. Properly designed HVAC systems run at peak efficiency, minimising energy use without compromising thermal comfort or indoor air quality. This requires an integrated design approach; it has been pointed out that by adopting such a design, high performance with multiple benefits can be achieved at a total cost lower than that of all the components used in the project (1). The design process needs to be well planned and controlled in order to minimise the effects of complexity and uncertainty (12). As the involvement of interdependent professional disciplines with concurrent design processes is now the norm, a well-managed design process with effective communication is the key to minimising errors that could lead to defective buildings or systems in the future (3). A successful HVAC design process involves interactive effort, co-ordination and project programming.


HVAC systems play an important role in keeping a building comfortable. Designing them involves working with a team of professionals from various disciplines. Design errors should be prevented or identified during the design process, because they are costly in time, rework, money and lost reputations. Effective design of sustainable HVAC systems is needed to make buildings viable in the current climate of very high energy costs. Managing the HVAC design process successfully saves considerable time and money, as well as delivering projects on time and within budget.


1. Building Design. Federal Energy Management Program. [Online] US Department of Energy. [Cited: 17 April 2011.]

2. Formoso, C. T., Tzotzopoulos, R., Jobim, M. S. A Protocol for Managing the Design Process in the Building Industry. London : Spon, 1999.

3. Cornik, T. Quality Management for Building. Rushden : Butterworth, 1991. p. 218.

4. Ferguson, I. Buildability in Practice. London : Mitchell, 1989. p. 175.

5. Austin, S., Baldwin, A. and Newton, A. Manipulating the Flow of Design Information to Improve the Programming of Building Design. London : Spon, 1994, Construction Management and Economics, Vol. 12 (5), pp. 445-455.

6. Fagan, Garrett G. Bathing in Public in the Roman World. Michigan : The University of Michigan Press, 2002. pp. 56-66.

7. Gibbon, Edward. The History of the Decline and Fall of the Roman Empire. London : Strahan & Cadell, 1837. p. 433. Vol. 1, chapter XXXI.

8. Cross, N. Engineering Design Methods: Strategies for Product Design. Third Edition. London : John Wiley and Sons Limited, 1994.

9. Donald R. Wulfinghoff, P.E. The Future of HVAC. Energy books. [Online] [Cited: 19 April 2011.]

10. Austin, S., Baldwin, A. and Newton, A. A data flow model to plan and manage the building design process. Journal of Engineering Design, Vol. 7, No. 1, 1996, pp. 3-25.

11. Energy Consumption in the United Kingdom. DECC. [Online] [Cited: April 15 2011.]

12. Pennycook, K. Design Checks for HVAC. Second edition. s.l. : BSRIA, 2007.


Report analysis and design solutions for integration of enterprises information systems


This report analyses and designs solutions for the integration of enterprise information systems based on the business case. In addition, it develops the key functions as part of the enterprise system for the given business case in SAP. Drawing on the case information, relevant concepts and approaches, academic literature and practical cases, the report analyses the issues and identifies suitable approaches for extended enterprise integration. Next, it explains the system functionalities, with the necessary illustrations, to design solutions to the identified issues using the selected enterprise integration approaches. Finally, there is a discussion and conclusion on implementation issues, achievements and limitations of the solutions.

1: A Description of the Business Case

1.1: Background of Business Case

The company is a medium-sized manufacturer which supplies car control panels with frames, over 2000 different products, for five car manufacturers. Based on different models of frames, the company produces various types of car control panels for the five contracted manufacturers. In order to improve customer satisfaction and respond quickly to fluctuations in customer demand, the company aims to become a relatively lean manufacturer and to reduce its delivery lead-time.

In production, the company assembles the control panels from inside components and frames as customers require. It simply purchases all the parts from a range of suppliers, including overseas suppliers. There are six bought-in components: vents, frame, glove box, meters/gauges, steering, and heater and air-conditioning control. In addition, in order to safeguard its supply chain, the company may purchase each component from more than one supplier.

In its business model, the company receives orders from each customer once a week through EDI, and the orders are input into a sales management system automatically. Stock is checked when the sales order is confirmed by the account manager. If there is enough stock, the order goes straight to production; if not, the company must place a purchase order for the required components to complete the sales order. The company receives ordered components on a daily basis; when components arrive, they are scanned into a stock management system by the duty purchasing manager, which updates the component stock records. During production, the production manager first checks customer orders and creates a schedule for the assembly line. The schedules are recorded in a spreadsheet, which is also used to calculate stock values. The same process happens at the end of assembly, and the stock level is updated accordingly. Finally, the finished products are packaged and shipped to the customers. In general, the company offers a four-week lead-time for domestic customers and a six-week lead-time for overseas customers.
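The stock-check decision at the heart of this workflow can be sketched as follows. This is a minimal illustration of the logic described above, not the company's actual system; all class, field and component names are assumptions.

```python
# Minimal sketch of the weekly order-processing decision described above:
# if stock covers the confirmed order, produce directly; otherwise raise
# a purchase order for the missing components first. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class Warehouse:
    stock: dict = field(default_factory=dict)  # component -> quantity on hand

    def shortages(self, requirements: dict) -> dict:
        """Return the components (and quantities) missing for an order."""
        return {c: q - self.stock.get(c, 0)
                for c, q in requirements.items()
                if self.stock.get(c, 0) < q}

def process_order(warehouse: Warehouse, requirements: dict):
    """Produce directly if stock suffices; otherwise raise a purchase order."""
    missing = warehouse.shortages(requirements)
    if not missing:
        for c, q in requirements.items():
            warehouse.stock[c] -= q          # consume stock for assembly
        return ("produce", {})
    return ("purchase", missing)             # purchase order for the shortfall

wh = Warehouse(stock={"frame": 10, "vents": 4})
action, po = process_order(wh, {"frame": 2, "vents": 6})
# vents are short by 2, so a purchase order is raised for them
```

In the real system this decision is triggered by the account manager's confirmation and the stock update would flow through the stock management system rather than an in-memory dictionary.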

1.2: Issues & Objectives in Business Case

According to the case, there are three main issues. Firstly, lead times for order processing, production scheduling, stock control and purchasing are long. Secondly, there is a high possibility of mistakes, and updating the current system is ineffective. Finally, the company's system is inefficient: because it is only updated on a weekly basis, it is not always accurate. As an enterprise information system, the current system frequently delays processing information, which slows the response to changes and increases stocks and order lead-time.

Therefore, the objectives of the company are as follows: to reduce lead-time and stock levels, to introduce assembly planning to improve the effectiveness of production, to co-ordinate component supply more effectively, and to become an efficient production company.

2: Brief Review on Relevant Concepts & Approaches, Academic Interests & Practical Cases

As trade globalises, competition and standardisation increase, strategic partnering relationships strengthen, outsourcing grows and project complexity rises. Business and enterprise therefore trend gradually towards e-business and extended enterprise integration. J. Gunn (2004) observed that extended enterprise integration, or enterprise integration in American terminology, has long been foreseen as the solution to a wide range of problems, enabling companies to reduce time to market, improve quality, increase supply chain efficiency and even understand customers better. In extended enterprises, an integrated framework enables the sharing of information, services and applications beyond organisational boundaries with suppliers and customers. In addition, an internet-enabled system infrastructure acts as a networked service environment for supply chain management. In summary, extended enterprise has three core meanings: information integration, organisational relationship linkage and co-ordination, and resource sharing.

This part of the report gives a brief review of relevant concepts and approaches, academic interests and practical cases. In the e-business and extended enterprise context, the review covers enterprise system selection, implementation issues and integration approaches.

Firstly, consider enterprise system selection and implementation issues. Laudon, K.C. (2000) noted that ERP (enterprise resource planning) is a business management system that integrates all facets of the business, including planning, manufacturing, sales and finance; this is why an ERP can co-ordinate more effectively by sharing information. To achieve this integration, ERP software and automated business processes eliminate complex links between computer systems in different areas of the business. For small and medium enterprises (SMEs), there are six critical selection factors (Reuther, D. & Chattopadhyay, G., 2004): system functionality requirements, business drivers, cost drivers, flexibility, scalability, and others. System functionality requirements are the most critical selection criterion; Bernroider, E. & Koch, S. (2000) observed that this factor supports the findings of the specialty and simplicity required for a small or medium enterprise. Business drivers focus on the financial benefit to the company of the selected system, while cost drivers cover the direct cost of the implementation in terms of outlay and resources. Both flexibility and scalability show significant levels of criticality; moreover, the response to flexibility is important, as the current wisdom is to match the future (Brown, C., Vessey, I., Powell, A., 2000). Finally, the 'others' factor covers specific considerations critical to the target business.

Bingi, P., Sharma, M.K., Godla, J.K. (1999) stated that implementation issues in general have long been explored; however, the complexity of ERP makes it challenging to implement. ERP systems have been widely used by companies in developed countries. Organisations in the manufacturing, service and energy industries adopt ERP to automate the deployment and management of material, finance and human resources, streamline processes, achieve process improvement and achieve global competitiveness (Koh, C., et al, 2000).

In addition, some important factors affect the implementation of ERP, including the economy and economic growth, infrastructure, IT maturity, computer culture, business size, BPR (Business Process Re-engineering) experience, manufacturing strengths, government regulations, management commitment and the regional environment (Huang, Z. & Palvia, P., 2001).

Next, consider integration approaches. Parr, A.N. & Shanks, G.A. (2000) indicated that ERP implementation approaches can be categorised as comprehensive, vanilla and middle-road. The comprehensive approach, favoured by multinational companies, involves a total effort to implement all modules of the ERP package together with business process re-engineering. Vanilla is an approach favoured by less ambitious companies desiring less business process re-engineering and requiring ERP functionality at only one site. The middle-road approach falls between the other two extremes.

Figure 1 shows the evolution of enterprise integration approaches.

Figure 1: Evolution of Integration Approaches

According to Figure 1, data transport is the foundation of enterprise integration. Both data transport and data integration are basic forms of enterprise integration. As the business value and complexity of integration increase, the approaches progress through application integration, process integration, collaboration and ubiquitous integration in turn.

As a type of enterprise integration, process integration creates new processes and services to support actual business needs, while ubiquitous integration works anytime, anywhere and through any standard means. Ubiquitous integration is the top enterprise integration approach, with the highest business value and complexity of integration in Figure 1.

Based on the resources of e-business enterprise (2011), there are four enterprise system integration approaches: network/portal-oriented integration, business process-oriented integration, application-oriented integration and data-oriented integration. Firstly, data-oriented integration is a general and basic approach. Its purposes are to transfer, transform, synchronise, mediate, connect and harmonise data. The enterprise integration handouts of e-business enterprise (2011) explain that data-oriented integration is a set of technologies that exchange and synchronise data, in transformed formats, between different applications within and between organisations. The technical components are therefore data connectivity, transformation, and communications middleware.
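The transform-and-synchronise step that defines data-oriented integration can be sketched in a few lines. The field names and record formats below are invented for illustration; a real implementation would sit on communications middleware and a database rather than in-memory dictionaries.

```python
# Illustrative sketch of data-oriented integration: a record in one
# application's format is transformed into another application's schema
# before being synchronised into its store. All names are assumptions.

def transform(sales_record: dict) -> dict:
    """Map the sales system's field names onto the stock system's schema."""
    return {
        "sku": sales_record["product_code"],
        "qty": int(sales_record["quantity"]),   # normalise string -> int
        "customer": sales_record["buyer_id"],
    }

def synchronise(target_db: dict, record: dict) -> None:
    """Upsert the transformed record into the target application's store."""
    target_db[record["sku"]] = record

stock_db = {}
incoming = {"product_code": "PANEL-042", "quantity": "150", "buyer_id": "CAR-3"}
synchronise(stock_db, transform(incoming))
```

The key property is that each side keeps its own schema; only the transformation layer knows both formats.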

Secondly, application-oriented integration (API) uses a common interface to integrate enterprise systems by passing information between applications across an organisation or a network. It differs from data-oriented integration: the data-oriented integration interface is created by the database, whereas the application-oriented integration interface is created by the application. In addition, application-oriented integration allows access to data, business logic and application methods, so it offers more than data-oriented integration.

The third approach is business process-oriented integration (BPI). Like a workflow system, it connects and automates business processes and provides enterprises with process visibility. It integrates business processes across applications and controls distributed workflows via an event-driven platform. Compared with data-oriented integration and API, BPI integrates and manages at the business-process level, not the database or application level. Furthermore, BPI includes process control technology: a process control engine, triggers and software agents for task automation. One goal of BPI is a loosely coupled architecture, in which back-office applications communicate with front-end applications indirectly. The loosely coupled connection gives a business system greater adaptability and scalability; however, loose coupling also raises some problems of security and inefficiency in communication.
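The event-driven platform described above can be sketched as a toy process engine: steps subscribe to events, and the engine routes each event to its handlers, so back-office and front-end logic never call each other directly. Event names and payloads are invented for illustration.

```python
# A toy event-driven process engine in the spirit of BPI: process steps
# subscribe to named events, and the engine routes each emitted event to
# the registered handlers. All event names here are illustrative.

from collections import defaultdict

class ProcessEngine:
    def __init__(self):
        self.handlers = defaultdict(list)  # event name -> list of handlers

    def on(self, event: str, handler):
        """Register a process step (handler) for an event (a BPI trigger)."""
        self.handlers[event].append(handler)

    def emit(self, event: str, payload):
        """Route the event to every subscribed step."""
        for handler in self.handlers[event]:
            handler(payload)

engine = ProcessEngine()
log = []
engine.on("order_confirmed", lambda o: log.append(("check_stock", o)))
engine.on("stock_insufficient", lambda o: log.append(("purchase_order", o)))
engine.emit("order_confirmed", {"order": 1})
```

A production BPI platform adds persistence, monitoring and compensation logic, but the routing pattern is the same.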

Finally, the last approach is portal-oriented integration. An integrated business portal provides a consistent web interface to all business information and applications in a personalised way, and a platform for application integration, component development and workflow co-ordination. Portal-oriented integration has four benefits: it can rapidly deploy a complete portal; it allows for further extension; both applications and data can be brought into the integration; and it enables the automated execution of business processes throughout distributed organisations.

3: Analysis of the Issues in the Case to Identify Suitable Approaches for Extended Enterprise Integration

According to the business case, the three main issues are long lead-times, a high possibility of mistakes, and inefficient production and ineffective co-ordination. The lead-time issue covers order processing, production scheduling, stock control and purchasing. The company receives orders for over 2000 different products from five contracted car manufacturers. At the same time, in order to safeguard its supply chain, the company needs to purchase vents, frames, glove boxes, meters/gauges, steering and heater and air-conditioning controls from both domestic and overseas suppliers. Because the company receives orders only once a week and the system is updated only once a week, information is refreshed very slowly; it is easy to make mistakes on purchasing and stock levels and to respond slowly to changes, which in turn increases stocks and order lead-time. This situation leads directly to the second and third issues.

A suitable approach for the extended enterprise in this business should be able to ameliorate or solve the issues and move the company closer to its objectives. The business case states that the company is a medium-sized manufacturer supplying over 2000 different products to five contracted car manufacturers. It receives orders from each customer once a week through EDI (electronic data interchange), input automatically into a sales management system. The operating process is a long pull process, so a suitable business system should offer quicker response and more effective co-ordination, reducing the lead time for order processing, production scheduling, stock control and purchasing. According to the brief review in Part 2 of this report, data-oriented integration approaches focus on translating data and business documents from the formats used by one company into the formats used by another (Lynne, M. et al, 2002); they integrate only at the database level. BPI, by contrast, connects and automates business processes, with interfaces for data integration, process integration and process communication in a process model. Whereas data integration approaches standardise transaction formats and the names of product data, process integration approaches standardise the sequences of transactions and activities that make up a business process; by allowing related transactions to be monitored and managed, they adapt better to breakdowns and permit higher levels of automation (Lynne, M. et al, 2002). Furthermore, Net R. (2001) observed that because process integration requires standardisation or modification of source systems, or the creation of a supportive IT infrastructure, it is costlier to set up than data integration.
However, cost aside, process integration gives companies opportunities to re-engineer business processes and achieve additional business benefits beyond the data integration approach. The portal-oriented integration approach is also very capable: its interface spans all business information and applications in a personalised way. However, because the company is only a medium-sized manufacturer and portal integration is more expensive than the other approaches, it is not suitable for the current situation; there would be more risks, challenges and issues if the company adopted it. Therefore, business process-oriented integration is the suitable approach for the extended enterprise at present.

3.1: Illustration of Key Information Flows in the Extended Enterprise Which Includes the Supply Chain Partners

Figure 2: Information Flows in the Extended Enterprise

In Figure 2, the company receives orders from customers via the ERP, and the finance department sends the order information to the BPI system when the sales order is confirmed by the account manager. Stock management sends the requirements for parts and components, and the quantities of finished products, to the business system. The production schedule spreadsheet is created by the BPI system, which also updates it with new information.

3.2: Critical Analysis of the Suitability of SAP as an ERP Tool

A professional SAP website (2009) explains that SAP stands for Systems, Applications and Products: a completely integrated, enterprise-wide information system that replaces legacy systems with a series of software modules that communicate with each other seamlessly, replacing current business processes with best practice. SAP software also has some drawbacks: although an SAP system can change business processes dramatically, it is only slightly customisable. Nevertheless, SAP is an outstanding tool for ERP and e-business, with many advantages such as online integrated graphics, functionality and integration, a flexible structure, real-time information, lean implementation and individual solutions (SAP Expertise, 2011). SAP makes the company's information more meaningful: whenever the data changes, the company can instantly see the change in graphical form. Furthermore, the system's controlled customising procedures allow solutions to be created that satisfy individual requirements. In SAP ERP, all of the company's business processes are linked by data and functions, and the software covers all commercial processes and transactions commonly occurring in the company. SAP's real-time information provides good visibility of distributed data sources and automatic data transfer. The company can therefore respond more quickly to changes and avoid mistakes more effectively, which should reduce stock levels and allow more effective co-ordination with customers and suppliers.

4: Design of Solutions to the Identified Issues with Selected Enterprise Integration Approaches

Since BPI has been selected as the most suitable enterprise integration approach, this part explains the system functionalities and designs solutions for the issues identified. Given the background of the company, the integration design should take into account application complexity, cost and time; business scale and nature; business relationships; business process dynamics and function distribution; demands for real-time information; and technical standards and compatibility.

Firstly, the company should add material requirements planning (MRP) to BPI. To safeguard its supply chain, the company purchases all parts from domestic and overseas suppliers whenever stock is insufficient; this long pull process increases information delays. MRP is therefore a good tool to solve this problem and safeguard the supply chain, and it can also ameliorate the long lead times in purchasing.
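The netting logic that MRP would contribute can be sketched in a few lines (an illustrative example with invented figures, not the company's actual planning data): gross requirements per period are netted against projected stock, and a planned purchase order is released one lead time before each shortage appears.

```python
# Illustrative MRP "netting" calculation: for each period, net gross
# requirements against projected stock and release a planned order one
# lead time earlier when a shortage appears. All numbers are invented.
def mrp_planned_orders(gross_requirements, opening_stock, lot_size, lead_time):
    """Return planned order releases per period."""
    periods = len(gross_requirements)
    planned_releases = [0] * periods
    stock = opening_stock
    for t in range(periods):
        stock -= gross_requirements[t]
        if stock < 0:
            shortage = -stock
            # round the order up to a whole multiple of the lot size
            order = -(-shortage // lot_size) * lot_size
            release_period = max(0, t - lead_time)
            planned_releases[release_period] += order
            stock += order
    return planned_releases

# four weekly periods, 50 units in stock, lot size 25, one-week lead time
print(mrp_planned_orders([30, 20, 40, 10], 50, 25, 1))  # -> [0, 50, 0, 0]
```

The point of the sketch is that the purchase signal is generated ahead of the shortage rather than after it, which is exactly the information delay the current pull process suffers from.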

Next, the company should choose loose coupling and hub-and-spoke integration technology. Integration and e-business approaches include loose coupling and tight coupling. Tightly coupled applications are connected through agreed technical details. In contrast, loose coupling does not require knowledge of how the interfaces of other applications and processes are handled; an application simply sends messages to, or receives messages from, other applications. Loose coupling supports non-intrusive (loose) integration, synchronised business transactions, and the reuse and sharing of business data and processes (Papazoglou, 2006). It offers flexibility, scalability and advanced security, and enables near real-time, agile responses to business events. Loose coupling can therefore avoid the information-delay issue and improve lead times in order processing and stock control, which makes it well suited to the manufacturing company in this business case.
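The difference loose coupling makes can be illustrated with a minimal message-passing sketch (the application names and message format here are invented for illustration, not taken from the case company): the two applications agree only on a message format and a shared queue, never on each other's internals.

```python
# A minimal sketch of loose coupling via message passing: "finance" and
# "stock" only share a queue and a message format, not implementation details.
import queue

hub = queue.Queue()  # stands in for a message broker / integration hub

def finance_app_send(order_id, quantity):
    # finance publishes an order-confirmed event without knowing who consumes it
    hub.put({"event": "order_confirmed", "order_id": order_id, "qty": quantity})

def stock_app_receive():
    # stock management reacts to whatever events arrive on the queue
    msg = hub.get()
    if msg["event"] == "order_confirmed":
        return f"reserve {msg['qty']} units for order {msg['order_id']}"
    return "ignored"

finance_app_send("SO-17", 40)
print(stock_app_receive())  # -> reserve 40 units for order SO-17
```

Because neither side calls the other directly, either application can be replaced or modified without touching the other, which is what makes the approach non-intrusive.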

Figure 3: Features of Common Integration Technologies

Figure 3 shows the features of point-to-point integration, hub-and-spoke for SMEs, web forms (extranet interfaces) and XML mail/messages. Of these four common integration technologies, point-to-point integration and hub-and-spoke are clearly the stronger two. Hub-and-spoke requires less integration knowledge than point-to-point integration, and it also has lower set-up and maintenance costs. The limitations of a point-to-point topology include costly maintenance, limited reusability, an invasive integration approach requiring modification of source applications, and poor scalability. On the other hand, Lynne et al. (2002) note that hub-and-spoke approaches to external integration represent an important alternative to one-to-one integration approaches, and that they come in two flavours: data integration only, and process (plus data) integration. Hub-and-spoke integration technology can reduce the high error rate and the lead time in production scheduling. In summary, loose coupling combined with hub-and-spoke integration technology is an effective solution to these issues.
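One standard way to quantify this comparison (a general argument about topologies, not data taken from figure 3) is to count the interfaces each topology requires: connecting n applications pairwise needs n(n-1)/2 links, while a hub needs only one spoke per application.

```python
# Why hub-and-spoke scales better than point-to-point: interface counts
# for n integrated applications (a standard textbook argument).
def point_to_point_links(n):
    # every pair of applications needs its own interface
    return n * (n - 1) // 2

def hub_and_spoke_links(n):
    # one connection per application, all routed through the hub
    return n

for n in (5, 10, 20):
    print(n, point_to_point_links(n), hub_and_spoke_links(n))
```

For twenty applications the point-to-point topology already needs 190 interfaces against the hub's 20, which is why set-up and maintenance costs diverge so quickly.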

To support effective business intelligence, business integration requires real-time or responsive data access and good-quality data from all relevant sources and business functions. Useful measures include transforming data into valuable information, identifying risks and opportunities, monitoring and assessing business performance, and supporting decision making for enhancement and optimisation. According to the background, the company receives orders from each customer only once a week, which makes production inefficient, so the frequency of order receipt should be increased. Overall, these solutions should shorten the long lead times in order processing, production scheduling, stock control and purchasing, and improve the speed of information processing so that the system is updated effectively. Together they provide more effective coordination of component supply and assembly planning in e-business.

5: Discussion and Conclusion

This part of the project discusses the implementation issues, achievements and limitations of the solutions, and draws conclusions. The company is a medium-sized manufacturer supplying over 2,000 different products to five contracted car manufacturers. Its purchasing model is to place purchase orders for the required components to complete a sales order whenever stock is out or insufficient. The company's business processes are: receiving orders; purchasing components (when out of stock or insufficient); checking customer orders (by the production manager); creating a schedule for the assembly line to produce the order; updating the assembly line; and packing and shipping finished products. The three main issues identified earlier are: long lead times in order processing, production scheduling, stock control and purchasing; ineffective updating of the current system and a high likelihood of mistakes; and an inefficient company system. Analysis of these issues shows that BPI is the most suitable approach for extended enterprise integration.

If the manufacturing company attempts to implement the proposed approaches and solutions, it will need to invest heavily in equipment, and changing the old enterprise system and operating the new solutions in practice will be difficult. Furthermore, a hub-and-spoke topology has two limitations when it runs on a single server: a single point of failure exists, in that the failure of one hub propagates throughout the system, and scalability across the enterprise is limited. Loose coupling also has two disadvantages: security concerns and inefficiency in communication. In summary, the company should persistently aim for lean manufacturing with short lead times and effective communication.


References

Bemroider, E. & Koch, S., 2000, Differences in Characteristics of the ERP Selection Process Between Small or Medium and Large Organisations, Proceedings AMCIS, pp. 1022-1028

Bingi, P., Sharma, M.K. & Godla, J.K., 1999, Critical Issues Affecting an ERP Implementation, Information Systems Management, Vol. 16, No. 3, Boston MA, pp. 7-14

Brown, C., Vessey, I. & Powell, A., 2000, The ERP Purchase Decision: Influential Business and IT Factors, Proceedings AMCIS, pp. 1029-1032

Christiaanse, E., Sinnecker, R. & Mossinkoff, M., 2001, The Impacts of B2B Exchanges on Brick and Mortar Intermediaries: The Elemica Case, 9th European Conference on Information Systems, Bled, Slovenia

Gunn, J., 2004, Extended Enterprise Integration, BT Technology Journal, Springer Netherlands, p. 93

Huang, Z. & Palvia, P., 2000, The Impact of ERP on Organizational Performance: Evidence from Case Studies, Proceedings Decision Science Institute Annual Meeting

Huang, Z. & Palvia, P., 2001, ERP Implementation Issues in Advanced and Developing Countries, Business Process Management Journal, Vol. 7, No. 3, pp. 276-284

Koh, C., Soh, C. & Markus, M.L., 2000, A Process Theory Approach to Analyzing ERP Implementation and Impacts: The Case of Revel Asia, Journal of Information Technology Cases and Applications, Vol. 2, No. 1, pp. 4-23

Kuhn, H., Bayer, F., Junginger, S. & Karagiannis, D., 2003, Enterprise Model Integration, Prague, Czech Republic, LNCS 2738, pp. 379-392

Laudon, K.C. & Laudon, J.P., 2000, Management Information Systems, Prentice Hall International, 6th Edition, pp. 22-23

Lee, Z. & Lee, J., 2000, An ERP Implementation Case Study from a Knowledge Transfer Perspective, Journal of Information Technology, pp. 281-288

Lynne, M., Axline, S., Edberg, D. & Petrie, D., 2002, The Future of Enterprise Integration: Strategic and Technical Issues in External Systems Integration, Oxford University Press

Papazoglou, M.P., 2006, International Journal of Web Engineering and Technology, Inderscience Publishers, Vol. 2, No. 4, pp. 320-352

Parr, A.N. & Shanks, G., 2000, A Taxonomy of ERP Implementation Approaches, Proceedings of the 33rd Hawaii International Conference on System Sciences

Reuther, D. & Chattopadhyay, G., 2004, Critical Factors for Enterprise Resources Planning System Selection and Implementation Projects within Small to Medium Enterprises, Micreo Ltd Press, Australia, p. 851

Resources of E-business Enterprise, 2011, Enterprise Integration in Enterprise Information Systems, VITAL, University of Liverpool

Sherlock, J. & Reuvid, J., 2005, Handbook of International Trade: A Guide to the Principles and Practice of Export, pp. 353-365

SAP Expertise, 2011, Advantages of SAP R/3, viewed 7 May 2011

SAP Expertise, 2009, What Can SAP Do?, viewed 6 May 2011

SAP Techies, 2009, What are the Advantages of SAP?, viewed 4 May 2011


Study into Drug Discovery and Design


1. Background

Drug discovery and design is fuelled by the need for appropriate and effective treatments for disease. Initially, discovery was achieved via empirical screening of vast libraries of molecules, which was incredibly effective: the majority of drugs currently in clinical use were discovered this way. However, with improved technology and a greater need for newer, more effective medicines, structural biology has become a prominent tool.

The general principles behind drug discovery briefly discussed here include target identification and validation, and hit discovery or design to generate a lead, which is then optimised.

1.1 Target Identification and Validation

A target is often a protein; however, it can also be RNA, DNA or a carbohydrate. People who suffer from degenerative, autoimmune and genetic diseases can be screened for genetic differences through genome-wide association studies (Grupe et al., 2007) or systematic meta-analysis (Bertram et al., 2007). Infective organisms have genes that are very different from human genes and that may be essential in the life or infective cycle of the organism; these are useful targets that can be identified through bioinformatics analysis or loss-of-function mutant phenotype studies (Crellin et al., 2011). A structure-based technique is structural genomics, the study of the structures of all proteins in a genome.

1.2 Hit Identification and the Generation of a Lead Series

Once a target has been identified and validated, small molecules that bind to it and in some way alter its function must be discovered or designed; again, there are a number of ways in which this can be done. Empirical screening has identified a number of drugs; however, structure-based techniques are increasingly being used. 3D structures from X-ray crystallography and, to a lesser extent, nuclear magnetic resonance (NMR) provide the information required for computational docking and screening methods. This has been useful, for example, in the in-silico screening of G-protein coupled receptor (GPCR) binding molecules (Richardson et al., 2007). However, most structure-based drug designs have come from compounds designed on the basis of 3D structures obtained from X-ray crystallography or NMR, or via biophysical screening techniques involving surface plasmon resonance (SPR) or NMR. Structure-based screening methods often require fragment-based libraries. These cover a greater number of potential molecules within smaller libraries of compounds, because the fragments carry no large functional groups that would inhibit binding, making them attractive starting points for hit discovery (Nordstrom et al., 2008).

To validate hits or measure their properties, crystal structures can be evaluated, and additional information from secondary SPR screens, thermal data from isothermal titration calorimetry (ITC) and differential scanning fluorimetry (DSF) can be used to complement the data (Retra et al., 2010).

1.3 Lead Optimisation

Once validated, the lead structure is optimised. This can be based on ligand-bound structures from NMR and X-ray crystallography or, increasingly, on in-silico modelling based on the pharmacophore hypothesis, which involves evaluating the chemical and functional groups that may bind important sites on the target molecule (Voet et al., 2011).

The key structural techniques involved in structure-based drug design are X-ray crystallography and NMR, though mass spectrometry can also be used to observe proteins in multi-protein complex interactions. X-ray crystallography generates 3D structures of the protein of interest from crystals, which are produced by varying conditions such as buffers, pH, temperature and the format (nanodrop, hanging drop and others) (Giege & Sauter, 2010). These crystals are homogeneously packed and kept in cryo-protecting buffers so that they can be stored in liquid nitrogen, which protects them from damage by the X-rays used to generate the 3D structural information (Philippopoulos et al., 1999). Once a structure is obtained, if the ligand of interest is soluble and has a relatively high affinity for the target protein, co-crystallisation studies can be used to examine the interactions of different ligands with the target protein. This is not always possible; however, there have been improvements since the advent of fragment-based libraries. NMR-based structures are more time-consuming to construct, requiring the analysis of NMR peaks from different spectra to associate them with specific nuclei and so generate the restraint information needed to produce a structure. Though more time-consuming, NMR is incredibly useful when other forms of structural information are not available (Zou, 2007). In structure-based drug design, NMR has been most useful in ligand-protein interaction studies (Pellecchia, 2005), but it has also been used to screen libraries for hit molecules (Murray et al., 2010). Mass spectrometry can be used at each stage of drug discovery (Deng & Sanyal, 2006), especially as technology advances; however, it is much more limited than the methods already mentioned and so will not be discussed in any great detail.

To complement these techniques there is a vast array of technologies available, a few of which are mentioned below. SPR measures interactions between the target protein and potential hits in biosensors, and can also be used in hit validation and optimisation in secondary screens (Retra et al., 2010). Since the advent of fragment-based screening, SPR has become much more popular and will be discussed in greater detail later. Other complementary techniques include ITC, which measures entropy and enthalpy to determine their contributions to ligand interaction, and therefore gives a clue as to what sorts of alterations would be required to optimise binding. DSF is also widely used, most often to determine where the hit compound binds on the target molecule (Domigan et al., 2009).
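The way ITC data guides optimisation can be made concrete with the textbook thermodynamic relations ΔG = RT ln(Kd) and -TΔS = ΔG - ΔH (a general illustration, not calculations from the cited studies):

```python
# Textbook binding thermodynamics: ITC measures the enthalpy change dH and
# fits a dissociation constant Kd; the free energy dG = RT*ln(Kd) and the
# entropic term -T*dS = dG - dH then show whether enthalpy or entropy
# dominates binding, and so what kind of alteration would improve it.
import math

R = 8.314  # gas constant, J/(mol*K)

def binding_free_energy(kd_molar, temp_k=298.15):
    """dG of binding in kJ/mol from a dissociation constant (dG < 0 = favourable)."""
    return R * temp_k * math.log(kd_molar) / 1000.0

def entropic_term(dG_kj, dH_kj):
    """-T*dS in kJ/mol; a negative value means entropy also favours binding."""
    return dG_kj - dH_kj

dG = binding_free_energy(12e-9)  # e.g. a hypothetical 12 nM inhibitor
print(round(dG, 1))  # prints -45.2
```

A tighter Kd makes ΔG more negative; splitting that ΔG into its ΔH and -TΔS contributions is what lets ITC suggest whether to pursue extra hydrogen bonds (enthalpy) or rigidify the ligand (entropy).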

1.4 Summary and Aims

To summarise, the stages of drug design are target identification and validation; hit identification and generation of the lead molecule; and optimisation of the lead into a drug for testing and clinical trials. Target identification may use structural techniques such as structural genomics. Hit identification makes greater use of structural information from X-ray crystallography or NMR, together with computational in-silico drug design and screening methods including high-throughput screening (HTS), SPR library screens and in-silico computational screening. Optimisation generally uses structural information from X-ray crystallography, computational methods and, in some cases, NMR, aided by SPR, DSF and ITC, to increase binding affinity and then improve pharmacokinetic properties.

To assess their usefulness in structure-based drug discovery and design, case studies will be analysed to examine how these techniques have furthered the production of clinically used drugs, or at least increased our understanding so that it may be applied in future drug design attempts.

2. Case Studies

2.1 Nuclear Magnetic Resonance in Fragment Based Library Screening and X-ray Crystallography in the Design and Optimisation of Hsp90 Inhibitors

Heat shock protein 90 (Hsp90) is a human chaperone involved in stress responses, but it is also required for the essential process of client protein maturation. Many of its client proteins are involved in cell signalling, proliferation and growth (Biamonte et al., 2010), processes associated with a number of different cancers. The overexpression or inappropriate activation of Hsp90 is therefore also associated with cancer, and a number of drugs have been produced that aim to inhibit the essential ATPase activity of Hsp90. Hsp90's involvement with so many different client proteins makes it a good target for drug development, and many drugs that target it are already available. However, there are problems concerning bioavailability, toxicity and increasing resistance, so newer, more effective drugs are required (Van Montfort & Workman, 2009).

As can be seen in figure 1, Hsp90 is active as a dimer; the N-terminus of each subunit contains a functionally essential ATPase site (Prodromou et al., 1997), the middle domain regulates the interaction of Hsp90 with its client proteins (Meyer et al., 2003), and the C-terminal region is responsible for dimerisation (Minami et al., 1994).

Initially, drugs for Hsp90 were discovered using binding and cell-based assays; more recently, however, drugs generated using structure-based techniques have entered clinical trials. These target the ATP-binding site essential for function, and so required a good understanding of this site. As can be seen in figure 2, there are critical hydrogen bonds between the adenine of the bound ATP and the side chains of the amino acid residues Thr184 and Asp93. These would therefore be ideal targets in the design of an inhibitor molecule (Obermann, 1998).

There are examples of inhibitors identified using NMR and X-ray crystallography screening of fragment libraries, and as described above, fragment-based libraries generate useful starting hits (Hartshorn et al., 2005). In an NMR fragment-based library screen, the displacement of low concentrations of ADP (the product of the ATPase domain) was measured using NMR waterLOGSY (Water-Ligand Observed Via Gradient SpectroscopY) (Dalvit et al., 2001), which indicated when a fragment had bound and could be chosen for further study (Murray et al., 2010).

Murray et al. discovered a number of binding fragments, two of which became lead compounds. The first was compound 1 (fig. 3), which made extensive hydrogen-bonding interactions with the key residue Asp93 (see figure 2) and with a number of water molecules found deep within the binding pocket, as can be seen in figure 4a. However, as figure 4b shows, compound 1 does not efficiently fill the lipophilic pocket defined by the residues Met98, Leu107, Val150, Phe138 and Val186; additionally, compound 1 was found not to be particularly stable, as it was twisted about the bond between the pyridine and the pyrimidine.

Virtual screening for analogues was initially used to produce more stable forms of compound 1, and though this yielded higher-affinity binding molecules, their torsion profiles indicated that steric clashes involving the methoxy group at position R2 (fig. 5a) would result in unfavourable binding. Instead, using SAR (structure-activity relationship) analysis, it was predicted that exchanging the methoxy group for a chloro group would improve binding significantly, resulting in compound 9, the basis of further optimisation, outlined as a chemical structure in figure 5b with the positions for optimisation labelled R4 and R5. This was done using computer-based modelling techniques, and it illustrates how useful such methods can be when enough information about the target protein and the current ligands is available. The methoxy and chloro groups added at positions 4 and 5 of the upper phenyl ring increased the binding affinity for the lipophilic binding pocket to 12 nM.

Once the affinity had been increased, cellular activity had to be improved; this was achieved by adding a morpholine group at position 5 (outlined in figure 5b), a decision based on the crystal structure. The result was compound 14, which is currently going through clinical trials for the treatment of different cancers. As can be seen in figure 6, compound 14 (blue) binds in much the same way as compound 1 (orange), but makes more extensive interactions with the lipophilic pocket via an extended phenyl ring.

The second line of lead compounds Murray et al. followed originated from compound 3 (fig. 7), which, in the initial NMR waterLOGSY screen, appeared to bind rather inefficiently. However, on inspection of the X-ray crystal structure of Hsp90 bound to compound 3 (fig. 8), it was decided that the compound offered a quick and attractive optimisation route. Its binding with water molecules and one of the key residues, Thr184, provided only a relatively weak interaction on its own; but, if optimised, the compound could also make direct interactions with the alternative key residue Asp93 and with additional endogenous water molecules.

Using trial and error, the authors found that a tert-butyl group filled the lipophilic pocket appropriately with fewer steric clashes, resulting in compound 18, the lead compound that was further optimised to make more effective interactions within the lipophilic pocket. Modelling studies suggested targeting interactions with the side chain of residue Lys58. Compound 24, an isoindoline, filled the pocket with a phenyl ring that interacts with residues Ala55, Lys58 and Ile96, completely displacing the Lys58 side chain, as can be seen in figure 9a.

In other inhibitors, a hydroxyl (OH) group at position 2 resulted in the greatest affinities; however, compound 18 had its OH at position 4, and replacing it with an OH at position 2 gave a lower-affinity compound. Adding an OH at position 2 of compound 24, as well as the OH at position 4, produced compound 31, which interacts directly with Asp93, retains the interactions with Thr184 and forms additional hydrogen bonds with water molecules, as can be seen in figure 9b. The figure also shows compound 31 (blue) binding in much the same way as compound 3, but filling the lipophilic pocket more efficiently and making more extensive interactions. This greatly increased the binding affinity, and compound 31 is now going through clinical trials.

This illustrates the importance of structure-based approaches such as NMR and X-ray crystallography in the identification and optimisation of lead compounds, as well as the contribution that computer-based methods can make. X-ray structures were particularly helpful in the case of compound 3: without them, compound 3 would have been dismissed as an inefficient binder. Additionally, all of the kinetic data that supported the optimisation and validation steps was obtained using ITC.

Further work should concentrate on improving the pharmacokinetic properties and the drug's tissue distribution.

2.2 Crystal Structures from X-ray Crystallography and Nuclear Magnetic Resonance in In-Silico Drug Design, and 3D Drug Development – Human Immunodeficiency Virus

HIV (human immunodeficiency virus) is the causative agent of acquired immune deficiency syndrome (AIDS), and statistics show that by 2005 approximately 38 million people worldwide were living with HIV (Beyrer, 2007). HAART (highly active anti-retroviral therapy), established in the 1990s, makes living with HIV bearable by keeping viraemia low and CD4+ (cluster of differentiation 4) cell counts high enough to protect against opportunistic pathogens. However, with increasing resistance and the negative side effects of current drugs, constant improvement and newer drugs are required. The protease inhibitors were revolutionary in HIV treatment, starting with the rationally designed Saquinavir, approved for use in 1995 (Roberts et al., 1990). HIV protease is a good target, being essential in the life cycle of the virus, and though Saquinavir was very successful, resistance quickly arose, so a greater understanding of the protease structure and biochemistry was required. This was necessary not only to target residues less likely to give rise to resistance, but also to improve the pharmacokinetic properties, producing non-peptidic rather than peptidic drugs to reduce toxicity and improve half-life (Ghosh et al., 2008).

Multiple inhibitors designed with the use of X-ray crystallography have entered clinical trials and been approved by the FDA (Food and Drug Administration) for use in HAART. It was determined that, by targeting the protease backbone residues, it would be possible to generate drugs less likely to provoke resistance, because mutations there are rare, and those that do occur seldom distort the overall conformation. Such a site is therefore more conserved and a better drug target (Ghosh et al., 2011).

Saquinavir, though a peptidic drug with poor pharmacokinetic properties, did bind the backbone residues (albeit relatively weakly); it also bound outside the binding envelope, the region that positions the gag-pol polyprotein for cleavage. Mutations are far more common, and better tolerated, outside the envelope region: such mutations would not reduce virion viability but would prevent inhibitor binding (King et al., 2004). Amprenavir, with a single-ringed tetrahydrofuran (THF) group, was designed using Saquinavir as a scaffold, to generate a related but non-peptidic cyclic compound that would bind and inhibit the active site of the protease much as Saquinavir does, but with increased half-life, better pharmacokinetic properties, increased backbone binding and more specific binding to the active-site envelope. Amprenavir, whose chemical structure is shown in figure 9, binds the S1S2/S1'S2' binding envelope of the protease, closely interacting with the backbone residues Asp29 and Asp30 as well as many other residues (Kim, 1995). The interactions with Asp29/30 were relatively weak, and it was thought that the THF group, believed to contribute favourable enthalpy interactions, could, if increased in size, improve the backbone and hydrophobic interactions with the residues that make up the lipophilic flap.

Using Amprenavir as a scaffold, Darunavir was developed: a bis-THF compound with a double ring, as can be seen in figure 10. This evolution from a single ring to a double ring resulted in more extensive interactions with the key backbone residues (Tie et al., 2004); as figure 10 shows, there are far more hydrogen bonds between the bis-THF complex (pink) and the backbone residues than between the single-ringed THF complex (green) and the same residues.

To measure the ability of Darunavir to withstand mutations in HIV protease, Tie et al. co-crystallised Darunavir with both wild-type and mutant versions of the protease. As can be seen in figure 11, the wild-type hydrogen bond at 4.1 Å, indicated by the purple dashed lines, is retained in the mutant at a distance of 3.8 Å (blue). This suggests that Darunavir is robust and will continue to be active against resistant strains of HIV.

The inherently high mutation rate of the HIV genome, due to its error-prone polymerase, means that strains resistant to Darunavir will emerge in the future, and it is always necessary to stay one step ahead. Darunavir has therefore been used in modelling studies to design optimised structures that are even more potent than Darunavir while retaining its favourable pharmacokinetic and cellular properties (Ghosh et al., 2011).

Figure 12 shows the position of compound 1b (green), a Darunavir-like compound, in the hydrophobic pocket of the HIV-1 protease. As can be seen, it makes a number of van der Waals interactions with the residues Ile47, Val32, Ile84, Leu76 and Ile50', which make up the hydrophobic flap, as well as hydrogen bonds with Asp30 (3.5 Å long) and Asp29 (2.9 Å long). To improve the interaction distance with the NH group of Asp30, Ghosh et al. modelled an increase in the phenyl ring size of the P2 ligand, in an attempt also to increase the flexibility of the structure. This was achieved by adding an amide group, which also increased the hydrophobic interactions with the lipophilic-pocket residues. The pink structure of compound 35a in figure 12 binds in much the same way as compound 1b, but makes more extensive interactions with the key residues and fills the lipophilic pocket more effectively. The compound was then synthesised and its Ki and IC50 values measured against those of 1b; it proved a far more efficient inhibitor, and thus a potential clinical candidate.
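As background to Ki and IC50 measurements of this kind, the two quantities are commonly related by the Cheng-Prusoff equation for a competitive inhibitor (a standard relation; the numbers below are illustrative, not from the study):

```python
# Cheng-Prusoff relation for a competitive inhibitor: the measured IC50
# depends on the substrate concentration used in the assay, while Ki is an
# assay-independent constant. Values here are invented for illustration.
def cheng_prusoff_ki(ic50, substrate_conc, km):
    """Ki = IC50 / (1 + [S]/Km); all concentrations in the same units."""
    return ic50 / (1.0 + substrate_conc / km)

# e.g. an IC50 of 100 nM measured at [S] = Km corresponds to Ki = 50 nM
print(cheng_prusoff_ki(100.0, 1.0, 1.0))  # -> 50.0
```

This is why both values are usually reported: IC50 reflects the specific assay conditions, whereas Ki allows inhibitors measured in different assays to be compared directly.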

There are many examples of proteins that cannot be crystallised; to obtain structural information for these, NMR can be used rather than X-ray crystallography. As an example, the HIV protease structure has been constructed using NMR (fig. 13). An X-ray crystal structure is a static representation of a dynamic system in a relatively unnatural environment, whereas NMR is performed in solution, is believed to be more biologically relevant, and can in some circumstances be used to observe dynamic protein systems (Zou, 2007). NMR is far more time-consuming, however, and the inherent flexibility of proteins results in areas of low resolution in the structures, more so than with X-ray crystallography.

NMR has been used more successfully in hit identification, as has been discussed in the example of Hsp90 inhibitors.

2.3 The use of Surface Plasmon Resonance, Isothermal Titration Calorimetry and In-Silico Drug Design to Complement Structural Techniques Such as X-Ray Crystallography and Nuclear Magnetic Resonance

As technology improves, newer methods have evolved that complement existing ones. These include SPR, which detects interactions between the target protein and a ligand and is used in primary fragment-based library screens to identify hits, or in secondary screens to identify or validate them (Retra et al., 2010). As previously discussed, fragment-based screening can yield attractive starting points for lead optimisation (Erlanson, 2006). SPR can be used in a number of ways: in chemical microarrays, in SPR imaging, in secondary screens of hits found through high-throughput screens, and in primary biosensor screens.

In primary screens, a biosensor is set up with the target molecules immobilised on chips; this has been used successfully to identify hits without requiring other forms of structural information (Nordstrom et al., 2008). The hit molecules can then be integrated into lead series and optimised using other structural techniques, such as X-ray crystallography and NMR, to obtain clinical candidates (Huber, 2005).

The matrix metalloproteinases (MMPs) are a group of proteins found in many different species; in humans there are approximately 12, involved in tissue remodelling and in degrading extracellular matrix molecules such as elastin, collagen and laminin (Demedts et al., 2006). MMP-12, implicated in human diseases such as emphysema and chronic obstructive pulmonary disease (COPD), is the target of a number of therapeutic drugs, all of which have harmful side effects, so new drugs are required (Nordstrom et al., 2008). Using SPR and ITC in conjunction with NMR or X-ray crystal structures, Nordstrom et al. produced an in-silico drug design based on the binding sites identified in the crystal structures, using pharmacophore properties to model a binding molecule. Mutant proteins were designed in silico and then generated and immobilised on chips alongside wild-type proteins, as depicted in figure 14. Molecules designed in silico could then be screened against the different proteins on the chip.

For screening purposes SPR is limited: because the immobilised proteins degrade, the number of molecules that can be screened against the biosensor is relatively small (only a couple of hundred, compared with thousands in HTS). The library must therefore be carefully designed, using in silico modelling, docking and screening, or with extensive knowledge and understanding of the target.

Alternatively, SPR can be used in hit validation for lead series initiation, assessing the enthalpy and kinetics of binding, as was the case for compstatin analogues designed to increase binding affinity for C3b in the treatment of the many human disorders involving over-activation of complement (Qu et al., 2010). C3b is an appropriate target because it is involved in numerous disorders, including neurodegenerative disease and sepsis, and has also been linked to stroke. Compstatin is a good peptidic inhibitor, binding and inhibiting C3b regardless of the initiation pathway. However, because of its peptidic nature compstatin is not very stable, with a short half-life in vivo, and given the low concentrations of C3b found in plasma, a higher-affinity compound with better pharmacokinetic properties would be ideal.

N-methylations at different positions on the compstatin scaffold were analysed, with changes in binding affinity measured using SPR and confirmed using ITC. The conclusion was that a compound retaining a rigid structure both in solution and in the bound state would bind with increased enthalpy, without the decrease in entropy seen in previous designs (Qu et al., 2010).

This demonstrates the powerful applications SPR and ITC can have in drug discovery and design, and how, in conjunction with in silico computational techniques, they can complement X-ray crystallography and NMR.

2.4 The Difficulties Associated with Membrane Proteins – The B2 Adrenergic Receptor: an Example of a G-Protein Coupled Receptor

Crystallisation lies at the heart of structural biology, and even with the option of NMR there are still severe limitations that mean many proteins, particularly membrane-bound proteins, cannot be crystallised and thus cannot be visualised as 3D structures. This is particularly problematic for structure-based drug design, as some 50% of drugs target G-protein coupled receptors (GPCRs) alone, not counting the many other families of membrane-bound proteins. GPCRs are a superfamily of eukaryotic proteins, all with seven transmembrane helices, that are important in many crucial signalling processes (Lundstrom, 2005).

The general problem with studying membrane proteins is the difficulty of solubilising them and of obtaining enough protein to work with. To obtain large amounts, recombinant protein is required, and for human proteins this is a particularly difficult task (Mancia et al., 2007). The lack of structural information limits our understanding of ligand binding, as well as allosteric control and active-site location (Summers, 2010). There have recently been major advances in obtaining the structures of GPCRs, with structural information now available for rhodopsin, the A2A adenosine receptor, the B1 adrenoreceptor and the B2 adrenergic receptor. The problems to overcome were obtaining enough usable protein (and thus an appropriate expression system), the intrinsic flexibility and consequent instability of these receptors, and finding the exact solubilising formula for each protein. Once these are achieved, the crystallisation process for membrane proteins is no different from that for globular proteins (Velipekka et al., 2010).

To stabilise the different GPCRs, rounds of mutagenesis were used for the B1 adrenoreceptor (Warne et al., 2008), while in the case of the B2 adrenergic receptor and the A2A adenosine receptor, the flexible intracellular loops were stabilised by replacing them with the easily crystallised and inherently stable T4 lysozyme (Rosenbaum et al., 2007).

Therapeutics aimed at the A2A adenosine receptor could help in the treatment of seizures, asthma, Parkinson’s disease, pain and many other neurological problems (Jaakola et al., 2009). The crystal structure of the A2A adenosine receptor with the antagonist ZM241385 enabled the determination of residues important in ligand binding, and thus generated the information required for computational modelling studies to suggest residues likely to be important in inhibitor binding. Figure 15 depicts the binding of the antagonist, hydrogen-bonded to Asn253, aromatically stacked against Phe168 and interacting hydrophobically with Ile274. An understanding of these interactions greatly helps in the elucidation of therapeutically important binding molecules (Jaakola et al., 2009).

B2-adrenergic receptors, a class of GPCR, are important in smooth-muscle-related diseases such as asthma (Cherezov et al., 2007). Cherezov et al. made a B2-adrenergic receptor–T4 lysozyme fusion protein to enable crystallisation with carazolol at 2.4 Å. Carazolol has high affinity for the receptor, lying adjacent to and making significant interactions with the residues Phe289, Phe290 and Trp286, as seen in figure 16b. It reduces the basal level of activity of the receptor via its interactions with Phe289/290, which result in the inactive Trp286 state shown in figure 16.

This understanding of ligand binding, together with an in-depth knowledge of the residues involved, could, if expanded upon, increase the possibilities for structure-based drug design and modelling.

3. Conclusion

3.1 Summary of Main Points and Advantages of Structure-Based Techniques

The power of structural biology is apparent: it provides a clear physical picture of the target protein. It enables the identification of hit compounds via X-ray crystallography and, more commonly, NMR, supported by the complementary techniques of computational analysis, SPR, ITC and DSF. These techniques can validate hit compounds for entry into lead series, and can then be used to optimise leads to generate clinically usable compounds.

The importance of structural biology is therefore easy to see: it has been successful in generating clinically used drugs, the HIV protease inhibitor Darunavir being just one of many examples.

3.2 The Limitations of Structure-Based Techniques

Of course, these techniques are not without their limitations. X-ray crystal structures are static freeze-frames of a dynamic system, so we cannot be certain that what we see is biologically relevant rather than an artefact. Both X-ray crystallography and NMR suffer from the inherent instability and flexibility of proteins. There are methods to improve 3D structure determination, as seen in the crystallisation of a membrane protein, the B2 adrenergic receptor (Rosenbaum et al., 2007), suggesting that these limitations are not permanent and can be overcome. Many proteins cannot be crystallised, and although there have been recent breakthroughs, as with the GPCRs, the vast majority have not been visualised, even though 50% of drugs are aimed at them.

Complementary techniques such as SPR, ITC and DSF have successfully been used to identify hit molecules (Nordstrom et al., 2008) and to validate or optimise leads (Huber, 2005). Unfortunately these too have their faults, requiring smaller screening libraries and constant replacement of the proteins involved during screens.

To overcome this, computer-based in silico screening and design processes have been developed, which under certain circumstances have been used efficiently, as in the optimisation of Darunavir (Ghosh et al., 2011); however, they have serious limitations. Rhodopsin, the first human GPCR crystallised, served as a model for all GPCRs, and in silico modelling studies used it to generate binding molecules; but once the A2A adenosine receptor was visualised via X-ray crystallography, it became apparent that this was a far too simplified view (Jaakola et al., 2009).

3.3 Concluding Thoughts and Future Advances

To conclude, there are clear limitations to the structure-based design of therapeutic drugs, and further advances in technology and understanding are required before every form of the technology can be used efficiently and in an integrated fashion. Structure-based techniques do not in themselves speed up the process of drug discovery; however, clear advances have been made through their use. They should therefore continue to be applied in conjunction with current technologies to keep improving the therapeutics in use.

Future advances should include improved recombinant protein technologies and purification procedures to obtain the large quantities of protein required, improved detergent mixtures for membrane proteins, and better crystallisation procedures in general to increase resolution. As well as finding hits for lead series, structural techniques should also focus on increasing the number of targets, so that whole new sets of drugs can be added to combination therapies such as HAART in the treatment of HIV, in an attempt to overcome the problem of resistance.


Innovations in materials teaching: design of demonstrations for lectures


1. Problem Background

Teaching materials science involves memorising a more significant amount of data than almost any other subject in engineering. Dealing with this torrent of data can become tedious for a lot of students and result in lowered concentration and lower levels of data retention. This is also because most lectures are designed for people whose visual sense is dominant, omitting kinaesthetic learners and providing poorer retention for audio-centred individuals, who often learn and remember data best when it is accompanied by sound. In preparing this project I set out to discover alternative ways of presenting information that appeal to a wider variety of people, to raise engagement and concentration levels in and out of lectures.

1.1 Introduction to Experiments and Reasoning Behind the Choices

a) Powder Metallurgy and Sintering

For the experiment showing the procedures and technologies used in the manufacture of sintered products, I decided to use a novel PMC technology that allows metal sintered products to be manufactured at low temperature and pressure. Those qualities were the main reason for this choice, as they offer the possibility of demonstrating the process in a lecture theatre.

b) Composites
c) Coatings

Although the importance and omnipresence of coatings is an obvious fact to many engineers already in industry, students often overlook it, or its importance is diminished by “fancy” or exotic materials. I therefore decided to recreate an experiment that I had seen conducted previously. The fluidisation method is a simple way of introducing students to the problems and challenges, as well as the fascinating technologies and possibilities, behind coating technology.

d) Ultrasonic Testing

Non-destructive testing is a huge part of industry, especially in heavy sectors such as shipbuilding or heavy machinery production, in which quality testing after the initial testing and process-preparation procedure is often impossible until a massive amount of work has already been done and prototyping is not an option. Non-destructive testing can quickly verify whether the choice of technology or procedure was correct, without the need to produce and destroy a large number of prototypes. Its main use is testing a finished product before assembly or, indispensably in those areas, checking existing equipment and installations for developing faults or investigating the reasons for failures. I have chosen to present a proposal for an exercise using an ultrasonic-wave-emitting probe to test for flaws in the homogeneity of materials.

e) Hardness Testing

Hardness testing is another pivotal cog in the industrial machine; it is responsible for the consistency and quality of most produced materials and of the processes used. In my experimental exercise I chose the Vickers scale, as it allows all types of material to be tested, has one of the widest ranges of all hardness tests, and its results are independent of the size of the indentation left by the probe.
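The Vickers number itself follows directly from the applied load and the mean indentation diagonal, HV = 2F sin(136°/2)/d² ≈ 1.8544 F/d², with F in kgf and d in mm. A minimal sketch of the calculation, using made-up load and diagonal values for the example:

```python
import math

def vickers_hardness(load_kgf: float, mean_diagonal_mm: float) -> float:
    """Vickers hardness number: HV = 2 F sin(136 deg / 2) / d^2 ~= 1.8544 F / d^2.

    load_kgf         -- applied load F in kilograms-force
    mean_diagonal_mm -- mean of the two indentation diagonals d in mm
    """
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / mean_diagonal_mm ** 2

# Example: a 30 kgf load leaving a 0.5 mm mean diagonal
hv = vickers_hardness(30, 0.5)
print(f"HV ≈ {hv:.0f}")  # prints "HV ≈ 223"
```

Because indentations made by the pyramidal indenter are geometrically similar, this number stays essentially constant across loads, which is why a single scale covers soft and hard materials alike.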

2. Powder Metallurgy and the Precious Metal Clay (PMC) Method

2.1 Introduction

“The powder metallurgy principle of shaping metallic objects, without melting, from powdered materials can be traced back… ancient Egyptian iron implements which date from at least 3000 BC” (Indian Journal of History of Science, 18(1), 109–114 (1983)). It was the technology of metal powders, especially tungsten carbide machining tools, that allowed Nazi Germany to mass-produce tanks shielded in high-strength steel alloys, with unprecedented gun bores, which were such an advantage on WW2 battlefields.

Generally, the domain of powder metallurgy encompasses the production of powdered materials, whether through chemical or mechanical processes, and the creation of useful geometries from those materials by applying pressure and introducing heat. The method is applied with success to ceramics, to composite materials with non-metallic and metallic phases, and to polymers, both natural and of petrochemical origin. Powder metallurgy and sintering are growing, dynamic disciplines of materials science: new methods such as selective laser micro-sintering (Journal of Physics: Conference Series, volume 276, ISSN 1742-6588), spark plasma sintering and hot isostatic pressing (Journal of Alloys and Compounds, volume 504, pages S323–S327, ISSN 0925-8388) are constantly being developed, and the influence of this type of production is growing. Many do not appreciate the significance of powder technologies as the “workhorse” of high-tech industry, especially in terms of manufacturing processes that involve machining high-hardness materials, mainly metals, which would not be possible without ultra-hard machining tools made from powders. Powder technologies allow a more uniform, homogeneous material structure and a significantly higher level of control over the composition of the produced material or part.

Also, because the process frequently resembles plastic moulding in terms of the actual technology and even the simulations of the process, as in powder injection moulding (International Journal of Powder Metallurgy, volume 46, ISSN 0888-7462), a multitude of geometries can be created from materials that would previously have required machining and would therefore have been limited in the scope of shapes that could be created; powder technologies thus solve a multitude of technological problems. The most recent developments in powder technologies have been in the 3D-printing sector, where a multitude of materials are used in powdered form, from various plastics to metals and metal compounds such as those of aluminium. The nanotechnology of the new century is also considered part of powder metallurgy. The technology is indispensable in the production of the porous materials used, though not exclusively, in the oil and chemical industries as filters and other parts of their production processes. The ease of use, the high degree of control over porosity and, moreover, the significant control over grain size and growth, which allows pores from microscopic size up to even 100 mm to be uniformly distributed and created, make the technology indispensable in the creation of the self-lubricating porous bearings that are omnipresent and taken for granted, and that are also used in sophisticated machines in the nuclear and aeronautical industries.

2.2 Description of different metal clay products

Metal clay is a powder metallurgy derivative that incorporates all the standard powder procedures but with much reduced technical requirements. As in the normal sintering process, heat and pressure are applied, but at significantly lower intensity, which allows the technology to be used by craft jewellers, artists and enthusiasts. The clay consists of finely powdered metal and an organic, non-toxic binder; it is sold ready to use in sealed packaging that keeps the material at the required humidity and consistency, allowing it to be worked on immediately. Various materials are used in the method: platinum, gold, silver, bronze, copper or even steel. There are two main types of metal clay on the market:

a) Precious Metal Clay (PMC)

The first to be developed, in Japan in 1990, by Masaki Morikawa (U.S. Patent 5,328,775), working for the Mitsubishi Materials Corporation. Initially the technology was developed using solid-phase sintered gold and later adapted to use silver. The initial formula, called PMC Standard, had a significant drawback caused by the necessity of firing pieces in kilns: it required a temperature of 900 °C for binding to occur, and a huge 30% shrinkage made it difficult to create well-fitting jewellery pieces. These limitations led to the development of PMC+, which could be fired at 810 °C with shrinkage in the region of 15% and also allowed a piece to be fired using a hand-held torch. The last version developed was PMC3, which has the same shrinkage as PMC+ and can also be fired with a hand-held torch, but lowers the required sintering temperature to 599 °C. Mitsubishi also manufactures platinum and gold varieties of PMC, but those are not obtainable outside Japan.
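The quoted shrinkage figures translate directly into how much a model must be oversized before firing. A small sketch, assuming the shrinkage is linear in each dimension and using an invented target ring diameter purely for illustration:

```python
def oversize_for_shrinkage(target_mm: float, shrinkage: float) -> float:
    """Size a metal-clay model so the fired piece shrinks to the target dimension.

    Linear shrinkage s means final = model * (1 - s), so model = target / (1 - s).
    """
    return target_mm / (1 - shrinkage)

ring_diameter = 19.0  # hypothetical target inner diameter after firing, in mm
for name, s in [("PMC Standard", 0.30), ("PMC+", 0.15), ("PMC3", 0.15)]:
    print(f"{name}: model the ring at {oversize_for_shrinkage(ring_diameter, s):.1f} mm")
```

At 30% shrinkage the model must be made almost a third larger than the finished piece, which illustrates why sizing PMC Standard jewellery accurately was so difficult.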

b) Art Clay Silver (ACS)

Developed in Japan by AIDA Chemical Industries. In comparison to PMC, the technology developed by AIDA has a significantly lower shrinkage, in the region of 8–10%, and is readily available as gold, silver, bronze and copper clay. The standard range requires a sintering temperature of 800 °C. The company developed a range of slow-dry clays that allow a prolonged working time without loss of plasticity or cracking. The lower-sintering-temperature Art Clay 650 is available in slow-dry and standard versions and can be fired at 650 °C for 30 minutes or at 780 °C in just 5 minutes. The company offers supplementary products in the forms of syringe clay, paste, overlay paste, oil paste, paper clay and gold foil. It has also recently developed Art Copper Clay, which can be fired using a kiln or a torch without the necessity of baths of activated carbon to protect the material from oxidation.

2.3 Detailed description of the PMC method based on the first patent (U.S. Patent 5,328,775)

After extensive research focused on obtaining a product that would contain no by-products or residues of the binder after completion, development came to fruition when, by adding water to a cellulose powder, researchers created a “jelly-like cellulose”. Both components are removed from the final product: the water by evaporation, and the cellulose by burning out during high-temperature sintering, leaving a silver piece of .999 purity. To prevent the mixture from adhering to surfaces or to the modeller’s hands, an additive of di-n-butyl phthalate is mixed into the compound; this additive is also removed during the sintering phase of the process.

Amounts of the components, specific properties of the mixture, and the justification of those quantities:

Cellulose is a polysaccharide consisting of glucose chains of varied length. The main justification for the use of cellulose as a binder is its non-toxicity and the assurance of total dispersion during the sintering process, as cellulose reduces to CO2 and water when it burns. In the solution, ethyl and methyl cellulose are used. The cellulose is mixed with water to create the jelly substance in proportions from 5/95 to 30/70 parts cellulose/water. The solution should then be mixed with the sintered metal at 0.8–8% of total mass. As the main property of the cellulose used is its gelling on heating, it was discovered that a quantity of less than 0.8% negates that quality, while a mixture containing more than 8% of the jelly cellulose would have so low a viscosity that the mould would not hold its shape.
Non-stick and surface-active additives
A surface-active additive is a substance that breaks down the solid residue of the reaction of cellulose with water; it should be added at 0.03–3% of the cellulose mixture’s mass. These boundaries are justified by the facts that a solution containing less than 0.03% of a surface-active additive would not benefit from the qualities the additive grants, while solutions with 3% or more develop a viscosity too high to allow easy moulding. The patent lists “alkyl benzene sodium sulfonate” and “polysoap” as preferred substances. For adherence prevention, an agent such as an oil or fat should be added, in the region of 0.1% to 3% of total mass, as quantities below that threshold would not yield the desired qualities, while a higher content would make the mixture oily and greatly impair handling. The patent lists higher organic acids and esters such as phthalic acid and di-n-butyl phthalate; higher alcohols can also be used as adherence-prevention agents.
Precious Metal Powders (PMP)
PMP is produced using the “gas atomising” or “water atomising” processes explained in detail further on; owing to the chemically passive qualities of precious metals, the process of submerged reduction cannot be used, except for the production of gold powders. PMP can be used as a pure single metal, as in the case of the silver used in the presentation, or as an alloy, mainly in the production of 18k gold clay. Alloys are used to obtain specific colours and specific mechanical and sintering qualities; the addition of copper, for instance, results in a red tint in a gold product. PMP was found to be useful when mixed with the binder at 50–90% of total mass. The justification, derived from the Mitsubishi team’s experiments, is that below 50% the sintering process would not occur, while at 90% and above the plasticity and strength of the mixture decline to an extent that does not allow modelling. Another important factor is grain size: it was discovered that the average grain size must be smaller than 200 µm, as larger grains produce similar unwanted qualities to using more than 90% PMP.

2.4 Manufacture technologies of Metal Powders

Atomisation is a process for producing metal powders on a commercial mass scale, widely regarded as the most effective way of producing large amounts of powdered material. The process starts by melting the metal in an induction furnace (other types of furnace can be used, but the induction furnace has proven the most effective). Depending on the setup, and on whether a constant flow can be achieved or “batch” production is required, the liquid metal is fed through a tundish, which controls the steady flow of metal to the atomising vessel as a steady stream or as a dispersion through a nozzle. The stream of liquid metal is then bombarded by the atomising gas or liquid, and the stream or dispersion is further dissipated into a fine powder.

2.5 Sintering

As the process is constantly being developed, the exact definition can be argued over, but the general consensus in the literature is that sintering is a process used to create density-controlled products from powders by the application of thermal energy and pressure, where adhesion occurs below the melting point of the processed material. It can be separated into two main categories, solid-state sintering and liquid-state sintering; other types, such as “transient liquid phase sintering” and “viscous flow sintering”, can also be distinguished. The relation between the different subgroups of the process is visualised in the figure below.

Figure: Sintering classification in alloys. Source: Sintering: Densification, Grain Growth, and Microstructure, Kang, S.-J. L. (Suk-Joong L. Kang).

2.6 The Experiment

3. Composites

4. Coating Technology and Fluidic Coating of Steel

4.1 Introduction

In the technology of anticorrosive protection, plastics find a multitude of uses, mainly for economic reasons. For the corrosion protection of metals, plastics are used in multiple forms, such as sheets and laminates (applied directly to the protected surface), pastes (coating by gas-flame diffusion or by melting the coating at high temperature), powders and foils. The most popular methods of powder coating are fluidisation and electrostatic coating, and to a much smaller degree flame or flameless spraying. In all these methods the powder is melted on the surface of the coated object to create the coating. The process takes place at atmospheric pressure, and heat is delivered to warm the object before the coating process in the fluidisation method, or after the powder has been applied, as in the electrostatic method.

4.2 Coating materials

Coating materials can be used in the form of pastes, dispersions, liquids, plastics and other material mixtures not containing a solvent. The materials applied create a layer of a specific thickness, bonded tightly and displaying a specific set of physical, mechanical and chemical properties. A substance used to create coatings must therefore fulfil these requirements:

a) From the liquid state, it must be possible to create a solid state by a simple physical or chemical process.

b) In the liquid state, it must display the ability to wet the surface; it should have a specific process and time of solidification (gelling, hardening, drying), a suitable consistency and viscosity, and the ability to adhere.

c) In the solid state, it must adhere to the surface and display cohesion, elasticity, hardness, resistance to ageing and environmental resistance.

The main components of coating materials are thermoplastics and plastics that solidify at higher temperatures. Such materials may also contain stabilisers, pigments or plasticisers.

4.3 Methods of preparation of the coated surface.

Adequate surface preparation is pivotal to the performance of any coating. The surface of an object on which a coating is to be applied should be prepared beforehand. Preparation consists of cleaning off all contaminants and reducing smoothness, as adhesion is much better on rough surfaces. The surface must be cleaned of contaminants such as millscale, rust, salts and pre-production compounds by hand or power tools, and of greases and oils in the process collectively called degreasing. Degreasing involves wiping, which must be done immaculately, as a later solvent bath would otherwise spread the contaminant in a fine film rather than remove minimal amounts of it. Baths can take various forms, from steaming to emulsions and solid-compound baths; in the case of the samples provided, ammonia was used. Blast-cleaning processes such as “sand blasting” and the various forms of “hydroblasting” are widely used in industry because of their effectiveness and economy, and the ability to prepare a part fully for production using one process is a great benefit too. Finally, the profile of the surface is to be “cleaned” to reduce the possibility of corrosion starting in a “seeded” manner in an area of unsatisfactory geometry that would impede the adherence of the coating.

Surface preparation must be conducted in accordance with ISO 8504:1992(E).

4.4 Applying the coating using the fluidisation method.

The flow of the process is shown in table XXY. The powder is contained in a vessel with a porous bottom. If a steady flow of compressed air is introduced, at a certain moment an expansion of the load occurs: it reaches the point of loosening, and the particles begin to move and flow between each other. The fluidisation process depends on the ability to create a suspension of solid bodies in a stream of gas.

4.5 Disturbances of fluidisation

The basic disturbances are:

– a state in which phases of different viscosities can easily be distinguished

– a process fault in which bubbles of air are visible

– a state in which the suspension separates into layers

– a state in which the air creates streaks in the suspension

4.6 Mechanism of the creation of the coating

The creation of the coat on a metal surface by the fluidisation method is the result of contact between the plastic and the heated surface of the metal object. In a steady stream of particles the process continues until a substantial decrease in temperature, that is, until the heat of the object is no longer sufficient to melt and adhere any more particles.

In the creation of a coat in a fluidised medium we can distinguish three stages:

a) Creation of a layer of single particles that melt on direct contact with the heated surface.

b) Growth of the thickness of the coating, as melting occurs at the point of contact between the particles and the already-melted coat. The growth factor in this phase is the ability to conduct heat through the melted coating.

c) Halt in the growth of the thickness of the coating, due to the loss of heat from the object and the high thermal resistance of the plastic.

The course of the temperature change in the coat is displayed in Fig. X, where t1 represents the temperature of the coating material, t2 the temperature of the heated surface and t3 the melting point of the compound. While the temperature satisfies t2 > t > t3, growth of the coating occurs. The growth halts sharply when t falls below t3.
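The growth window t2 > t > t3 can be made concrete with a toy cooling model. This is only a sketch: it assumes simple Newtonian cooling of the part, and the cooling law, time constant and temperatures are illustrative assumptions, not values from the exercise:

```python
import math

def coating_growth_time(t2: float, t3: float, ambient: float, tau: float) -> float:
    """Time during which the part stays above the polymer melting point t3.

    Assumes Newtonian cooling: t(time) = ambient + (t2 - ambient) * exp(-time / tau).
    Coating growth occurs while t3 < t(time) <= t2, i.e. until t(time) drops to t3.
    """
    if not t2 > t3 > ambient:
        raise ValueError("need t2 > t3 > ambient")
    return tau * math.log((t2 - ambient) / (t3 - ambient))

# Part preheated to 320 C, polymer melting at 180 C, 20 C shop air,
# cooling time constant of 60 s -- all numbers purely illustrative:
print(f"growth window ~ {coating_growth_time(320, 180, 20, 60):.0f} s")  # ~ 38 s
```

Whatever the real cooling law, the qualitative picture matches stage (c) above: once the part's temperature crosses t3, no further particles can melt and the thickness stops growing.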

4.7 Apparatus and equipment used to apply coatings

The apparatus used in the exercise is a double-bottomed vessel with solid and porous bottoms, through which the compressed air is pumped. It is possible to use an inert gas such as nitrogen or CO2 to negate the harmful influence of contact between the heated surface and oxygen, as corrosion occurs much faster at elevated temperature. For practical and economic reasons, the most frequently used fluidising gas is compressed air from various sources.

4.8 Flaws and defects of coatings

– change of colour: occurs as a result of overheating

5. Ultrasonic Testing

6. Hardness Testing



Structure–Based Design and Synthesis of 2-Benzylidene-benzofuran-3-ones as Flavopiridol Mimics


Flavopiridol, a well-established inhibitor of cyclin-dependent kinases (CDKs), is currently undergoing clinical trials. The inhibition of CDKs, which are involved in the cell division cycle, is a vital goal for anticancer agents, so potent drugs that can selectively inhibit them are crucial. One aim is to synthesise 2-benzylidene-benzofuran-3-ones that are more potent at inhibiting some of the CDKs but also selective in nature, something which flavopiridol lacks: to date, flavopiridol has been found to inhibit CDKs 1, 2 and 4 all with the same potency.

Flavopiridol acts on the CDK2 enzyme by mimicking the actions of the purine group of ATP. It is the keto and hydroxyl groups of the compound that form the same bidentate hydrogen bonds with the backbone of the CDK2 residues as the nitrogen atoms of the purine group of ATP do. With this information it was clear that variants of the benzofuranone structure would be able to mimic the interactions that flavopiridol has with CDKs. The main objective of testing structural analogues was to obtain new CDK inhibitors that are more selective in discriminating between CDK2 and CDK4. Different substituents were attached to the phenyl ring, with modelling suggesting that a hydrogen-bond acceptor group in the para position would interact favourably, whereas a bulky, positively charged para substituent would have a detrimental effect on CDK2 inhibition but not on CDK4.

Experimental Procedure

The derivatives of benzofuranone are synthesised in a sequence of reactions starting with the acid-catalysed condensation of a dimethoxy phenol and 1-methyl-4-piperidone in acetic acid. This produces an unsaturated derivative in 62% yield, which is then followed by hydrogenation and treatment with chloroacetyl chloride and aluminium chloride to produce a derivative in 50% yield. The compound is then condensed with 4-bromobenzaldehyde to produce a derivative in 41% yield. Finally, the aryl bromide derivative is reacted with 1-methylpiperazine and pyridinium hydrochloride to produce, in 40% yield, a benzofuranone with one R group being the methylpiperazine and the other being hydrogen (8). Compound 8 and four others with different R groups were tested in kinase inhibition assays to see their effect on CDKs 1, 2 and 4.
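The quoted step yields compound multiplicatively, so the overall yield of the four-step route is easy to check:

```python
from functools import reduce

# Step yields from the route described above:
# condensation (62%), hydrogenation/acylation (50%),
# condensation with 4-bromobenzaldehyde (41%), amination (40%).
step_yields = [0.62, 0.50, 0.41, 0.40]
overall = reduce(lambda a, b: a * b, step_yields)
print(f"overall yield ~ {overall * 100:.1f}%")  # prints "overall yield ~ 5.1%"
```

An overall yield of roughly 5% over four steps is unremarkable for a medicinal-chemistry route, since even moderate per-step losses multiply quickly.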


The derivative with both R groups being hydrogens (compound 4) proved as potent as flavopiridol at inhibiting CDK1, but was less effective against CDK2 and CDK4. This variation in potency shows that there was a difference in sensitivity between the enzymes; moreover, it demonstrated that the synthesized benzofuranone derivatives, while potent, retained the high selectivity that was desired. When a chlorine atom was added at the ortho position on the phenyl ring, potency generally fell against all the CDK enzymes, because the chlorine introduces steric hindrance that prevents the favourable conformation from being adopted. Two other compounds, bearing SO2NH2 and NO2 respectively as their R groups, proved more potent than the first derivative at inhibiting CDK1 and CDK2. Both contained hydrogen-bond acceptor groups in the para position, which interacted favourably with the conserved lysine residue in the enzyme; the lower potency against CDK4 was due to the lack of conservation of that lysine. The slight increase when a sulphonamide group was present, as opposed to a nitro group, results from its additional capacity to donate hydrogen bonds. Compound 8 was again more potent than the derivative with two hydrogens as the side chains. This was more prominent in the inhibition of CDK1, where there seemed to be a greater repulsive interaction with the lysine at position 89. There was no noticeable difference in potency between compounds 8 and 4, because compound 8 is able to access the free solvent space created by the replacement of the lysine at position 89 by a threonine.


The compounds obtained from the generalised structure of 2-benzylidene-4,6-dihydroxy-7-benzofuran-3-ones showed inhibitory characteristics typical of flavopiridol mimetics. Through careful manipulation of the structure it was possible to increase inhibitory potency against the CDK1 and CDK2 enzymes and to obtain selectivity over CDK4, owing to the lack of conservation of lysine. However, it was often difficult to increase inhibition of CDK4 because the shape of its ATP binding site prevented favourable interactions with the ligand from occurring.


An Examination of the Requirements of the Construction (Design and Management) Regulations 2007


This study focuses on the key requirements of Construction (Design and Management) 2007 (CDM 2007) and on its practical application. In particular, the demands and challenges faced by small and medium-sized enterprises (SMEs) are explored, in order to gain a clear understanding of how successful implementation may be affected by scale of operation, if at all.

The way in which SMEs operate generally is considered, as is the impact that CDM 2007 makes on their infrastructure, because of its legal and operational requirements. The experiences of the implementation of CDM 2007 by larger corporations provide indications as to whether some aspects of complying with CDM may cause difficulties for smaller-scale organisations. Acknowledged successes and existing difficulties are identified, and suggestions made as to the possible further development of CDM 2007.

1.0 Introduction
Theme of the Study

This study focuses on the key requirements of Construction (Design and Management) (CDM) 2007 and on its practical application. In particular, the demands and challenges faced by small and medium-sized enterprises (SMEs) are explored, in order to gain a clear understanding of how successful implementation may be affected by scale of operation, if at all.

The way in which SMEs operate generally is considered, as is the impact that CDM 2007 makes on their infrastructure, because of its legal and operational requirements. The experiences of the implementation of CDM 2007 by larger corporations provide indications as to whether some aspects of complying with CDM may cause difficulties for smaller-scale organisations. Acknowledged successes and existing difficulties are identified, and suggestions made as to the possible further development of CDM 2007.

Definition and Description of SMEs

Since 1st January 2005, the European Commission (Enterprise and Industry) has defined micro, small and medium-sized enterprises according to the following criteria:

“In addition to the staff headcount ceiling, an enterprise qualifies as an SME if it meets either the turnover ceiling or the balance sheet ceiling, but not necessarily both.” (European Commission, 2011).

Enterprise category    Headcount    Turnover          or    Balance sheet total
Medium-sized           < 250        ≤ €50 million           ≤ €43 million
Small                  < 50         ≤ €10 million           ≤ €10 million
Micro                  < 10         ≤ €2 million            ≤ €2 million
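The Commission's rule (the headcount ceiling is mandatory; of the two financial ceilings, only one need be met) can be expressed as a simple check. A minimal sketch, using the thresholds quoted above; the function name is illustrative, not part of any official tool:

```python
# EU SME test: an enterprise must satisfy the headcount ceiling AND at least
# one of the two financial ceilings (turnover OR balance sheet total).
# Thresholds in millions of euros, per the 2005 Commission definition.
CATEGORIES = [  # (name, headcount <, turnover <=, balance sheet <=)
    ("micro", 10, 2, 2),
    ("small", 50, 10, 10),
    ("medium-sized", 250, 50, 43),
]

def sme_category(headcount, turnover_m, balance_sheet_m):
    """Return the smallest category the enterprise fits, or None if too large."""
    for name, max_head, max_turnover, max_balance in CATEGORIES:
        if headcount < max_head and (turnover_m <= max_turnover
                                     or balance_sheet_m <= max_balance):
            return name
    return None

# A firm of 40 staff with EUR 12m turnover but an EUR 8m balance sheet
# still qualifies as small: it meets one of the two financial ceilings.
print(sme_category(40, 12, 8))  # small
```

The either/or treatment of the financial ceilings is the point of the quoted passage: exceeding one financial threshold does not disqualify an enterprise so long as the other is met.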

It was important for the European Commission to lay down guidelines for SMEs that could be applied throughout the European Community, as there are certain policies in effect within the European Economic Area that are beneficial to SMEs, such as exemption from some of the rules and regulations affecting large corporations. Statistically, these smaller organisations constitute the vast majority of all European Union (EU) businesses:

“Micro, small and medium-sized enterprises are socially and economically important, since they represent 99 % of all enterprises in the EU. They provide around 90 million jobs and contribute to entrepreneurship and innovation. However, SMEs face particular difficulties, which the EU and national legislation try to address by granting them various advantages. The application of a common definition by the Commission, Member States, the EIB [European Investment Bank] and the EIF [European Investment Fund] ensures consistency and effectiveness of those policies targeting SMEs and, therefore, limits the risk of distortions of competition in the Single Market.” (European Commission, 2011).

The EU recognises that SMEs are a vital part of the economy and also that they face specific difficulties. It has introduced some schemes to provide extra assistance for smaller businesses, including access to Structural Funds and the Framework Programme for Research and Development, as well as exemption from some of the restrictions attached to State Aid.

Summary of CDM 2007

The United Kingdom’s Health and Safety Executive (HSE) publishes a summary of the requirements of CDM 2007 on its website. The rules came into effect on 6th April 2007 and are legal requirements. They include those that must be adhered to for all construction projects, excluding projects commissioned by domestic clients (Part 2), and those that apply to ‘notifiable projects’ (Part 3). Notifiable projects are defined as:

“… those lasting more than 30 days or involving more than 500 person days of construction work.” (HSE Legal Requirements, 2011).
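The ‘notifiable’ threshold quoted above is a simple either/or test on duration and labour input; a minimal sketch (the function name is illustrative):

```python
# A project is 'notifiable' under CDM 2007 if construction work lasts more
# than 30 days OR involves more than 500 person-days of construction work.
def is_notifiable(duration_days, person_days):
    return duration_days > 30 or person_days > 500

# A 10-day job using 60 workers (600 person-days) is still notifiable,
# even though it finishes well inside 30 days.
print(is_notifiable(10, 600))  # True
print(is_notifiable(25, 400))  # False
```

Because either limb triggers notification, a short but labour-intensive project carries the full Part 3 duties just as a long-running one does.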

The CDM 2007 requirements for all construction projects are as follows:

Clients (excluding domestic clients) must:
Check competence and resources of all appointees
Ensure there are suitable management arrangements for the project, including welfare facilities
Allow sufficient time and resources for all stages
Provide pre-construction information to designers and contractors
Designers must:
Eliminate hazards and reduce risks during design
Provide information about remaining risks
Contractors must:
Plan, manage and monitor own work and that of workers
Check competence of all their appointees and workers
Train own employees
Provide information to their workers
Comply with the specific requirements in Part 4 of the Regulations
Ensure there are adequate welfare facilities
All workers concerned with a project must:
Check own competence
Co-operate with others and co-ordinate work so as to ensure the health and safety of construction workers and others who may be affected by the work
Report obvious risks

Summary: the designer must ensure risks and hazards are eliminated at the design stage, before construction work starts; contractors must check competences, provide training and comply with Part 4 of the Regulations, which contains practical requirements that apply to all construction sites; obvious risks must be reported.

The CDM 2007 requirements for ‘notifiable projects’ are as follows:

Clients (excluding domestic clients) must:
Appoint CDM co-ordinator*
Appoint principal contractor*
Make sure that the construction phase does not start unless there are suitable welfare facilities and a construction phase plan is in place.
Provide information relating to the health and safety file to the CDM co-ordinator
(* There must be a CDM co-ordinator and principal contractor until the end of the construction phase)
CDM co-ordinators must:
Advise and assist the client with his/her duties
Notify HSE
Co-ordinate health and safety aspects of design work and co-operate with others involved with the project
Facilitate good communication between client, designers and contractors
Liaise with principal contractor regarding ongoing design
Identify, collect and pass on pre-construction information
Prepare/update health and safety file
Designers must:
Check client is aware of duties and CDM co-ordinator has been appointed
Provide any information needed for the health and safety file
Principal Contractors must:
Plan, manage and monitor construction phase in liaison with contractor
Prepare, develop and implement a written plan and site rules (Initial plan completed before the construction phase begins)
Give contractors relevant parts of the plan
Make sure suitable welfare facilities are provided from the start and maintained throughout the construction phase
Check competence of all appointees
Ensure all workers have site inductions and any further information and training needed for the work
Consult with the workers
Liaise with CDM co-ordinator regarding ongoing design
Secure the site
Contractors must:
Check client is aware of duties and a CDM co-ordinator has been appointed and HSE notified before starting work
Co-operate with principal contractor in planning and managing work, including reasonable directions and site rules
Provide details to the principal contractor of any contractor whom he engages in connection with carrying out the work
Provide any information needed for the health and safety file
Inform principal contractor of problems with the plan
Inform principal contractor of reportable accidents, diseases and dangerous occurrences
All workers concerned with a project must:
Check own competence
Co-operate with others and co-ordinate work so as to ensure the health and safety of construction workers and others who may be affected by the work
Report obvious risks

Summary: requirements are considerably more onerous; the CDM co-ordinator must look after health and safety concerns, and welfare facilities must be provided from the start and throughout the construction phase; a written plan and site rules must be in place; obvious risks must be reported.

What the study seeks to achieve

The research focuses on determining whether the key requirements of CDM 2007 are viable for all SMEs, or whether the steps required for its practical application constitute a burden to some. If the latter is the case, the reasons why this is so, and evidence of the difficulties faced by SMEs, will be recorded and analysed. Further, the potential for making improvements to current requirements can be identified and described.

How the dissertation is structured

The Introductory chapter (Chapter 1) outlines the theme of the study and what it seeks to achieve; introduces the concepts of CDM 2007 and SMEs, and describes their parameters. The Literature Review (Chapter 2) examines how the responsibilities of CDM 2007 are divided between participants in construction projects, and how monitoring and evaluation takes place. It explains why CDM is needed and what the health and safety implications are. Key areas of research undertaken thus far are summarised and the research question is explained. The Methodology section (Chapter 3) details the secondary research and the advantages and disadvantages of CDM 2007 for SMEs. The limitations of the research are explained. Research findings are discussed (Chapter 4); conclusions are drawn and recommendations made (Chapter 5).


What the literature review considers

A number of research projects have examined the causes of injuries on construction sites and the impact of CDM 2007, and several inquiries have investigated why deaths occur among construction workers engaged in building projects. One key document in this latter respect is the report by Rita Donaghy (2009) to the Secretary of State for Work and Pensions, One Death is too Many, which looked at the underlying causes of fatal construction accidents. The HSE’s evaluation of CDM 2007 is studied for the feedback it provides from 565 construction professionals.

CDM (2007): who carries responsibilities?

All Workers

All workers on construction projects are required to ensure their personal skills and competences are sufficient to the job they are doing. They must work collaboratively as a team, ensuring that the health and safety of all workers is never compromised. All obvious risks must be reported. In all cases, clients, designers and contractors must contribute relevant information to the health and safety file. None of this is particularly onerous or surprising – indeed it represents best practice in any place of work.

Clients

Clients on non-domestic projects have responsibilities to choose contractors wisely, by checking the competence and resources of all appointees. They must also check that suitable facilities are in place to cater for staff welfare – for example, toilet facilities, washing facilities and adequate refreshment facilities. For notifiable projects these must be in place before building works start on site. Clients must not pressurise contractors or construction workers to undertake work to unrealistic deadlines or with insufficient or sub-standard resources; instead they must ensure that designers and contractors have received all the relevant information pre-construction. For ‘notifiable contracts’ of more than 30 days’ duration the client must appoint a CDM co-ordinator and a Principal Contractor, and must also provide health and safety information to the former.

Designers

For standard, short contracts, designers must check the client is aware of their duties and have reduced risks and eliminated hazards at the design stage. For notifiable projects, designers must check that a CDM co-ordinator has been appointed and provide any relevant safety information.

Contractors

For standard, short contracts, contractors are obliged to train and manage their own workforce, as would be expected, and to ensure welfare facilities are available. The practical requirements they have to observe, detailed in Part 4 of the Regulations, are comprehensive, including duties that relate to health and safety on construction sites. Part 4 includes instructions covering items such as how to ensure a site is safe and that structures are stable; handling dismantling and demolition; site security; managing the use of explosives; conducting excavations; dealing with water and energy supplies; vehicles and routes for traffic; preventing, detecting and fighting fires; emergency procedures; temperature control, fresh air and protection from the weather; lighting and filing reports (The Construction (Design and Management) Regulations, 2007).

For notifiable projects there are responsibilities assigned to both the Principal Contractor and other contractors. The former must instigate written plans for the construction work to be distributed to contractors, carry out inductions and liaise with the CDM co-ordinator. Other contractors must ensure HSE has been informed before they start work, and co-operate and liaise with the principal contractor and the CDM co-ordinator. Contractors have to indicate any problem areas that they perceive with the planned design and report accidents, staff illnesses or any dangerous conditions to the Principal Contractor.

CDM Co-ordinators

For notifiable projects only, the CDM Co-ordinator has the key health and safety role. They are guardians of the client – assisting and advising them of the responsibilities pertaining to their role; they must also notify HSE and co-ordinate all the safety aspects of the implementation of the design work in co-operation with others. CDM Co-ordinators are communicators who liaise between contractors, designers and the client. It is their responsibility to prepare and/or update the health and safety file.

How Well is CDM Managed and Policed?

It can be seen that Principal Contractors (or contractors on smaller, shorter contracts) theoretically carry the main burden of responsibility for what happens on site – although designers also have health and safety responsibilities, jointly with the contractor (Morrow, 2010). In her examination of the industry Rita Donaghy remarked:

‘The responsibility for safety already lies clearly with the contractor and this responsibility needs to be further clarified in order to raise standards and assist the courts when considering alleged breaches of health and safety’ (Donaghy, 2009, p. 11, Section 19).

One of the difficulties faced by the industry in the light of the safeguards apparently built in to CDM 2007, is how and why fatal construction accidents are still happening, a fact that Donaghy describes as the ‘regular toll of fatalities’ (p 11, Section 16). She notes:

‘It is a disgrace that we have such a low level of reporting serious accidents, let alone near-misses’ (p. 16, Section 40).

A further concern is the ‘built-in delays in the system leading to prosecution and conviction or other outcomes on construction fatal accidents’ (Donaghy, 2009, p. 14, Section 33). Prior to CDM 2007, the HSE had produced a report on the causes of construction accidents, which concluded that:

‘… achieving a sustained improvement in safety in the industry will require concerted efforts directed at all levels in the influence hierarchy’ (HSE, 2003, Summary).

However Donaghy’s definitive report, coming two years after the introduction of CDM, plus the ongoing fact of construction accidents, makes it clear that CDM may not be being either well managed or adequately policed in some sections of the construction industry.

Health and Safety Issues

The long list of areas where contractors have duties and responsibilities (Section 2.2 Contractors above) gives a clear indication of some of the kinds of accidents that can occur on a construction site, which is why the provisions outlined in CDM 2007 are needed. There are accidents recorded in the UK virtually on a daily basis – the more horrific ones make it to the headlines, for example:

‘Company fined after workers engulfed in electrical fireball’, Construction News, 9th December 2011

‘Plasterer in coma after 6ft ladder fall’, Construction Enquirer, 9th December 2011

‘Four die in accident at engineering firm’, The Telegraph, 22nd January 2011

Amongst those workers deemed to be most vulnerable on construction sites are migrant workers, young workers – perhaps apprentices or casual labourers – and more experienced workers aged 55–60 years who might be inclined to take more risks (Donaghy, 2009).

Summary of research into effects/implications so far

Donaghy (2009) cites a long list of studies considered as background research for her report (pp. 90–94) including documents prepared by HSE. Studies have been undertaken by independent academics, for example Muddiman, A. (2001) and Deakin and Koukiadaki (2007). Work has been carried out by trades unions, such as UCATT (2006) and the TUC (2007). All of this research work has focused on the health and safety issues in construction workplaces. Some have investigated vulnerable workers, such as the Department for Business Enterprise & Regulatory Reform (BERR) (2008) and others have specifically studied migrant workers, including Irwin Mitchell Solicitors and the Centre for Corporate Accountability (2007).

Despite the breadth of the research and the abundance of safety regulations that apply to the construction workplace (Donaghy, 2009, p.95 gives examples but does not provide an exhaustive list) the key impression gained from participants in Donaghy’s research is that regulations are not always followed or enforced.

‘Many stakeholders, particularly trade unions, some academics and bereaved families, feel strongly that self-employment, whether genuine or bogus, adds to the risk in the industry because self-employment is such a high proportion of the total. In London it is approaching 90% … There seems to be more agreement that “the problem is particularly acute in the South and London where self-employment constitutes 89% of firms and migrants form 42% of the workforce”.’ (Donaghy, 2009, pp. 34–35).

Several research studies have looked into the effectiveness of CDM 2007, including a pilot evaluation conducted by HSE:

‘The pilot evaluation showed that there are positive signs in terms of CDM 2007 meeting its objectives, with evidence of three being met and two being partially met. However, some respondents have concerns about the effectiveness of CDM 2007 in: Minimising bureaucracy; Bringing about integrated teams; Bringing about better communications and information flow between project team members; and Better competence checks by organisations who appoint other duty holders’ (HSE, 2011).

In March 2010 Pye Tait Consulting carried out research into CDM 2007 from the client’s perspective (Pye Tait Consulting, 2010) and Susan Morrow presented a paper at an international conference in Paris about the impact of CDM 2007 from a designer’s point of view (Morrow, 2010).

Origins of the research question

The findings from Donaghy’s study, the HSE and the work of others, indicate that safety failures might be more inclined to occur in smaller firms where self-employment is the norm. This raises the important question as to whether, for all that CDM 2007 is important and should be supported at every level, it is meeting the needs of all SMEs – particularly those at the smallest end of the scale – and, if not, how it could be improved?

Secondary research

The principal documents that have been consulted in this assessment are the HSE’s RR845 – Evaluation of Construction (Design and Management) Regulations 2007: Pilot study (2011) and Rita Donaghy’s One Death is too Many (2009). Two studies were consulted for details and perspectives from specific professionals: Pye Tait Consulting’s CDM – The Client Voice (2010) and Susan Morrow’s Does CDM enhance the designer’s role to safer and healthier construction (2010).

The findings of Ben Williams’ dissertation, The CDM Regulations 2007: the cost of health and safety management for small and medium enterprises in the South East (2010), were consulted in order to understand some of the cost implications of CDM 2007 for SMEs, albeit in a specific geographic area.

Advantages and disadvantages of CDM

From the studies already conducted it emerged that some of the HSE’s own evaluation criteria for CDM 2007 had been met, but not all. The joint responsibility of designers and contractors is an advantage of CDM 2007, in that it encourages good communication and co-operation, as there is joint liability. CDM 2007 works well for architects and engineers where they have a high level of training or experience in construction and where the CDM Co-ordinator is involved at an early stage. Where this is not the case, however, designers may find it difficult to carry out comprehensive risk assessments, and the amount of paperwork required by CDM 2007 can put a strain on the relationship between time (costs) and safety (Gambatese et al., 2009).

Project teams are encouraged by CDM 2007 to be competent and to work collaboratively, and this can only be good for safety. It also advocates the collection of data on safety and ongoing monitoring and evaluation of health and safety issues. CDM 2007 has had positive effects on construction safety in the larger companies (Donaghy, 2009, p. 26), and the cost issues associated with implementation were felt to be modest enough to be acceptable (HSE, 2011). At the same time, it has been acknowledged that CDM 2007 has been less successful among smaller companies – for example, there are reports that, even when a Principal Contractor is on site, many sub-contractors claim ‘they carry the burden of financial and health and safety risk even if the Principal Contractor still has legal responsibility for the project’ (Donaghy, 2009, p. 29).

The problem areas identified by HSE (2011) represent other disadvantages at the practical application stage of CDM 2007 – levels of bureaucracy, for example, and the fact that project teams are not always wholly integrated or communicating effectively; there is some doubt as to whether competence checks are fully carried out by all organisations, for example when employing migrant workers (Donaghy, 2009, p. 43).


Industry Experiences of CDM 2007: What Works Well and What Causes Problems

The HSE (2011) evaluation derived its survey sample from across the range of construction professionals who were duty holders in the industry (total 565). These included 200 designers, 145 principal contractors and 46 sub-contractors, as well as 103 repeat clients and 16 occasional clients. Frontline Consultants, who carried out the research work on behalf of HSE, had derived five objectives and reported on these in terms of the extent to which each had been met. These are stated in the report as follows (Section 4.2, p. 13):

Simplifying the Regulations to improve clarity – so making it easier for duty holders to know what is expected of them
Maximising their flexibility – to fit with the vast range of contractual arrangements in the industry
Making their focus planning and management, rather than the plan and other paperwork – to emphasise active management and minimise bureaucracy
Strengthening the requirements regarding coordination and cooperation, particularly between designers and contractors – to encourage more integration
Simplifying the assessment of competence (both for organisations and individuals) – to help raise standards and reduce bureaucracy.

The findings of the pilot study suggest that Objective 1 is being met:

‘ … most of the respondents (87%) agreed that CDM 2007 was clearer than CDM 1994, and 96% agree that they clearly understand what their duties are under CDM 2007’ (HSE, 2011, Section 11.3).

Objective 2 is also deemed as being met as respondents confirmed they are using a number of contractual forms for CDM 2007:

‘ … most of the respondents (89%) agree that CDM 2007 can be used with the types of contract used in the construction industry’ (Section 11.3).

Results for Objective 3 were, however, less clear-cut: almost half of the respondents (46%) disagreed that the introduction of CDM 2007 was assisting them in lessening bureaucracy:

‘ … whilst most of the respondents (85%) agree that CDM 2007 assists in managing health and safety’ (Section 11.3).

Objective 4 was another instance where results were mixed, and the researchers concluded that this objective, like Objective 3, had been only partially met. Whilst only around half of the respondents agreed that CDM 2007 has helped bring about integrated teams (48%) and better communications and information flow between project team members (50%):

‘ … a significant majority (ranging from 67% to 81% for the four relevant questions) of the respondents agree that CDM 2007 assists in facilitating coordination and cooperation’ (Section 11.3).

For Objective 5, three quarters of the respondents (76%) agreed that CDM 2007 was helpful in the assessment of the competence of duty holders:

‘ … most (83%) agreed that the client thoroughly assessed the competence of those organisations they appointed to work on the project; and most respondents (86%) agreed that the organisation who appointed them made a good job of assessing the competence of their organisation’ (see Section 11.3).

Impact of CDM 2007 on SMEs

Broadly, then, CDM 2007 was assessed by Frontline Consultants as more accessible than its predecessor, CDM 1994, and health and safety duties were understood; it is fit for purpose and can be used with most types of construction contract; it is assisting with improved health and safety; it helps facilitate co-ordination and co-operation; and methods of assessing competence were deemed to be proficient.

Administrative Duties

One of the less clear results relates to the degree of bureaucracy required to implement CDM 2007 – and although the HSE survey sample was not broken down according to scale of operation, it is likely that SMEs in particular will have found the extra paperwork and administrative systems more onerous than have the larger-scale, better-resourced construction companies. Also, the priorities of SMEs are often located in other areas of their business. Donaghy (2009) notes important differences between what affects larger and smaller businesses:

‘ … it is also important to note the disparity between large contractors and the significant number of small and medium sized enterprises (SMEs) making up the rest of the industry. While some larger companies have embraced the importance of tackling ill health issues it is often a matter of last resort for SMEs who are more focussed on the necessity to ‘make do’ and get the job done. For this group sometimes even the provision of adequate temporary welfare facilities proves a step too far.’ (Donaghy, 2009, p.46)


Similarly, although costs were not felt to be prohibitive by those participating in the HSE research – ‘the costs were viewed as moderate or lower’ (HSE, 2011) – Donaghy (2009) reported that some organisations had spent a great deal of money in order to comply. Further, she reported that although Principal Contractors might be seen to have the responsibility for CDM 2007, implementation was often passed on to sub-contractors:

‘… indeed there is more evidence of negative effects of price and delivery conditions being passed down to these levels [that is, second or third levels in the supply chain], the implication being that the further down the chain you go the more compromised the financial and safety considerations’ (Donaghy, 2009, p. 29).

Ben Williams (2010) expands on this in a study of SMEs in the south east of England, which investigates the implications of CDM 2007 and whether the current recession is having detrimental effects on their management of health and safety. Williams notes the disproportionate number of safety injuries and deaths on sites operated by SMEs, but did not find any ‘significant’ cost implications, nor any evidence that the economic situation had adversely affected health and safety management. Rather, he opined that investing in health and safety on construction sites for all workers is of the utmost importance, and that there is a lack of training and knowledge on health and safety issues among SMEs (Abstract, p. 1).

Collaborative Working

Another problematic aspect of implementation is that, whilst companies overall thought CDM 2007 assisted with co-operation, only up to half thought it had made an impact on improving communications and integration within teams. This is a disappointing finding, as Donaghy noted in 2009 that:

‘Another feature of the construction industry, with a few exceptions, is the absence of pre-planning and integrated teams before work starts on site – points constantly raised by Latham and Egan’ (p. 22).

Donaghy felt team working was important, particularly because of the high level of self-employed construction workers, which she believed was a factor

‘ … in low levels of training, job security, the likelihood of reporting serious accidents or unsafe practices or of encouraging team-working in the industry and all these factors are linked to the underlying causes [that is, fatalities at construction workplaces]’ (p. 36).

Research limitations

The research projects examined to provide secondary data were limited by the scope of the studies. Donaghy (2009) conducted research into 26 recent fatal accidents, Williams (2010) studied SMEs in the south east of England, and HSE (2011) evaluated the opinions of 565 construction professionals about CDM 2007. In considering these three reports, therefore, the individual limitations of each one have to be acknowledged, and as yet no research into the impact of CDM 2007 on the whole of the construction industry in the UK has been undertaken. The research into fatal accidents is most useful for its qualitative data, as statistically it is a small sample. The HSE research does not differentiate between SMEs and larger companies, and the investigation into the costs of CDM 2007 is limited to one geographical region and is not, therefore, representative of the UK as a whole.



CDM 2007 is a suitable and appropriate vehicle for those construction companies that have sufficient resources to deliver it: that is, knowledge and understanding of the key requirements; skilled employees who are adequately trained and inducted into the health and safety procedures; and a desire and capacity to work collaboratively with others.

In particular, architects and designers need to have experience and/or knowledge of the construction industry to carry out an effective risk assessment at the design stage; and Principal Contractors must not seek to offload an inordinate level of responsibility for health and safety to sub-contractors.


There are some costs involved, and the consensus is that these are not prohibitive, although Williams (2010) warns of the need to monitor the selection of contractors by SME clients – where tender prices might have dominance over considerations of competency and health and safety. Also, the additional cost of appointing a CDM Co-ordinator on notifiable projects may place a genuine burden on smaller SME clients, SME contractors, or both.

Scale of Operation

‘Small’ SMEs can have up to 50 employees, which is considered a sizable company in the construction industry; it is most likely at micro level (fewer than 10 employees) that difficulties with the implementation of CDM 2007 will be most keenly felt, amongst the companies that Donaghy (2009) referred to as ‘those below the Plimsoll line’ (p. 11). These companies are unlikely to be able to afford to release employees for training courses, for example, even where grants for training are available (p. 48), or to afford the welfare facilities that are necessary for notifiable contracts. In the same way, competence checks may prove onerous for smaller organisations, and simplified competence procedures could be devised for micro enterprises, in line with the scale of the work they are undertaking.

Better Consultation

One of Donaghy’s recommendations was that the construction industry should renew its efforts ‘to establish genuine consultative frameworks to encourage greater worker participation’ (2009, p. 18). This is one route to improving the lack of communication reported by the HSE (2011) survey, and it would also address the isolation of the micro companies and serve to rebalance their priorities so that health, safety and competence feature alongside profitability, or, indeed, economic survival.

Vulnerable Workers

There are grounds to be concerned about migrant workers, young workers and experienced workers aged 50 years and over. Special provision for the mandatory training – and re-training in the case of older workers – should be given some consideration and it may be that grants currently available would reduce accident and injury rates significantly, if specifically targeted at these groups, at least initially.

Bibliography and References

Construction Enquirer, 2011. Plasterer in coma after 6ft ladder fall. Online. Available at: [Accessed 12th December 2011].

Construction News, 2011. Company fined after workers engulfed in electrical fireball. Online. Available at: [Accessed 12th December 2011].

Deakin, S. and Koukiadaki, A. 2007. The Capability Approach and the Reception of European Social Policy in the UK: The Case of the Information and Consultation of Employees Directive – Section 5.4 Establishment and operation of information and consultation of employees arrangements: Heathrow Terminal 5. Centre for Business Research – University of Cambridge. September.

Department for Business Enterprise & Regulatory Reform (BERR). 2008. Vulnerable Worker Enforcement Forum. Final Report and Government Conclusions. August.

Donaghy, R. 2009. One Death is too Many. Online. Available at: [Accessed 12th December 2011].

Egan, Sir J. 1998. Rethinking Construction. As cited in Donaghy (2009, p. 22).

European Commission, 2011. SME Definition. Online. Available at: [Accessed: 7th December 2011].

Gambatese, J. et al, 2009. Industry’s perception of design for safety regulations. As cited in Morrow (2010, p.13).

Health & Safety Executive, 2011. Summary of duties under CDM 2007. Online. Available at: [Accessed 7th December 2011].

HSE, 2003. RR156 – Causal Factors in Construction Accidents. Online. Available at: [Accessed 12th December 2011].

HSE, 2011. RR845 – Evaluation of Construction (Design and Management) Regulations 2007: Pilot study. Prepared by frontline Consultants. Online. Available at: [Accessed 13th December 2011].

HSE Legal Requirements, 2011. Construction (Design and Management) Regulations 2007. Online. Available at: [Accessed 7th December 2011].

Irwin Mitchell Solicitors and the Centre for Corporate Accountability (2009). Commissioned and Jointly Published the report: Migrant Workplace Deaths in Britain. 31 March.

Latham, Sir M. 1994. Constructing the team. As cited in Donaghy (2009, p. 22).

Morrow, S. 2010. Does CDM enhance the designer’s role to safer and healthier construction? Conference paper. Online. Available at: …PDF [Accessed 13th December 2011].

Muddiman, A. 2001. The Construction Industry’s response to deaths at work – Grasping the nettle. MA dissertation.

Pye Tait Consulting, 2010. CDM – The Client Voice. Report on behalf of the Construction Clients’ Group and the British Property Federation. Online. Available at: …/CCG-Technical-Annex-Mar10… [Accessed 13th December 2011].

The Construction (Design and Management) Regulations 2007. Online. Available at: [Accessed 12th December 2011].

The Telegraph, 2011. Four die in accident at engineering firm. Online. Available at: [Accessed 12th December 2011].

Trade Union Congress (TUC). 2007. Safety & Migrant Workers: A practical guide for safety representatives. June.

Union of Construction Allied Trades and Technicians (UCATT). 2006. Worker Engagement in the Construction Industry. October.

Williams, B. 2010. The CDM Regulations 2007: the cost of health and safety management for small and medium enterprises in the South East. Dissertation. Online. Available at: [Accessed 13th December 2011].


Factors that Affect Selection of Manufacturing Process Design at Apple Inc.


Process design is defined as the alignment of processes to satisfy customer needs while at the same time meeting the set objectives of the organization (Becker et al., 2003). All businesses, regardless of whether they are service based or product based, have the obligation of delivering quality products and services to customers (Matsa, 2011). There are different process designs that can be adopted in organizations, including serial and parallel processes. The serial approach involves aligning activities to take place one after the other in a defined sequence. Whereas this approach is known to be simple and easy to understand, one disadvantage associated with it is that processes take a longer time to accomplish, which reduces capacity (Iravani et al., 2005). Parallel processes, on the other hand, involve the execution of two or more processes simultaneously. This approach can lead to a reduction in flow time or an increase in capacity; however, this depends on whether the parallel operations are set to carry out like or unlike operations (Mascitelli, 2004).
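The practical difference between the two designs can be made concrete with a small worked calculation; the three task times below are hypothetical and chosen purely for illustration.

```python
# Hypothetical task times in minutes for three production steps.
tasks = [4, 6, 5]

# Serial design: steps run one after another, so the flow time
# per unit is the sum of all task times.
serial_flow_time = sum(tasks)

# Parallel design (unlike operations running simultaneously):
# flow time is governed by the slowest step.
parallel_flow_time = max(tasks)

print(serial_flow_time, parallel_flow_time)  # 15 6
```

Here the parallel arrangement cuts flow time from 15 to 6 minutes per unit, the kind of capacity gain the literature cited above points to; a real system would also have to account for synchronising and assembling the parallel outputs.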

Selection of the most suitable service or manufacturing process design for a company depends on several factors. This explains why companies within the same industry may design their processes differently. According to Swift and Booker (2003), selection of the ideal process depends on the nature of the marketplace, the business, and the product itself. With reference to Apple Inc., this paper provides a critical discussion of the key factors that influence the selection of a service or manufacturing process design. It also discusses ways in which project management principles aid operations managers in introducing changes to operational processes or systems.

A. Factors that govern selection of manufacturing process design


The key objective of any business is to maximize profits while offering quality to customers. Therefore, one of the strategies that can be used to achieve this objective is ensuring that the processes being undertaken are as cost effective as possible (Swift & Booker, 2003). Apple’s main products are iPhones, iPads and MacBook computers (Apple Inc., 2014). Whereas its products are known for being slightly more expensive than those of other brands, the company also increases its margins by cutting production costs. Pries and Quigley (2013) define process costs as the investments that have to be incurred in the course of the manufacturing process. These comprise the expenditures incurred in the purchase of equipment, labour and raw materials, as well as capital costs. There are different approaches that can be used to manage the costs incurred in processes (Huang et al., 2005). For Apple, costs are cut by outsourcing components and labour. In the overall electronics and technology industry, most components are obtained from Asian manufacturers, which are known to be both cheaper and more versatile than those from other parts of the world (Roy et al., 2012). In addition to reducing manufacturing costs, outsourcing non-core activities also enables companies to focus more on their core activities, such as designing new products (Polychronakis & Syntetos, 2007). It also helps companies to share their risks.

Components used to build Apple products are obtained from over 150 companies spread all over the world. According to an estimate by Milian (2012), the cost incurred in producing an iPhone 4s was $196. Given that a unit of the iPhone 4s retailed at $649 at the time, the cost reduction strategy meant that the gross profit obtained from a single unit was $453 (Milian, 2012). This explains why, amidst the stiff competition in the electronics and computer industry, Apple Inc. has managed to maintain high profit levels. It had an annual profit of $41.7 billion in 2012, making it the second most profitable company after Exxon Mobil (The Huffington Post, 2013). Apple further reduces the costs it incurs in manufacturing its products by creating partnerships with many companies to encourage competition and, as a result, it gets favourable deals (Milian, 2012). Even with the high success achieved by the cost reduction strategy at Apple, the company has faced several criticisms. For instance, one of the companies in China that takes part in the assembly of its products, Foxconn, has a bad reputation for mistreatment of its employees (Chamberlain, 2011).
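The per-unit figures reported by Milian (2012) can be checked with a one-line calculation; the margin percentage is derived here from those two figures and is not itself a figure from the source.

```python
unit_price = 649  # reported retail price of an iPhone 4s (USD)
unit_cost = 196   # Milian's (2012) estimated production cost (USD)

gross_profit = unit_price - unit_cost     # profit per unit sold
gross_margin = gross_profit / unit_price  # fraction of the price retained

print(gross_profit)               # 453
print(round(gross_margin * 100))  # 70 (percent)
```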


Operations managers have the role of ensuring that the goods or services offered to clients are of optimum quality (Mukherjee, 2006). Apart from reducing costs to maximize profits, companies also select their manufacturing process designs based on the quality of their output. Manufacturing processes that do not produce products in the state intended by designers, or that fail to cater for the needs of customers in the market, ought to be avoided, regardless of how cost-effective they may be (Bamford & Forrester, 2010). At Apple, high levels of quality have enabled it to perform well in the industry, with a large number of customers often ready to purchase the new products it manufactures (West & Mace, 2010). Even though it can be argued that quality control processes ensure that any quality issues can be solved before products are delivered to clients, it is more productive if the original manufacturing process is flawless (Creese, 2013).

To ensure that quality is maintained or improved, staff members at Apple are often encouraged to be creative and innovative so as to come up with ideas for improving quality. Another approach that ensures the quality of processes at Apple is the carrying out of constant product reviews. According to Lashinsky (2011), Apple has a program that involves carrying out a review of products every Monday. This enables the company to make the necessary improvements or corrections in case an issue is identified. The issue of quality at Apple has been deeply embedded in the company’s organizational culture, and employees are aware of the need to pay attention to details (West & Mace, 2010). The keen attention paid to product and process details at Apple has been among the key factors behind the consistency of the company’s market performance. By incurring an extra cost to ensure the manufacturing process is of the required quality, companies are able to satisfy their customers and build strong brands in their respective industries of operation (Talib et al., 2011).


The dynamism that characterizes the present-day business environment also means that organizational operations should be as flexible as possible so as to maintain their relevance (Merschmann & Thonemann, 2011). Flexibility is the ease with which manufacturing processes can change certain aspects or qualities of products, ranging from the shape to the materials used to manufacture the product or the finish (Creese, 2013). Lack of flexibility in manufacturing processes may make it difficult for companies to satisfy the ever-changing needs of the market. It may also make it quite expensive to replace existing processes with newer ones (Chou et al., 2010). The need for flexibility is more intense in the computer industry because it is one of the fastest-changing industries. According to Moore’s law, the capabilities of many electronic devices in the market double approximately every two years, and as technological advancements increase, this pace is bound to increase (Mollick, 2006).
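Moore’s rule of thumb (a doubling of capability roughly every two years) implies exponential growth. A minimal sketch of that implication, with the doubling period treated as an assumed parameter rather than a fixed law:

```python
def relative_capability(years, doubling_period=2.0):
    """Capability relative to today after `years`, assuming one
    doubling every `doubling_period` years (Moore's rule of thumb)."""
    return 2.0 ** (years / doubling_period)

# Under this assumption, a decade yields roughly a 32-fold increase.
print(relative_capability(10))  # 32.0
```

This is why a manufacturing design fixed around today’s component capabilities can become obsolete within a few product cycles, reinforcing the case for flexible process designs.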

Apple is also well known for manufacturing upgraded versions of previous products on an approximately yearly basis. For instance, between June 2007 and September 2013, a total of eight versions of the iPhone were manufactured by Apple (Bergmann, 2013). The improvements made in Apple’s products incorporate the innovative ideas of designers as well as the feedback the company obtains from clients. Flexibility in manufacturing designs also helps companies to stay ahead of the competition in their industries of operation. Apple faces competition from many companies that also frequently upgrade their products to match market needs (Carbaugh, 2013).

Other Factors

Environmental Sustainability

Apart from the three aforementioned factors affecting selection of the manufacturing process design, there are numerous other factors that operations managers take into consideration. One of these is the potential impact that the process may have on the environment (Vezzoli & Manzini, 2008). With the current focus of the international business community on environmental sustainability, companies ought to ensure that they select processes that have the least adverse impact on the environment (Geels, 2011). In an effort to lessen its carbon footprint, one of the strategies that Apple has used is the utilization of renewable energy in its operations, including solar, geothermal, hydro and wind energy (Apple Inc., 2014). However, the company has been criticized for ignoring the adverse environmental impacts that its operations in China are causing.

Quantity of products

The quantity of products that the company produces for customers also determines the choice of the appropriate manufacturing process. In a situation where companies manufacture single products to fit the specifications of clients, a one-off approach may be appropriate (Jones & Robinson, 2012). On the other hand, if the company manufactures products in large quantities, the mass production approach is preferable (Jones & Robinson, 2012). Since Apple manufactures products to satisfy millions of customers worldwide, it utilizes mass production. In the quarter ending September 2013 alone, Apple sold approximately 33.8 million iPhones, 4.6 million Macs and 14.1 million iPads (Apple Inc., 2013).

External Regulations

Standards and regulations, usually set by different governing bodies, also affect the selection of the manufacturing approach. Some of the aspects focused on include environmental impacts and specific quality standards that ought to be delivered to customers (Jones & Robinson, 2012). For instance, electronics manufacturing companies are supposed to adhere to the set standards in terms of air emissions, solid and hazardous wastes, and effluents (Multilateral Investment Guarantees Agency, 2010). These regulations are also applicable to Apple. Regardless of how cost effective and flexible a manufacturing process may be, companies have the obligation of adhering to the set standards to avoid getting into legal trouble (Bamford & Forrester, 2010).

The factors highlighted above are relevant to all companies that manufacture products for their customers. Whereas it is impossible to optimize all of the mentioned aspects simultaneously, companies ought to make a comprehensive evaluation of their manufacturing processes to ensure that they deliver quality to their clients and also meet their goals and objectives. Even though Apple has had a few challenges and controversies in its product manufacturing processes, it has managed to maintain its strong position in the electronics industry. This is partly attributed to the effective selection of manufacturing process designs.

B. How the main principles of project management help operation managers to introduce change

Principles of Project Management

Change is inevitable in organizational operations, systems and processes. Therefore, effective project management always takes this into consideration to ensure a smooth transformation from one state of the organization to another (Boje et al., 2012). Some of the principles of project management include understanding the stages that projects go through from beginning to end; possessing good management (controlling), leadership and communication skills; and working in the interest of all company stakeholders (Bamford & Forrester, 2010). These principles play a vital role in situations where changes are to be implemented in certain processes or systems within the organization.

Possessing management skills makes project managers capable of effectively implementing the change process (Berkun, 2008). It is important to understand that implementation of change is more likely to succeed if employees and other organizational stakeholders are involved throughout the process. Failure to communicate effectively with them about the changes to be made may bring about resistance to the process (Vida, 2012). In addition, employees may find it difficult to adjust to the implemented changes. Several models of change management have been suggested by researchers, including the eight-step model of change proposed by Kotter (2007).

Henry (2012) also points out several principles of project management that can help operations managers to introduce change in systems or processes in the organization. One of these is commitment: for projects involving change to be effectively implemented, managers are supposed to lead by example in showing their unlimited commitment, and employees will follow. Another is the tetrad trade-off principle, which holds that for a project to be implemented successfully, the scope, cost, time and quality have to be in equilibrium and attainable (Sarah & Dixon, 2013). There is also the strategy principle, which defines the planning and implementation procedure and is based on the fact that for a project to begin and end successfully, certain procedural activities ought to be undertaken. Effective incorporation of these principles by operations managers can help ensure that changing processes or systems in the organization is undertaken smoothly (Hongjun & Yajia, 2012).

Introducing Change in Processes/Systems

To effectively introduce change in company systems or processes, it is necessary to have a comprehensive plan (Kotter, 2007). This typically involves determining the type of change that will be effective for the company and notifying employees and stakeholders of the imminent change process. Another measure involves building awareness among employees and other stakeholders and building the capacity that will be needed in the process (Jones & Robinson, 2012). Some of the measures that can be undertaken at this preparation stage include announcing the intended change and when it is expected to take place. It also involves recruiting some employees to take up some of the tasks involved in the change process (Kotter, 2007). In cases where training is required before the change process is implemented, the recruited employees should be provided with adequate training. According to Vida (2012), communication skills are quite instrumental in the change process. Project managers need to create an avenue through which employees can give feedback, and eliminate the bureaucratic barriers that may hamper the easy flow of communication within the organization (Henry, 2012).


As companies compete to strengthen their brand positions and increase their market shares, there are several underlying strategies that they use. One of these is the selection of the most suitable manufacturing or service process design that will ensure customer satisfaction is achieved while at the same time contributing to the achievement of organizational goals and objectives. This paper has provided an in-depth and critical discussion of some of the factors that affect selection of manufacturing processes, with reference to Apple Inc., one of the leading companies in the electronics industry. Some of the key factors discussed include the cost, quality and flexibility of the process. As presented in the paper, Apple Inc. has managed to maintain a strong brand position partly because of the effective selection of manufacturing processes. The paper has also highlighted the ways in which the principles of project management can help operations managers to introduce change in organizations. Future research on this subject could address the challenges that project managers face in selecting the ideal manufacturing or service process design.


Apple Inc, 2013. Apple Reports Fourth Quarter Results. [Online] Available at: [Accessed 8 January 2014].

Apple Inc., 2014. Apple and the Environment. [Online] Available at: [Accessed 8 January 2014].

Becker, J., Kugeler, M. & Rosemann, M., 2003. Process management: a guide for the design of business processes: with 83 figures and 34 tables. Munich: Springer Verlag.

Bergmann, A., 2013. iPhone Evolution. CNN Money, 12 November.

Berkun, S., 2008. Making Things Happen: Mastering Project Management. California: O’Reilly.

Boje, D., Burnes, B. & Hassard, J., 2012. The Routledge Companion to Organizational Change. New York: Routledge.

Carbaugh, R.J., 2013. Contemporary Economics: An Applications Approach. Edmonds: M.E. Sharpe.

Chamberlain, G., 2011. Apple’s Chinese workers treated ‘inhumanely, like machines’. The Guardian, 30 April.

Chou, M.C., Chua, G.A., Teo, C.-P. & Zheng, H., 2010. Design for Process Flexibility: Efficiency of the Long Chain and Sparse Structure. Operations Research, 58(1), pp.43-58.

Creese, R., 2013. Introduction to Manufacturing Processes and Materials. New Jersey: CRC Press.

Bamford, D. & Forrester, P., 2010. Essential Guide to Operations Management: Concepts and Case Notes. New Jersey: John Wiley & Sons.

Geels, F.W., 2011. The multi-level perspective on sustainability transitions: Responses to seven criticisms. Environmental Innovation and Societal Transitions, 1, pp.24-40.

Henry, A., 2012. Understanding Strategic Management. Oxford: Oxford University Press.

Hongjun, L. & Yajia, G., 2012. Study on Chain Companies Human Resources Management. Information and Business Intelligence, 267, pp.227-32.

Huang, G.Q., Zhang, X.Y. & Liang, L., 2005. Towards integrated optimal configuration of platform products, manufacturing processes, and supply chains. Journal of Operations Management, 23(3), pp.267-90.

Iravani, S.M., Van Oyen, M.P. & Sims, K.T., 2005. Structural flexibility: A new perspective on the design of manufacturing and service operations. Management Science, 51(2), pp.151-66.

Jones, P. & Robinson, P., 2012. Operations Management. Oxford: Oxford University Press.

Kotter, J.P., 2007. Leading Change: Why Transformation Efforts Fail. Harvard Business Review, pp.1-10.

Lashinsky, A., 2011. How Apple works: Inside the world’s biggest startup. [Online] Available at: [Accessed 9 January 2014].

Mascitelli, R., 2004. The Lean Design Guidebook. Northridge: Technology Perspectives.

Matsa, A., 2011. Competition and Product Quality in the Supermarket Industry. Quarterly Journal of Economics, 126(3), pp.1539-91.

Merschmann, U. & Thonemann, U.W., 2011. Supply chain flexibility, uncertainty and firm performance: an empirical analysis of German manufacturing firms. International Journal of Production Economics, 130(1), pp.43-53.

Milian, M., 2012. How Apple cuts costs in building its gadgets. CNN, 7 February.

Mollick, E., 2006. Establishing Moore’s law. Annals of the History of Computing. IEEE, 28(3), pp.62-75.

Mukherjee, P., 2006. Total Quality Management. New Jersey: Prentice Hall.

Multilateral Investment Guarantees Agency, 2010. Environmental Guidelines for Electronics Manufacturing. [Online] Available at: [Accessed 9 January 2014].

Polychronakis, Y.E. & Syntetos, A.A., 2007. ‘Soft’ supplier management related issues: An empirical investigation. International Journal of Production Economics, 106, pp.431-49.

Pries, K.H. & Quigley, J.M., 2013. Reducing Process Costs with Lean, Six Sigma, and Value Engineering Techniques. New Jersey: CRC Press.

Roy, K.C., Blomqvist, H.-C. & Clark, C., 2012. Economic Development in China, India and East Asia: Managing Change in the Twenty First Century. Cheltenham: Edward Elgar Publishing.

Sarah, E. & Dixon, A., 2013. Failure, Survival or Success in a Turbulent Environment: the dynamic capabilities lifecycle. Chartered Management Institute, 4(3), pp.13-19.

Swift, K.G. & Booker, J.D., 2003. Process Selection: from design to manufacture. Oxford: Butterworth-Heinemann.

Talib, F., Rahman, Z. & Qureshi, M., 2011. A study of total quality management and supply chain management practices. International Journal of Productivity and Performance Management, 60(3), pp.268-88.

The Huffington Post, 2013. Fortune Global 500: Top 10 Most Profitable Companies in The World. Huff Post, 7 August.

Vezzoli, C.A. & Manzini, E., 2008. Design for Environmental Sustainability. Milan: Springer.

Vida, K., 2012. The Project Management Handbook: A Guide to Capital Improvements. New York: Rowman & Littlefield.

West, J. & Mace, M., 2010. Browsing as the killer app: Explaining the rapid success of Apple’s iPhone. Telecommunications Policy, 34(5), pp.270-86.


A critical appraisal of the integrity of a HACCP Plan and the design of an effective, efficient, evidence based improvement strategy


Food safety is of primary importance in any food business. In this report, a review of food processing at the Cheese-4-All company reveals important failures. The company received customer complaints that metal was present in the cheese, and caterers were complaining of broken seals on the cheese blocks. On analysis, only one machine performs the cutting and sealing of the cheese blocks, and the metal present in the cheese came from the cheese cutter. Although a metal detector is placed at the end of the assembly line, it failed to detect the metal in the cheese. Further, members of staff failed to check for broken seals prior to delivery. It is recommended that safety checks be conducted and incidents recorded. The metal detector should be investigated to determine whether it is sensitive enough to detect metals, whether it is calibrated correctly, and whether it is in the right position to allow timely detection and rejection of cheese. A bin should also be in place to contain rejects with embedded metal. Members of staff should receive proper training on how to check for broken seals from packing to delivery. It is also recommended that the safety culture in the workplace be evaluated to determine whether employees have adopted a culture of safety in their work. Strategies to improve food safety include placing the metal detector in the right position in the assembly line, calibrating it correctly, and improving the safety culture of the staff members.


Awareness of food safety, threats to the food supply, and increased consumption of packaged and processed food have driven the need to ensure that all food processing meets quality standards (Yiannas, 2008). Food retailers have the responsibility of ensuring that their food products are safe to eat and free from contamination (Wallace et al., 2011). Safeguarding the health of consumers is of utmost importance in the food industry. Improving food safety does not focus only on testing products or ensuring that all standards in food processing are met; it also involves investigating the behaviour of employees and how this behaviour affects food safety. Yiannas (2008) argues that there should be an integration of behavioural and food sciences in order to manage risks associated with food safety. A merger of these two fields would result in a systems-based approach to food safety management.

The Food Standards Agency (FSA, 2013) in the UK has stressed the importance of food safety in the food business. Creation of a Hazard Analysis and Critical Control Point (HACCP) plan ensures that food safety requirements in the UK are met. This report aims to critically appraise the HACCP plan currently in place at Cheese-4-All. This company supplies vacuum-packed cheddar cheese blocks in sizes of 12g to 400g to small retailers. It also supplies bags of grated cheese to the catering industry; these products are contained in modified atmosphere packaging with potato starch. The company currently employs 30 staff with level 2 food hygiene training. Shelf life testing confirmed that the grated cheese and cheese blocks have a shelf life of 3 months. Pest control and a cleaning regimen are in place; however, the latter is not documented. The structure of the production premises is also deemed to be within standard. However, complaints of broken seals and of metal present in the cheese have reached the owners of the company. An analysis of the food processing will be carried out and recommendations on how to improve the HACCP system will be made in this report. An evidence-based improvement strategy to achieve compliance is given in the latter part of this report.

Critique of Cheese-4-All HACCP Plan

Lawley et al. (2012) explain that a HACCP plan is essential in maintaining the safety of individual food products. The main aim of such a plan is to ensure that each stage in the food production process is safe. The main objective of this report is to identify the hazards and propose controls that would prevent their occurrence. The main issues at Cheese-4-All are traced to a customer's complaint about a metal wire found in a block of cheese, and to products returned by caterers due to broken seals on the bags.

In the first issue, the metal wire represents a physical hazard for consumers. A physical hazard is any foreign material present in dairy products that could cause injury or illness to the consumer (Marriott and Gravani, 2006). Mortimore and Wallace (2013) emphasise that a physical hazard results from a lack of control of a process or a piece of equipment in the production chain. A number of factors have been identified as contributing to the presence of a physical hazard; amongst these, poorly maintained equipment and employees' inattention to the details of the food production process are the most important (Wallace et al., 2011). Meanwhile, the broken seals of the bags represent a biological hazard, a type of hazard that results from exposure of food to pathogens (Smith and Hui, 2008).

A closer investigation of the company's processes reveals that the cutting and sealing of the vacuum packs of cheese blocks are done by a single machine. The machine uses a metal cheese wire to cut the cheese and also automatically vacuum-seals the cheese blocks. A metal detector, which serves as a control, is located at the end of the production line; this equipment should detect any metal embedded in the cheese. At this point, poor attention to detail could have contributed to the staff's failure to detect the metal: instead of ensuring that every cheese block passes through the metal detector before being removed and labelled, some members of staff may not recheck whether the blocks have in fact passed through it. Similarly, labelling presents an opportunity to check the integrity of the vacuum seals of the cheese blocks; however, the staff do not document whether all blocks are safely sealed. Meanwhile, grated cheese is placed manually into plastic bags together with potato starch, and the modified atmosphere packaging machine seals the packs.

Recommendations for Improvement

In this report, the issue of the presence of metal fragments in the block of cheese and broken seals of cheese packs will be addressed.

Metal Fragments in the Block of Cheese

An in-line metal detector is present in the production line. This check point is crucial and is met by the company. However, it is recommended that in-line metal detectors should be fitted with automatic rejection systems (Smith and Hui, 2008). In the company's case, there is only one metal detector, at the end of the assembly line, and it is not fitted with an automatic rejection system. The British Retail Consortium (2013) reiterates that metal detection protects customers and should be part of any food protection system. However, there are cases where metal detection does not provide consumers with significant added protection; in such cases, the British Retail Consortium (2013) adds that exceptions should be made only when there is genuinely no need for metal detection of the product. Hence, companies must still justify why metal detection is not needed.

The need for metal detection in a food company is highlighted when customers complain of metal in their food products. Consequences of this failure range from loss of credibility and loss of customers to bad publicity (Wareing, 2010). In the worst scenario, metal present in the product might injure the customer and result in prosecution (Academic Press, 2013). There are various possible causes of metal detector failure. The Academic Press (2013) explains that the metal detector might be suffering mechanical failure or be improperly calibrated, that the wrong test pieces were used during the sensitivity check, or that the company used the incorrect metal detector. The following table lists the remaining possible causes of metal detector failure:

Table 1. Causes of Metal Detector Failure

Possible Causes of Metal Detector Failure
The metal detector is placed in the wrong position in the assembly line.
The rejection mechanism is faulty, or is not synchronised with the detector.
There is no control of the rejects.
Checks of the metal detector are not done regularly; where checks are done, they are performed incorrectly.
Where checks of the metal detector reveal failures, these are not recorded or corrective actions are not taken.
Staff members of the organisation are not trained to perform metal detector checks.
While staff members receive training on performing metal detector checks, the effectiveness of this training is not verified in actual practice.
Workplace culture issues also influence staff members not to take responsibility for performing the necessary checks.

Source: Academic Press (2013, p. 336)

Broken Seals of Cheese Packs

The British Retail Consortium (2013) emphasised that food safety should be a priority amongst those in the food business. In the case of Cheese-4-All, caterers complained of broken seals; issues are often identified only when customers begin complaining about the safety of the food that they order (Bougherara and Combris, 2009). This represents breaches of safety procedures within the company. For instance, safety checks should be conducted once cheese blocks or grated cheese are sealed, before they are taken into or out of the chillers. During labelling, it is also important that the staff check whether the seals are still in place or whether any cheese packs have broken seals.

Improvement Strategies

Lawley et al. (2012) explain that much of the food safety legislation now in force in European countries, including the UK, was formed through collaboration between food authorities in different countries. Representatives of the European Commission are responsible for creating food safety legislation that also serves as a template for food authorities in different countries (Lawley et al., 2012). For example, the European Commission has issued EC Regulation No. 852/2004, which sets standards for the hygiene of foodstuffs; in addition, the Food Hygiene Regulations 2006 also provide standards for food safety. Drawing on these regulations, it is recommended that strategies be put in place at Cheese-4-All to ensure the absence of metal in the cheese blocks and to prevent broken seals on the packs.

First, metal detectors should be checked every hour with test pieces (Robertson, 2013), and the results recorded to assess the sensitivity of the metal detector. It is recommended that safety incidents be recorded so that staff learn from the experience and prevent similar incidents in the future (Arvanitoyannis, 2012). For example, the analysis reveals that a single machine both cuts the cheese with a metal cheese wire and vacuum-packs it; this machine should be periodically checked to ensure that it is working properly. Second, qualified staff should calibrate the metal detector and ensure that it is in the proper place in the assembly line (Academic Press, 2013). Third, lockable receptacles should be in place to hold rejects (Academic Press, 2013).
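The checking discipline recommended above, hourly test-piece checks, recorded results and corrective action on failure, can be sketched in a few lines of code. This is purely an illustration: the record fields and function names below are hypothetical and not taken from any specific HACCP software.

```python
# Illustrative sketch of an hourly metal-detector check log.
# All names here (DetectorCheck, record_check) are hypothetical.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DetectorCheck:
    timestamp: datetime
    test_piece_mm: float  # size of the ferrous test piece passed through
    rejected: bool        # did the detector detect and reject the test piece?
    checked_by: str

def record_check(log, test_piece_mm, rejected, operator):
    """Append an hourly check; a failed check calls for corrective action."""
    check = DetectorCheck(datetime.now(), test_piece_mm, rejected, operator)
    log.append(check)
    if not rejected:
        # Corrective action: quarantine product made since the last good
        # check, recalibrate the detector, and record the incident.
        print("ALERT: detector failed the", test_piece_mm, "mm test piece")
    return check

log = []
record_check(log, 2.0, True, "operator A")  # a passed hourly check
```

A failed check would trigger the quarantine-and-recalibrate routine noted in the comments, and the accumulated log provides the written record whose absence this report identifies.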

Fourth, staff should be trained to conduct safety checks of the food packs after the cheese is sealed, during refrigeration and before delivery; this is necessary to protect consumers from food poisoning (Montville and Matthews, 2008). Finally, it is suggested that the safety culture of the workplace be investigated to determine workers' perceptions and current practice regarding food safety. Mortimore and Wallace (2013) argue that the safety culture of the workplace is a crucial determinant of whether safety regulations are implemented and institutionalised; in many cases, the lack of a safety culture leads to failure of the system.


In conclusion, this report shows that safety checks should be performed regularly to prevent incidents such as metal in food or broken food seals. Consequences of such incidents include loss of customers and possible litigation from consumers harmed by ingested metal. Broken food seals present a health hazard, since they can lead to food contamination and, in turn, to poisoning of consumers. The analysis of Cheese-4-All reveals that safety checks are breached during food processing: a metal detector is present at the end of the assembly line but failed to detect the metal in one of the cheese products, and the possible causes of this failure are discussed in the report. The broken seals, meanwhile, indicate a failure on the part of the staff to thoroughly check the packaging of the cheese. Finally, this report recommends performing regular checks of the machine used to cut and seal the cheese; ensuring that the metal detector is working and placed in the proper position; and regularly checking that food seals are in place. It is also suggested that the work culture be investigated to determine whether safety is a priority in the workplace; this would help the company change the workplace culture and ensure that a culture of safety is practised.


Academic Press (2013) Encyclopedia of Food Safety, Washington, D.C.: Academic Press.

Arvanitoyannis, I. (2012) Modified atmosphere and active packaging technologies, London: CRC Press.

Bougherara, D. & Combris, P. (2009) ‘Eco-labelled food products: what are consumers paying for?’, European Review of Agricultural Economics, 36(3), pp. 321-341.

British Retail Consortium (2013) Global standard for food safety- guideline for fresh produce, London: The Stationery Office.

Food Standards Agency (FSA) (2013) Safer food, better business [Online]. Available from: (Accessed: 6th January, 2013).

Lawley, R., Curtis, L. & Davis, J. (2012) The Food Safety Hazard Guidebook, London: Royal Society of Chemistry.

Marriott, N. & Gravani, R. (2006) Principles of food sanitation, London: Springer.

Montville, T. & Matthews, K. (2008) Food Microbiology: An Introduction.

Mortimore, S. & Wallace, C. (2013) HACCP: A Practical Approach, 3rd ed., Preston, UK: Springer.

Robertson, G. (2013) Food packaging: Principles and practice, 3rd ed., Sound Parkway NW: Taylor & Francis Group.

Smith, J. & Hui, Y. (2008) Food processing: Principles and applications, London: John Wiley & Sons.

Wallace, C., Sperber, W. & Mortimore, S. (2011) Food Safety for the 21st Century: Managing HACCP and Food Safety throughout the global supply chain, London: John Wiley & Sons.

Wareing, P. (2010) HACCP: A toolkit for implementation, London: Royal Society of Chemistry.

Yiannas, F. (2008) Food safety culture: Creating a behavior-based food safety management system, Arkansas, USA: Springer.


A Report on Anaerobic Digestion: The design, planning, implementation and sustainability of a waste management operation at Manchester Metropolitan University Business School.

Executive Summary

In higher education establishments such as Manchester Metropolitan University Business School, reducing the costs associated with waste management, energy consumption and carbon emissions has been high on the agenda.

This report examines one option for the business school. The feasibility of this option is discussed in detail, and a number of considerations are examined to ascertain how practicable anaerobic digestion could be for the university.


The Manchester Metropolitan Business School is situated in the heart of Manchester and consists of a number of campuses. Managing waste on each of these is therefore a primary concern, as the business school has set its waste reduction targets as follows:

Reuse and recycle 40% of waste in 2012/13 – 2013/14.
Achieve zero waste to landfill by 2020/2021 (MMUER, 2013).

This is for a number of reasons:

The rising costs of sending waste to landfill sites (HMC, 2013).
The carbon reduction targets stipulated by the Higher Education Funding Council for England (HEFCE) (HEFCE, 2010).
The carbon reduction targets specified in the Climate Change Act 2008 (HMSO, 2008).
The Waste (England and Wales) (Amendment) Regulations 2012 require all waste streams to be separated so that they can be recycled (HMSO, 2012).

The university produces many types of waste, including those classed as bio-wastes (Manfredi & Pant, 2013), and has a duty to reduce these by following the waste hierarchy, which is:

Reduce
Reuse
Recycle
Recover
Dispose (Glew et al., 2013)

With these principles of waste management in mind, one may consider managing a source of waste which is often overlooked or not managed appropriately: waste from food outlets. Food waste is biodegradable, but it is often disposed of through the general waste stream, which costs approximately £75 per tonne to send to landfill (HMC, 2013); with the charges applied by waste disposal contractors, this becomes a significant sum that the university has to pay each year.

Food waste is often wet and therefore heavier than most of the dry waste disposed of through the general waste stream, so it costs more to dispose of than other forms of general waste. This unnecessarily increases both the cost of disposal and the weight of waste the university sends to landfill, as other options are available for disposing of this waste. These are:

Composting on campus.
Composting by using a waste contractor.
Anaerobic digestion.

Each of these options may be considered by the university to reduce the costs and environmental impacts linked to the disposal of food waste.

Business strategy

This report assesses the viability of the three options above for reducing food waste. The first option was composting food waste on campus; however, this is not possible due to the location and layout of the university campuses (see Appendix 1), so this option has been discounted. The second option was to pay a waste contractor to remove the university's food waste and compost it off campus. However, this would be costly, as the waste is heavy and the campuses are located in different areas (see Appendix 1); additionally, the collection and disposal of the waste would contribute to the university's carbon footprint (HEFCE, 2010). It is therefore believed that the disadvantages of this approach would far outweigh any advantages gained by recycling food waste in this way. The third option identified is disposing of food waste via anaerobic digestion. The viability of this option needs to be assessed in more detail; however, it does meet each of the principles highlighted in the waste management hierarchy (Glew et al., 2013). This business strategy shall therefore be explored to ascertain whether it is a viable option for disposing of food waste.

Anaerobic digestion is a process by which animal, food or plant waste is broken down by restricting air flow through the material, which encourages micro-organisms to produce biogas and digestate. The digestate is a nutrient-rich compost which may be reused as a fertiliser. However, although this process is a viable way of disposing of food waste, it produces gases such as methane and carbon dioxide (Murto, 2013). It is therefore necessary to consider these emissions, as both gases contribute to global warming.

Operations strategy

The operation of an anaerobic digestion facility requires a number of skills, for example management, monitoring, loading and process review, so a number of factors will need to be considered in the operating strategy. In addition to the human resources required, the siting of the facility is another key consideration, as it is imperative to ensure that the facility operates and is utilised efficiently. This will help to ensure that the benefits of the project are fully achieved; these types of renewable generation projects often fail due to poor planning, so that the benefits attributed during the feasibility and design phases are never realised (Schenk & Stokes, 2013).

Due to the location of the university campuses (see Appendix 1), the best location would be central to them, between the Elizabeth Gaskell and Didsbury campuses. This would be advantageous because:

This location is away from the city centre.
The site is located near several main roads, so access and egress would be easy.
The location of the site is central to most of the campuses, thus waste could be easily collected and transferred to the site.
The operation of the facility could be monitored by existing staff on campus, providing they received the appropriate training.
This location could enable the plant to be utilised for other purposes, such as providing district heating or power to university buildings.

Therefore, each of the above factors should be considered during the operational design of the facility (Spencer, 2013).

Operations design

The design of the anaerobic digestion facility needs to consider a number of factors, these are:

The existing land use in the area of the proposed site.
The sensitive receptors which may be located near the site.
The transport infrastructure surrounding the site.
The expected lifetime of the facility.
The anticipated operating hours of the facility.
The waste tonnage to be treated.
The building footprint and height.
The storage of waste on the site.
Vehicular movements to and from the site.
The planning requirements.
Planning conditions which may be imposed by the Local Authority.

Each of these needs to be considered during the design of the new facility as they may affect its operational capacity (Spencer, 2013).

Capacity planning

The capacity of the facility will need to be carefully planned to ensure an optimal return on the investment that the university is making (Spencer, 2013). According to the Estates Management Statistics 2011-2012 (EMS, 2012), Manchester Metropolitan University currently has 29,850 full-time students, and the total waste produced is 8,746 tonnes, of which 7,501 tonnes is recycled and 1,010 tonnes is used to create energy (EMS, 2012). This means that the facility would need the capacity to process the remaining 235 tonnes of waste per annum, which is not enough to support the running of a small anaerobic digestion facility (SEPA, 2013): the minimum amount of waste required for a small plant is 417 tonnes per month.

Therefore, the required capacity would not be met; however, the university could consider sending its recycled waste to this facility as well. If this were viable, 644 tonnes would be available each month, so the plant's capacity requirement would be met (SEPA, 2013). This would not be undermined by the reduction in waste diverted from landfill; in fact, it may increase the amount of material sent to the facility, and would enable the university to achieve its zero waste to landfill target by 2020/2021 (MMUER, 2013).
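The capacity reasoning above can be verified with a short calculation. The tonnage figures are those cited from EMS (2012) and SEPA (2013); the monthly split is simple arithmetic, not an operational forecast.

```python
# Annual waste figures for Manchester Metropolitan University (EMS, 2012), in tonnes.
total_waste = 8746
recycled = 7501
energy_from_waste = 1010

# Residual waste currently sent to landfill.
residual = total_waste - recycled - energy_from_waste
print(residual)  # 235 tonnes per annum, as cited above

# Minimum throughput for a small anaerobic digestion plant (SEPA, 2013).
min_monthly_tonnage = 417

# The residual stream alone falls far short of the minimum...
print(residual / 12)  # roughly 19.6 tonnes per month

# ...but diverting the recycled stream as well meets the requirement.
monthly_feed = (recycled + residual) / 12
print(int(monthly_feed))  # 644 tonnes per month, above the 417-tonne minimum
```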

Resource management

Resources would have to be provided to ensure that the plant runs efficiently. However, it is believed that this may be achieved by redeploying existing staff who work at the university, since a small plant would only require two workers and a manager to maintain its operations (SEPA, 2013).

Financial planning

The costs of setting up even a small facility would be considerable (Spencer, 2013). A number of factors would need to be considered, such as:

The cost of the real estate.
Planning and design costs.
Construction costs.
Maintenance costs.
End of life disposal costs.
The cost of the plant.

Each of these would need to be calculated; as an approximation, the costs could be:

The cost of the real estate – £500,000
Planning and design costs – £200,000
Construction costs – £350,000
Maintenance costs (over 25 years) – £150,000
End of life disposal costs – £200,000
The cost of the plant – £400,000.

Therefore, an estimate of the total cost of implementing this could be as much as £1.8 million. Furthermore, a number of other costs would also need to be considered, such as:

Monitoring requirements (HMSO, 1993).
Transportation of the waste (HMSO, 2012).
Costs of waste licenses (HMSO, 2012).
Training for staff.
Awareness programmes for students and staff.

Therefore, the costs would be approximately £2 million over the 25-year life span of the plant, so to make the investment viable the payback needs to be more than £80,000 per annum.

This could be achieved by reducing the cost of waste sent for recycling: assuming a rate of approximately £5 per tonne, this would generate a saving of £37,505 per annum. Additionally, the cost of sending waste to landfill may be factored in: assuming this costs £7 per tonne, it could yield savings of £1,645 per annum.
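The two savings figures follow directly from per-tonne arithmetic on the tonnages given in the capacity planning section; the £5 and £7 rates are this report's assumptions.

```python
# Tonnages from the capacity planning section (EMS, 2012), per annum.
recycled_tonnes = 7501
landfill_tonnes = 235  # residual waste not already used for energy recovery

# Assumed per-tonne saving rates, in pounds, as stated above.
rate_recycling = 5
rate_landfill = 7

recycling_saving = recycled_tonnes * rate_recycling
landfill_saving = landfill_tonnes * rate_landfill

print(recycling_saving)  # 37505 pounds per annum
print(landfill_saving)   # 1645 pounds per annum
```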

If the plant were designed to produce electricity and to provide communal heating for some of the university premises, this would lead to further savings (Spencer, 2013). It is difficult to estimate these savings, as the type of waste fed into the plant affects the energy and heat that can be recovered from it. However, with rising energy costs, the benefits may well outweigh the costs, as they will lead to a reduction in:

The cost of carbon permits under the Carbon Reduction Commitment (CRC) (HMSO, 2011).
The university’s carbon footprint from energy used by the university.
The cost of disposing of waste through contractors.
The amount of waste which is sent to landfill.
The amount of Climate Change Levy which is paid (HMSO, 2013).
The amount paid for energy.

Further to these, other options could be explored to ascertain whether they would be cost-effective, such as:

Revenue generated from Feed in Tariffs (HMSO, 2012 a).
Revenue generated from the Renewable Heat Incentive (HMSO, 2012 b).
Revenue generated by taking waste from other businesses near the proposed site.

For the purposes of this report it has been presumed that the income and savings generated from the above will be £50,000 per annum.

Cost benefits

From the costs section above, the estimated cost of developing the facility would be approximately £2 million. For the development to be financially viable, £80,000 per annum would need to be generated over 25 years to pay back this investment.

Based on the savings outlined above, it is believed that £50,000 per annum could be generated through general cost reductions, £37,505 per annum could be saved by sending all recycled materials to the plant, and £1,645 per annum could be saved by sending landfill waste (which is not already used to produce energy from waste) to the new facility.

Therefore, over the 25-year life span of the facility, a potential £2,228,750 could be saved. Offset against the estimated £2,000,000 lifetime cost of the facility, a profit of £228,750 could be made by implementing this project.
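The lifetime figures above can be checked by totalling the three annual savings streams over the 25-year life span:

```python
lifespan_years = 25
total_cost = 2_000_000  # estimated lifetime cost of the facility, in pounds

# Annual savings streams identified in this report, in pounds.
general_savings = 50_000   # presumed income and general cost reductions
recycling_saving = 37_505  # recycled material diverted to the plant
landfill_saving = 1_645    # residual landfill waste diverted to the plant

annual_total = general_savings + recycling_saving + landfill_saving
print(annual_total)  # 89150, above the 80000-per-annum break-even figure

lifetime_saving = annual_total * lifespan_years
profit = lifetime_saving - total_cost
print(lifetime_saving)  # 2228750
print(profit)           # 228750
```

Note that this is a simple undiscounted total, consistent with the report's own figures; a fuller appraisal would discount future savings to present value.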

Therefore, it is considered that the benefits of investing in an anaerobic digestion facility are viable.


Based on the costs and all the benefits outlined above it is recommended that the scheduling of this project is undertaken as follows:

From May 2013- July 2013 suitable sites are investigated.
From July 2013 – August 2013 feasibility of these sites is investigated.
From August 2013 – October 2013 a site for the new facility is procured.
From October 2013- December 2013 contractors are chosen and the design and planning for the facility are started.
From December 2013 – December 2015 the facility is built.
From December 2015 – March 2016 the facility is made operational.

In addition, the schedule for the operation of the facility needs to be considered (Spencer, 2013). It is suggested that there be a maximum of four waste deliveries per day, as this will ensure that the plant is continuously supplied with waste and runs at its optimal capacity (SEPA, 2013).

Loading and timetabling

The digestion process itself will run continuously, 24 hours a day, but the facility will need to be staffed and receive deliveries on 20 weekdays each month, from 07.00 to 17.00 (SEPA, 2013). This will help to ensure that the plant runs efficiently and that waste does not build up or need to be stored on site (Spencer, 2013). In addition, this will allow the minimum throughput of 417 tonnes of waste per month to be achieved (SEPA, 2013).

Performance measurement

The performance of the plant may be measured through a number of metrics, such as:

The reduction in the costs of carbon permits under the Carbon Reduction Commitment (CRC) (HMSO, 2011).
The reduction in the university’s carbon footprint.
The reduction of the costs of disposing of waste through contractors.
The reduction of the amount of waste which is sent to landfill.
The reduction in the amount of Climate Change Levy which is paid (HMSO, 2013).
The reduction of the amount paid for energy.
Revenue generated from Feed in Tariffs (HMSO, 2012 a).
Revenue generated from the Renewable Heat Incentive (HMSO, 2012 b)
Revenue generated by taking waste from other businesses near the proposed site.
The payback that the new facility generates per annum.
The emissions to air from the facility.
The number of complaints about the operation of the facility.
The number of vehicle movements to and from the facility.
The amount of time between collections from campus and the processing of the waste.

Each of these metrics may be utilised to measure the quality of the process and service performance of the new facility.


The procurement process used for this project will need to be aligned with European Union procurement regulations and will need to demonstrate best value for money.


In conclusion, the analysis undertaken in this report indicates that the third option, building an anaerobic digestion facility in a centralised location, is viable. Therefore, the proposed business strategy should be implemented, as this will enable the university to reduce its waste disposal costs and to attain its targets, which are:

Reuse and recycle 40% of waste in 2012/13 – 2013/14.
Achieve zero waste to landfill by 2020/2021 (MMUER, 2013).

Therefore, it is recommended that the university investigate this option further, as the benefits identified in this report show that the proposal warrants serious consideration.

Appendix 1 Maps of the locations of Manchester Metropolitan University

(MMU, 2013)


Estates Management Statistics (EMS) (2012) Environmental Information 2011/2012. Available from (Accessed 02/05/2013)

Glew, D., Stringer, L. C., & McQueen-Mason, S. (2013). Achieving sustainable biomaterials by maximising waste recovery. Waste Management.

Higher Education Funding Council for England (HEFCE) (2010) Carbon reduction target and strategy for higher education in England, January 2010.

HM Revenue and Customs (HMC) (2013) A General Guide to Landfill Tax. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (1993) Clean Air Act. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2012) The Controlled Waste (England and Wales) (Amendment) Regulations 2012. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2011) The CRC Energy Efficiency Scheme (Amendment) Order 2011. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2013) The Climate Change Levy (General) (Amendment) Regulations 2013. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2012a) The Feed in Tariffs Order 2012. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2012b) The Renewable Heat Incentive Scheme (Amendment) Regulations 2012. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2008) Climate Change Act 2008. Available from (Accessed 02/05/2013)

Her Majesty's Stationery Office (HMSO) (2012) Waste (England and Wales) (Amendment) Regulations 2012. Available from (Accessed 02/05/2013)

Manchester Metropolitan University Environmental Recycling (MMUER) (2013) Recycling Facilities in Your Building. Available from (Accessed 02/04/2013)

Manchester Metropolitan University (MMU) (2013) How to find us. Available from (Accessed 02/05/2013)

Murto, M., Bjornsson, L., Rosqvist, H., & Bohn, I. (2013). Evaluating the biogas potential of the dry fraction from pre-treatment of food waste from households. Waste Management.

Schenk, T., & Stokes, L. C. (2013). The power of collaboration: Engaging all parties in renewable energy infrastructure development. Power and Energy Magazine, IEEE, 11(3), 56-65.

Scottish Environmental Protection Agency (SEPA) (2013) Anaerobic Digestion. Available from (Accessed 02/05/2013)

Spencer, J. D., Moton, J. M., Gibbons, W. T., Gluesenkamp, K., Ahmed, I. I., Taverner, A. M., & Jackson, G. S. (2013). Design of a combined heat, hydrogen, and power plant from university campus waste streams. International Journal of Hydrogen Energy.


Space Meets Knowledge: The Impact of Workplace Design on Knowledge Sharing


An examination of the role the physical workplace plays in creating opportunities and barriers that influence knowledge management has become a matter of substantial debate. Designing good workplaces for knowledge sharing is considered a major challenge for any organisation. This study provides an insight into the impact of the design and use of the physical workplace on knowledge sharing. The evidence presented in this study substantiates the position that the physical presence of an employee has the potential to affect performance and knowledge management. This assessment will be of use to researchers seeking to examine the area of knowledge management further.


Knowledge management, described as the intentional management of information, has become increasingly important to organisations (Nonaka and Takeuchi, 1995; Alavi, 1997; Garvin, 1997; Wiig, 1997; Davenport and Prusak, 1998; Ruggles, 1998; Hansen, 1999; Zack, 1999a). In large part this has been fuelled by the exponential growth of the knowledge economy and the increasing number of knowledge workers, who have become essential to many firms' competitiveness and survival (Tallman and Chacar 2010). For many emerging organisations, face-to-face contact is essential to the dissemination of knowledge within that infrastructure (ibid.). The process of internal knowledge management is a dynamic element that must be maintained in order to produce results.

Literature Review

Knowledge is defined as a dynamic human or social process that allows the justification of personal belief as regards the truth (Nonaka 2011). Interaction between people, employees and consumers is one of the primary methods of communicating innovative and inspirational progress. Modern studies in the field of knowledge management have begun to shift focus from the importance of the physical workplace to those engaged in knowledge work (Becker 2004). The recognition of inherent value in the employee base adds incentive to capitalise on the low-cost innovative opportunities that knowledge sharing creates (Tallman et al 2010). With critical insight established through the direct contact of employees, the means of communication becomes a critical concern (Dakir 2012). International companies are recognising the same value of face-to-face interaction, as social interaction between management sections benefits production and development levels worldwide (Noorderhaven and Harzing 2009).

In their discussion of social capital, Cohen and Prusak (2001) emphasise the importance of the physical workplace for the exchange of knowledge, specifically the distribution of ideas amongst individuals in situations where they could not assume that others knew what they were required to know. Becker (2004) hypothesises that the choices an organisation makes about how space is allocated and designed directly and indirectly shape the infrastructure of knowledge networks: the dense and richly veined social systems that help people learn faster and engage more deeply in the work of the organisation. This corresponds with Dalkir's (2012) argument that technology is no substitute for live interaction among the members of the organisation. Davenport et al (2002) undertook a study among 41 firms that were implementing initiatives to advance the performance of high-end knowledge workers regarded as critical to the company's aims, focusing on the elements that affected knowledge-work performance. Surprisingly, the issue that was most frequently dealt with by these firms involved the physical workplace: “the other common ones were information technology and management” (Davenport 2005, p. 166).

Davenport (2005) emphasises that recognition of the importance of knowledge work has grown in recent years, but that our understanding of the physical conditions in which knowledge can flourish has failed to keep pace. The inclusion of emerging communication technology has been argued to provide a better opportunity for employee interaction (Rhoads 2010), yet this same improvement in long-distance communication is credited with diminishing the valued impromptu inspiration that many firms rely on during day-to-day operations (Denstadli, Gripsrud, Hjorthol and Julsrud 2013). According to Davenport et al (2002), workplace design should be seen as a key determinant of knowledge-worker performance, while we largely remain in the dark about how to align ‘space’ with the demands of knowledge work. Davenport (2005) emphasises that “there is a good deal said about the topic, but not much known about it” (p. 165). Most decisions concerning the climate in which work takes place have been made without consideration of performance factors, which continues to diminish opportunities for in-house knowledge sharing and the effective dissemination of intelligence (Denstadli et al 2013).

Becker (2004) points out that the cultivation of knowledge networks underpins the continuing debate about office design, and the relative virtue of open versus closed space. Duffy (2000) confirms these views when he admits that early twenty-first-century architects “currently know as little about how workplaces shapes business performance as early nineteenth-century physicians knew how diseases were transmitted before the science of epidemiology was established” (p. 371). This makes every emerging decision regarding effective knowledge sharing critical to the development of any organisation.

Deprez and Tissen (2009) illustrate the strength of the knowledge sharing process using Google’s approach: “one company that is fully aware of its ‘spatial’ capabilities”. The spatial arrangements at Google’s offices can serve as a useful example of how design can have a bearing on improving the exchange of knowledge in ways that also add value to the company. The Zurich ‘Google engineering’ office is the company’s newest and largest research and development facility besides Mountain View, California. In this facility, Deprez and Tissen (2009) report: “Google has created workspaces where people literally ‘slide into space’ (i.e. the restaurant). It’s really true: Google Is different. It’s in the design; it’s in the air and in the spirit of the ‘place’. It’s almost organizing without management. A workplace becomes a ‘workspace’, mobilizing the collective Google minds and link them to their fellow ‘Zooglers’ inside the Zurich office and to access all the outside/external knowledge to be captured by the All Mighty Google organisation” (2009, p. 37).

What works for one organisation may not work for another and this appears to be the case in particular when it comes to Google (Deprez et al 2009). Yet, some valuable lessons in how the workplace can be used to good effect can be gained from Google’s operations. For this precise reason, research was carried out at Google Zurich to provide both theoretical and managerial insights into the impact of the design and use of the physical workplace on knowledge sharing (Ibid).

Studies comparing the performance of virtual and co-located teams found that virtual teams tend to be more task-oriented and exchange less social information than co-located ones (Walther & Burgoon 1992; Chidambaram 1996). The researchers suggest this would slow the development of relationships, and strong relational links have been shown to enhance creativity and motivation. Other studies conclude that face-to-face team meetings are usually more effective and satisfying than virtual ones, but that virtual teams can nevertheless be as effective if given sufficient time to develop strong group relationships (Chidambaram 1996). This research implies the importance of facilitating social interaction in the workplace, and between team members (virtual and co-located) when the team is initially forming. Hua (2010) proposes that repeated encounters, even without conversation, help to promote awareness of co-workers and to foster office relationships. McGrath (1990) recommends that, in the absence of an initial face-to-face meeting, other avenues for building strong relationships be pursued to ensure the cohesiveness and effectiveness of the team's interaction. So although interaction alone is not a sufficient condition for successful collaboration, it does indirectly support collaboration. Nova (2005) points out that physical proximity allows the use of non-verbal communication, including different paralinguistic and non-verbal signs, precise timing of cues, coordination of turn-taking and the repair of misunderstandings. Psychologists note that deictic references, which involve pointing, looking, touching or gesturing to indicate a nearby object mentioned in conversation, are used in face-to-face meetings on a regular basis (Ibid).

Newlands et al (2002) analysed interactions of two groups performing a joint task either face-to-face or through a video-conference system. They found that deictic hand gestures occurred five times more frequently in the face-to-face condition than in the virtual interaction. More recent research has found that extroverts gesticulate for longer and more often in meetings than introverts (Jonnson 2006). Barbour and Koneya (1976) famously claimed that 55 per cent of communication is non-verbal, 38 per cent is carried by tone of voice, and only 7 per cent is related to the words and content. Clearly, non-verbal communication is a key component of interaction, and virtual interaction systems need to replicate this basic need, especially in the early stages of team forming or when the team consists of a high proportion of extroverts. The physical co-location of teams also facilitates collaboration (Ibid). A seminal piece of research carried out by Allen (1977) demonstrated that the probability of two people communicating in an organisation is inversely proportional to the distance separating them, and is close to zero after 30 metres of physical separation. Furthermore, proximity helps maintain task and group awareness, because when co-located it is easier to gather and update information about the tasks performed by team members (Dalkir 2012).
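Allen's finding can be sketched numerically. The inverse-distance model below is a toy illustration only: the function name, the scale constant, and the cap at 1.0 are assumptions made for demonstration, not Allen's actual fitted curve.

```python
# Toy illustration of the "Allen curve": the probability of regular
# communication falls off roughly as the inverse of the distance between
# desks, flattening towards zero past about 30 metres.

def communication_probability(distance_m: float, scale: float = 1.0) -> float:
    """Hypothetical inverse-distance model, capped at 1.0."""
    if distance_m <= 0:
        return 1.0
    return min(1.0, scale / distance_m)

# Probability estimates at a few separations (metres)
probs = {d: round(communication_probability(d), 3) for d in (2, 5, 10, 30, 50)}
print(probs)
```

Under this sketch the probability at 30 metres is already near zero, consistent with the qualitative claim above, though any real model would be fitted to observed communication data.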

A recent survey of workers at highly collaborative companies found that most “collaborative events” are short (with 34% lasting fewer than 15 minutes) and the majority take place at the desk (Green 2012). It is likely that these impromptu interactions relate to sharing information (perhaps on the PC) or answering queries rather than lengthy intense discussion and development of joint ideas. Interactions at desks may facilitate tacit knowledge sharing by overhearing relevant conversations between team members, but such interactions can also be considered a distraction if not relevant (Denstadli et al 2013).


Methodology

There are two acknowledged methodological approaches: quantitative and qualitative (Creswell 2005). The quantitative method involves identifying variables in a research question which are then used to collate numerical data (Ibid). Qualitative research is open to interpretation, allowing personal answers to be incorporated into the study (Creswell 2005). The researcher considered both options in order to meet the study's goals.

Types of Data

There are two forms of data: primary, or newly generated data, and secondary, data generated within existing studies (Creswell 2005). This study required the acquisition of primary data, creating the need for relevant instruments. A survey with five open-ended questions was created and subsequently administered to 548 employees working at Google Zurich. This was done in order to explore the perceptions of Google employees regarding the environment in which they work, with a focus on factors that affect knowledge sharing.

Methods of Data Collection

The qualitative data analysis employed a Content Analysis technique to reveal participant perceptions of their work environment. The survey questions were designed to explore employee perceptions regarding the following dimensions:

1) Activities that allow for increased exchange of knowledge;

2) Advantages of frequent interaction with colleagues;

3) Individuals or groups dependent on the frequent interaction with co-workers or group members;

4) Factors that facilitate interaction within the workplace; and

5) Factors that inhibit interaction with others in the workplace.

Survey participants responded to five open-ended questions and rated their answers using a five-point Likert scale on which 5 was ‘most important’. Using a Content Analysis approach (Creswell 2005; Leedy and Ormrod 2005; Neuendorf 2002), the survey responses were analysed. Content Analysis is a qualitative data-reduction method that generates categories from key words and phrases in the response text; it is an evidence-based process in which data gathered through an exploratory approach are systematically analysed with predictive or inferential intent (Creswell 2005). Content Analysis was used to identify themes and common concepts in participants' perceptions of the culturally and environmentally distinctive factors that affect interaction in the workplace (Neuendorf, 2002). This process permitted the investigator to quantify and analyse the data so that inferences could be drawn.

The survey response text was categorically coded to reflect various levels of analysis, including key components, words, sentences, and themes (Neuendorf 2002). These themes and key components were then examined using relational analysis to determine whether there were any relationships between the responses of the subjects. The analysis was conducted with NVivo8® software, which enables sorting, categorising, and frequency counts of invariant constituents (relevant responses). Content Analysis was used to critically evaluate the survey responses of the study participants, providing in-depth information regarding the factors related to workplace interaction.
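The coding-and-counting step described above can be sketched in a few lines. The study itself used NVivo; the keyword-to-category mapping below is hypothetical and chosen purely for illustration.

```python
# Minimal sketch of the frequency-counting step of Content Analysis:
# each free-text response is assigned to every coded category whose
# keyword it mentions, and category frequencies are tallied.
from collections import Counter

CATEGORIES = {                      # keyword -> coded category (hypothetical)
    "meeting": "meetings",
    "whiteboard": "whiteboard discussions",
    "email": "email",
    "video": "video conferencing",
}

def code_responses(responses):
    """Count how many responses mention each category's keyword."""
    counts = Counter()
    for text in responses:
        lowered = text.lower()
        for keyword, category in CATEGORIES.items():
            if keyword in lowered:
                counts[category] += 1
    return counts

sample = [
    "Informal meetings at my desk",
    "Whiteboard sessions after stand-up meetings",
    "Email and video calls with remote teams",
]
print(code_responses(sample))
```

A dedicated tool such as NVivo adds manual code review, overlapping codes, and relational analysis on top of this basic tally, but the frequency tables that follow are conceptually the output of a step like this one.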

Sample Respondent Characteristics

The invited population consisted of 675 individuals and a total of 548 individuals participated in the survey resulting in a response rate of 81 per cent. Of these 548 completed surveys, 35 responses were discarded because the respondents only partially completed the survey. The final sample consisted of 513 respondents. The key characteristics of these respondents are summarized in Table 4-1.

Table 4-1 Sample Respondent Characteristics

[Table 4-1 was damaged in extraction; most cell counts were lost. The recoverable structure: respondents were profiled by education (High School, Bachelor Degree, Certificate Degree, Master Degree, PhD Degree), tenure (< 2 years, 2-5 years, > 5 years: 153), time using the building (< 1 year, 1 year, 2 years, > 2 years: 140), time at current desk (< 3 months, 3-6 months, 7-12 months, > 12 months: 143), age (< 20, 21-30, 31-40, 41-50, > 50 years: 0), mobility (Zurich Office, Other Google Office, Home Office), department (e.g. Sales and Marketing), and country (United States, United Kingdom, Russian Federation, and countries with < 10 respondents: 73).]
Survey Findings

In order to provide an audit trail of participant responses to the thematic categories that emerged from the data analysis, discussion of the findings precedes the tables of data, within a framework consisting of the five survey questions. An overall summary is provided at the conclusion of the discussion of findings. During the analysis of data, common invariant constituents (relevant responses) were categorically coded and associated frequencies were documented. Frequency data included overall frequency of occurrence as well as frequencies based on rating level (5 = most important to 1 = least important). Invariant constituents with a frequency of less than 10 were not included in the tables. Study conclusions were developed through an examination of the high frequency and highly rated invariant constituents in conjunction with the revealed thematic categories.
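The tabulation procedure described above, frequencies broken down by rating with low-frequency constituents excluded, might be sketched as follows. The function name and the data values are illustrative, not drawn from the survey.

```python
# Sketch of tabulating invariant-constituent frequencies by Likert rating,
# applying the study's rule that constituents with an overall frequency
# below 10 are excluded from the tables.
from collections import defaultdict

def tabulate(coded_responses, min_frequency=10):
    """coded_responses: iterable of (constituent, rating) pairs, rating 1-5."""
    table = defaultdict(lambda: [0, 0, 0, 0, 0])   # index 0 holds rating 5
    for constituent, rating in coded_responses:
        table[constituent][5 - rating] += 1
    return {
        c: {"overall": sum(r), "by_rating_5_to_1": r}
        for c, r in table.items()
        if sum(r) >= min_frequency                  # drop low-frequency items
    }

data = [("meetings", 5)] * 7 + [("meetings", 4)] * 4 + [("email", 3)] * 3
print(tabulate(data))
```

Each row of Tables 4-2 to 4-6 has exactly this shape: an overall frequency that is the sum of the five per-rating counts.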

Question 1: Main Activities that Allow Exchange of Knowledge

Table 4-2 provides high frequency invariant constituents (relevant responses) by survey participants demonstrating themes within the data for Question 1. Thematically, the analysis revealed the following primary perceptions of participants in terms of main activities that allow knowledge exchange: (a) meetings of all types; (b) whiteboard area discussions; (c) video conferencing; (d) email, and (e) code reviews. These elements demonstrated a high frequency of importance ratings, and a moderate percentage of respondents rated these elements as ‘most important’ (rating 5). Other themes revealed through the analysis included the importance of writing and reading documentation, Instant Messaging (IM) text chat, Internet Relay Chat (IRC), and extracurricular/social activities. All other invariant constituents with a frequency of greater than 10 are shown in Table 4-2.

Table 4-2 Data Analysis Results for Question 1: Main Activities Allowing for Exchange of Knowledge

Invariant Constituent | Overall Frequency | 5 (most important) | 4 | 3 | 2 | 1
Informal discussion/face to face mtgs/stand ups | 351 | 149 | 77 | 60 | 33 | 32
Formal planned meetings/conference room mtgs | 218 | 40 | 61 | 56 | 38 | 23
Whiteboard area discussions/brainstorming | 58 | 22 | 13 | 10 | 9 | 4
Video Conferencing (VC) | 58 | 4 | 16 | 20 | 14 | 4
Code Reviews | 51 | 5 | 16 | 20 | 4 | 6
Writing/Reading Documentation | 47 | 6 | 8 | 13 | 16 | 4
IM/Text Chat/IRC | 46 | 10 | 16 | 10 | 7 | 3
“Extracurricular Activities” (e.g., pool, socializing, Friday office drinks, etc.) | 45 | 2 | 2 | 15 | 10 | 16
Writing/Reading docs specifically wiki pages/sites | 34 | 2 | 10 | 6 | 9 | 7
Chat (unspecified in person vs. text) | 33 | 8 | 7 | 8 | 7 | 3
Mailing lists | 21 | 10 | 2 | 5 | 2 | 2
Shared docs/doc collaboration | 17 | 0 | 3 | 5 | 5 | 4
Read/write design docs specifically | 12 | 0 | 2 | 5 | 0 | 5
Telephone/phone conversations | 12 | 0 | 3 | 2 | 4 | 3

Question 2: Main Advantages of Frequent Interaction with Colleagues

Table 4-3 provides high frequency invariant constituents (relevant responses) by survey participants demonstrating themes within the data for Question 2. Thematically, the analysis revealed the following elements representing the primary perceptions of participants in terms of the main advantages of frequent interaction with colleagues: (a) knowledge and information exchange and transfer; (b) staying current on projects and processes; (c) social interaction; (d) learning from others; (e) faster problem resolution; (f) efficient collaboration; and (g) continuous and early feedback. Themes that received a high frequency of importance ratings and a large percentage of ‘most important’ and ‘important’ ratings (5 and 4, respectively) included knowledge sharing, staying in touch and up to date, learning from others, faster resolution and problem solving, better collaboration, and feedback. Although socialising was revealed to be a strong overall theme, it also demonstrated lower importance ratings. Other themes revealed through the analysis are provided in Table 4-3.

Table 4-3 Data Analysis Results for Question 2: Main Advantages of Frequent Interaction

Invariant Constituent | Overall Frequency | 5 (most important) | 4 | 3 | 2 | 1
Knowledge sharing/exchange of information/Knowledge transfer | 149 | 75 | 39 | 19 | 12 | 4
Staying in touch/up to date/more info on projects and processes | 113 | 58 | 28 | 17 | 8 | 2
Socializing/social interaction | 74 | 5 | 10 | 35 | 18 | 6
Learning/learning from others/learning new things/increased knowledge base | 72 | 17 | 28 | 14 | 8 | 5
Understand problems/needs – faster resolution and quicker problem solving | 70 | 25 | 24 | 11 | 4 | 6
Better/more efficient collaboration | 67 | 42 | 8 | 9 | 5 | 3
Feedback/continuous feedback/early feedback | 66 | 17 | 29 | 8 | 9 | 3
New and better ideas/flow of ideas/creativity/brainstorming | 65 | 25 | 15 | 14 | 7 | 4
Teamwork/being part of a team/teambuilding | 51 | 10 | 12 | 18 | 9 | 2
Get work done/efficiency/speed | 46 | 26 | 13 | 2 | 4 | 1
Better understanding of what others are doing and how/workloads | 44 | 15 | 17 | 10 | 0 | 2
Everyone on same page/shared vision/focus on goals of team | 32 | 10 | 9 | 6 | 5 | 2
Better personal contact and easy interaction | 27 | 5 | 6 | 11 | 2 | 3
Avoid misunderstanding/work duplication | 27 | 8 | 10 | 4 | 4 | 1
Helping others/getting help (when stuck) | 26 | 3 | 9 | 10 | 3 | 1
Good/happy atmosphere/work environment | 24 | 1 | 2 | 8 | 5 | 8
Motivate each other/inspiration | 21 | 5 | 1 | 5 | 8 | 2
Other/new perspectives/viewpoints | 18 | 2 | 10 | 3 | 1 | 2
Improving quality of work/performance | 16 | 1 | 5 | 9 | 1 | 0
Work synchronization | 16 | 2 | 8 | 1 | 4 | 1
Knowing latest news/innovations | 12 | 0 | 3 | 2 | 1 | 6
Better communication | 10 | 1 | 1 | 5 | 2 | 1

Question 3: Individuals or Groups that are Dependent on Frequent Interaction

Table 4-4 provides high frequency invariant constituents (relevant responses) given by survey participants demonstrating themes within the data for Question 3. Thematically, the analysis revealed the following elements representing the primary perceptions of participants in terms of individuals or groups that are dependent on frequent interaction of the participant: (a) my team/project teammates/peers; and (b) managers. The first theme demonstrated a high frequency of importance ratings with a moderate percentage of ‘most important’ and ‘important’ ratings (rating 5 and 4, respectively). Although the theme of managers was revealed to be a relatively strong overall theme, it also demonstrated lower importance ratings. Other themes revealed through the analysis are shown in Table 4-4.

Table 4-4 Data Analysis Results for Question 3: Individual/groups dependent on frequent interaction of participant

Invariant Constituent | Overall Frequency | 5 (most important) | 4 | 3 | 2 | 1
My team/project teammates/peers | 128 | 87 | 19 | 14 | 3 | 5
All reports/related teams | 34 | 7 | 17 | 4 | 4 | 2
Engineering teams (various) | 28 | 18 | 8 | 2 | 0 | 0
Recruiting team/staffing | 17 | 5 | 3 | 6 | 3 | 0
Geo Teams | 15 | 7 | 6 | 2 | 0 | 0
Operations teams | 14 | 2 | 3 | 5 | 2 | 2
All of them | 11 | 9 | 1 | 0 | 1 | 0
Other engineers using my project/peer developers of my tool | 10 | 1 | 5 | 3 | 1 | 0

Question 4: Factors Facilitating Easy Interaction

Table 4-5 provides high frequency invariant constituents (relevant responses) by survey participants demonstrating themes within the data for Question 4. Thematically, the analysis revealed the following elements representing the primary perceptions of participants about factors that facilitate easy interaction: (a) common, proximal, and open workspace areas; (b) common functional areas; (c) sufficient and available meeting facilities; (d) excellent communication tools; and (e) video conference facilities. The theme of open and common workspace areas/shared office space demonstrated a high frequency of importance ratings with a very large percentage of ‘most important’ ratings (rating 5). Other revealed themes, particularly the second listed theme, demonstrated relatively high overall frequency, but these themes did not demonstrate the strength of importance that the first theme did. Other themes and invariant constituents revealed through the analysis are shown in Table 4-5.

Table 4-5 Data Analysis Results for Question 4: Factors Facilitating Easy Interaction

Invariant Constituent | Overall Frequency | 5 (most important) | 4 | 3 | 2 | 1
Open and Common workspace areas/shared office space/desk locations/sitting together | 175 | 103 | 34 | 25 | 9 | 4
Common shared Areas (e.g., Kitchen, play/game rooms, lounges, library, etc.) | 173 | 40 | 66 | 42 | 17 | 8
Enough facilities for meetings/availability of meeting and conference areas | 90 | 19 | 27 | 30 | 12 | 2
Great communication tools (email, VC, chats, dist. lists, online docs, wireless, VPN, mobile…) | 80 | 11 | 30 | 14 | 18 | 7
Video Conference meeting rooms/facilities | 78 | 19 | 25 | 18 | 12 | 4
Onsite lunch/dinner/common dining area (free food and eating together) | 50 | 7 | 15 | 11 | 13 | 4
Whiteboard areas for informal meetings | 43 | 10 | 18 | 7 | 7 | 1
Corporate culture/open culture/open communication culture | 43 | 18 | 11 | 9 | 3 | 2
Casual and social environment/open atmosphere | 36 | 19 | 5 | 9 | 2 | 1
People: easy going, friendly, smart, knowledgeable, willing to help | 35 | 14 | 9 | 3 | 3 | 6
Social Events | 28 | 3 | 6 | 5 | 7 | 7
Company calendar/planned ops for meeting/scheduled meetings | 19 | 3 | 7 | 6 | 2 | 1
Geographic co-location/same time zone | 13 | 7 | 4 | 2 | 0 | 0
Travel/trips to other offices | 12 | 1 | 2 | 1 | 3 | 5
Chat (non-specific text or in person) | 11 | 2 | 4 | 3 | 0 | 2
IM/internet chat | 10 | 5 | 1 | 1 | 1 | 2
MOMA/social networking/wiki pages/company docs | 10 | 1 | 0 | 3 | 4 | 2

Question 5: Factors Inhibiting Interaction with Others

Table 4-6 provides high frequency invariant constituents (relevant responses) by survey participants demonstrating themes within the data for Question 5. Thematically, the analysis revealed a single strong element and several elements with less relevance as inhibiting factors. The physical geographic differences – specifically the time zone differences – were noted by a majority of participants as the most important element that inhibited interaction with others. Study participants perceived their overscheduled and busy work lives, noise levels in their workspaces, and shared work environments to be contributing inhibitory factors with regard to interaction with others. These elements also demonstrated high frequencies of importance ratings with a moderate percentage of ‘most important’ ratings (rating 5). Other themes revealed through the analysis are shown in Table 4-6.

Table 4-6 Data Analysis Results for Question 5: Factors Inhibiting Interaction with Others

Invariant Constituent | Overall Frequency | 5 (most important) | 4 | 3 | 2 | 1
Physical Geographic distance/timezone differences | 164 | 115 | 36 | 9 | 3 | 1
Very busy/Overscheduled people/overbooked calendars/too many meetings | 45 | 17 | 16 | 10 | 2 | 0
Crowded/noisy environment/noise in shared space | 33 | 19 | 6 | 4 | 4 | 0
Defective VCs/VC suboptimal/VC equipment not working | 25 | 9 | 7 | 7 | 2 | 0
No meeting rooms available | 22 | 8 | 6 | 6 | 2 | 0
Too few VC rooms in some locations/lack of available VC rooms | 19 | 4 | 9 | 5 | 0 | 1
Open Space: no privacy, interruptions/disruptions | 19 | 5 | 8 | 3 | 2 | 1
Information overload/too much email | 15 | 6 | 2 | 6 | 1 | 0
Large office building/building size and layout/too many people, difficult to find people | 15 | 11 | 4 | 0 | 0 | 0
Team split between multiple sites or large distance between team members in same bldg | 15 | 4 | 5 | 4 | 2 | 0
Need more whiteboards/lack of informal areas with whiteboards | 11 | 3 | 5 | 2 | 1 | 0
Language barrier: lack of correct English/not knowing colloquial lang. or nuances | 11 | 5 | 1 | 3 | 1 | 1
Lack of time/deadlines | 11 | 5 | 2 | 1 | 2 | 1
Different working hours within same time zone | 10 | 5 | 3 | 2 | 0 | 0


Discussion

Both the literature and the survey have illuminated interesting facets of the work environment and the need for personal communication. The analysis of the 513 participants' responses to the five open-ended survey questions revealed patterns of facilitating and inhibiting factors in their work environment. Nonaka (2011) illustrates this point with the argument that the communal environment promotes a standard of communication not found in the technological alternatives. Further, the shift in orientation from the organisation to the person provides a fundamental benefit to every employee (Becker 2004), and with a rising recognition of individual value, the organisation builds employee trust.

Participants in this study preferred frequent, informal opportunities for the exchange of knowledge, with the opportunity for growth centred on the capacity to exchange concepts in a free and easy manner (Nonaka 2011). The evidence presented demonstrates that these opportunities were most valued by team members with high knowledge-exchange needs. This is in line with the increased depth of knowledge and ability to meet technical needs through employee communication (Tallman et al 2010); a combination of professional advice can benefit the entire production and development process. Transactions among participants were often brief and were perceived to require limited space, often just stand-up space, with noise-regulating options not found in open-office environments. Dalkir (2012) demonstrates that the environment can add to or detract from employee communication, making this factor a critical consideration. Spontaneous and opportunistic knowledge-sharing transactions were valued, and technology provided a platform for this type of knowledge exchange to occur. This evidence from the survey corresponds with the literature illustrating that increased communication and sharing in the workplace enhances the entire operation, as well as providing fresh opportunities and innovations (Tallman et al 2010).

The research at Google provides further support for the view of some leading companies that strongly believe having workers in the same place is crucial to their success (Noorderhaven et al 2009). Yahoo's CEO Marissa Mayer communicated via a memo to employees that, from June 2013, any existing work-from-home arrangements would no longer apply. Initial studies had theorised that working from home would provide a better platform for workers, even on a local level (Dalkir 2012). Many points of the memo cited in this Yahoo example parallel the literature presented in this study. Her memo stated (Moyer 2013): “To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side.” This is clearly in line with the Cohen and Prusak (2001) assertion that the physical workplace is a critical element of the dynamic business. “That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings.” This element of her reasoning is nearly identical to the argument presented by Dalkir (2012) that successful companies succeed, in part, by promoting communication and teamwork in the office; the technical alternatives are not enough.

“Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together….Being a Yahoo isn’t just about your day-to-day job, it is about the interactions and experiences that are only possible in our offices” (Moyer 2013). This section is directly in line with emerging studies citing the vital nature of the interaction and face to face employee contact (Heerwagen et al. 2004).

This study has clearly demonstrated that Mayer is not alone in her thinking; Steve Jobs operated in a similar fashion (Davenport et al 2002). Despite being a denizen of the digital world, or maybe because he knew all too well its isolating potential, Jobs was a strong believer in face-to-face meetings. “There’s a temptation in our networked age to think that ideas can be developed by email and iChat,” he said. “That’s crazy. Creativity comes from spontaneous meetings, from random discussions. You run into someone, you ask what they’re doing, you say ‘Wow,’ and soon you’re cooking up all sorts of ideas” (Isaacson, 2011, p. 431). This assertion by Jobs closely resembles the argument of the Rhoads (2010) study, which found a clear correlation between communication capacity and the opportunity for successful innovation and progress. Following this philosophy led Jobs to have the Pixar building designed to promote encounters and unplanned collaborations. Mayer's former colleague at Google agrees (Ibid). Speaking at an event in Sydney in February 2013, Google CFO Patrick Pichette said that teleworking is not encouraged at Google, reflecting an emerging consensus that time in the office is not only valuable but necessary for sustained competition in the industry (Denstadli et al 2013). Pichette believes that working from home could isolate employees from other staff.

Companies like Apple, Yahoo! and Google are holding on to (or have started embracing) the belief that having workers in the same place is crucial to their success (Dalkir 2012). This appears to be based on the view that physical proximity can lead to casual exchanges, which in turn can lead to breakthroughs for products. As Heerwagen et al (2004) put it, “knowledge work is a highly cognitive and social activity”. Non-verbal communication is complex and involves many unconscious mechanisms, e.g. gesture, body language, posture, facial expression, eye contact, pheromones, proxemics, chronemics, haptics, and paralanguage (Denstadli et al 2013). So, although virtual interaction can be valuable, it is not a replacement for face-to-face interaction, particularly for initial meetings of individuals or teams. Furthermore, the growth of remote working has indicated that face-to-face interaction is important for motivation, team-building, mentoring, a sense of belonging and loyalty, arguably more so than in place-centred workgroups (Deprez and Tissen 2009).


Conclusion

Knowledge management has become an increasingly valuable segment of a company's resources. This study examined the practice of working remotely versus employee interaction in the workplace, yielding many illuminating findings. The early optimism that emerging technology would wholly reshape employee work habits has proven less than fully realised. The evidence in this study has consistently illustrated an environment that requires innovative, face-to-face interaction in order to maintain a competitive edge in the industry. Further, the very environment that promotes this free exchange of ideas is not adequately substituted by technology. In short, the evidence provided in this study demonstrates the advantage that the in-house employee has over the remote worker.

The impromptu encounters between employees are very often the elements needed for progress. What is clear is that, in order for a business to capitalise on its full range of available resources, face-to-face personal interaction is required to realise the firm's full potential. In the end, it is the combination of leadership, teamwork and innovation that provides business with the best environment, not necessarily how much technology is available.


References

Dalkir, K. 2005. Knowledge management in theory and practice. Amsterdam: Elsevier/Butterworth Heinemann.

Denstadli, J., Gripsrud, M., Hjorthol, R. and Julsrud, T. 2013. Videoconferencing and business air travel: Do new technologies produce new interaction patterns? Transportation Research Part C: Emerging Technologies, 29, pp. 1–13.

Nonaka, I. and Takeuchi, H. 2011. The wise leader. Harvard Business Review, 89 (5), pp. 58–67.

Noorderhaven, N. and Harzing, A. 2009. Knowledge-sharing and social interaction within MNEs. Journal of International Business Studies, 40 (5), pp. 719–741.

Rhoads, M. 2010. Face-to-Face and Computer-Mediated Communication: What Does Theory Tell Us and What Have We Learned so Far? Journal of Planning Literature, 25 (2), pp. 111–122.

Tallman, S. and Chacar, A. 2011. Knowledge Accumulation and Dissemination in MNEs: A Practice-Based Framework. Journal of Management Studies, 48 (2), pp. 278–304.


What is the impact of road safety on the design and management of road networks?


Road transport is the most common type of transportation worldwide, which inevitably means that traffic accidents, and the resulting casualties, are a regular occurrence. Further, the manufacture in recent years of cars that combine high-speed engines with poor road performance correlates directly with the occurrence of accidents. Consequently, road safety has become a common interest in all countries throughout the world. In my opinion, road safety can be improved by incorporating relevant geometric, climatic and physical considerations into the design of roads. In addition, the application of an awareness program in education and advertising plays a significant role in strengthening road safety and reducing accidents. On this basis, when building a safe road, every factor of safety should be taken into consideration at every stage of the process, including design.

The main objective of this report is to show the impact of road safety considerations on the design of roads and the management of the road network, and how the aim of decreasing road traffic accidents and casualties influences geometric design, traffic design and structural design in road construction. In particular, geometric design and traffic design are greatly influenced by road safety standards, as evidenced in the geometric design of roundabouts, junctions, and pedestrian and cyclist highways. Drawing on a specific case study, this paper will also investigate roundabout design and its interrelation with road safety; for instance, whilst roundabouts are likely safer than intersections because they reduce vehicle speed and the number of conflict points, it has been found that roundabouts with signalisation are safer for both cyclists and pedestrians. For these reasons, it is clear that the improvement of road safety requires the inclusion of safety in road design and management procedures.


The road network is a systematic structure, constructed to invariable criteria for the purpose of road transportation and designed with certain considerations (such as traffic, climate and the environment) in mind. It is used by the majority of people worldwide, which makes the volume of traffic accidents and road-related deaths and injuries unsurprising. Indeed, road deaths are a global phenomenon, numbering between 0.75 and 0.8 million annually[1]. Unfortunately, this number is rising; a 2008 publication of the World Health Organisation (“World health statistics”) estimated that traffic accidents account for 2.2% of deaths globally, and that, owing to the manufacture of car engines capable of higher speeds and economic development in developing countries, this figure is anticipated to increase dramatically to about 3.6% by 2030[2]. Likewise, road traffic accident costs are expected to increase.

There are three main factors which contribute to road traffic accidents: “road and engineering deficiencies; road user errors (“human factors”); and vehicle defects”[3]. Indeed, a UK study from the 1970s demonstrated that the human factor plays a part in 95% of accidents, whilst 28% and 8% of accidents are at least partly caused by environmental and vehicle shortcomings, respectively[4]. For these reasons, it is not logical to focus solely on one single factor. The fact that road user errors feature in the majority of accidents shows that the human factor is the principal cause of traffic accidents; however, if the construction of roads were geometrically improved, this might not be the case. Indeed, according to Restructuring road institutions, finance and management[5], engineering is one of four factors that influence road safety (along with enforcement, education, and climate). Focusing on the impact of the engineering factor on road safety improvement, the objectives of this report are:

To demonstrate and define the concept of road safety.
To explain the incorporation of safety features in road design and management.

This report consists of 6 parts: methodology; an explanation of road safety, road design, and road management; the impact of road safety factors on the geometric design and management of roads; a case study on road intersections and cyclist and pedestrian safety at roundabouts; a discussion; and finally, a conclusion.

2. Methodology

To demonstrate the effect of the road safety considerations on road design and management, this paper will investigate road intersections through a case study linked to geometric design, and then discuss the safety of cyclists and pedestrians in relation to roundabouts. See Figure 1.

3. Road safety

According to the Oxford Wordpower Dictionary[1], safety is defined as “the state of being safe; not being dangerous or in danger”, whilst road safety is defined as “the prevention of road accidents”. The purpose of roads is to provide facilities for safe travel and transport, and improved road safety can be achieved in the design and management of road networks by incorporating safety-orientated “design criteria, design values and interventions”[2]. Such an approach could not only lead to a decrease in road-related deaths and accidents, but could also make roads more accessible. Indeed, as outlined in the DTMRQ manual[3], such an outcome can be achieved through:

Improving road network safety using a risk management approach;
Designing for safer travel for all road users;
Providing safer access to the road system for cyclists and pedestrians;
Ensuring work site safety; and
Co-ordinating with other government agencies in partnership.

As stated above, road user error is the main factor contributing to road accidents. However, it has been observed that enhancing engineering design and management can influence drivers’ behavior positively and reduce the number of such errors[4]. It should be noted that no road is absolutely safe and that the safety of a road is often measured by the volume of accidents on it. For that reason, it is logical to say that the construction of a road involves the use of a nominal safety level[5].

4. Road design

According to the Oxford Wordpower Dictionary[6], design is defined as “to plan and make a drawing of how something will be made”. The three aspects of design that must be considered in the construction of roads are geometric design (which relates to physical elements such as “vertical and horizontal curves, lane widths, clearances, cross-section dimensions, etc”[7]), traffic design and structural design. Good road design standards combine these three variable aspects to produce efficient and safer roads.

4.1 Geometric design

Road geometric design involves horizontal and vertical alignment and the road cross-section, with these elements determined against road safety criteria[8]. The road accident rate is significantly influenced by these elements, meaning there is a clear relationship between road design and road safety. For example, it has been found that junctions that are geometrically designed with road safety in mind see a smaller number of road accidents. Sound geometric design can involve a reduction in the number of conflict points (through the construction of channels). Indeed, it has been found that roads with two 3.7m-wide lanes are safer than roads with a single 2.7m-wide lane[9]. In addition, the presence of a median is felt to reduce the cross-median accident rate, even where it is narrow, and the inclusion of safety fences at the outer edge of roads plays a significant role in road safety[10].

5. Road management

According to Robinson (2008)[11], road management is defined as “a process that is attempting to optimise the overall performance of the road network over time”. This involves action that affects, or can affect, road network quality and efficiency during the service lifespan, and which facilitates trade, health protection, and education by enhancing accessibility. Further, improving road efficiency, effectiveness and safety can increase economic well-being through lower commodity prices. Road management is affected by a number of factors, but the dominant one is “accident levels and costs”, which is directly related to road users and economic infrastructure[12]. As a consequence, road management action can involve the policing of vehicle speed in order to improve safety. It can also include activities conducted on the road itself and the surrounding environment, such as road maintenance. As Robinson (2008) states, the aim of road maintenance is to make roads safer, because it contributes to the geometric factors in the areas of:

Pavement and footway surface;
Carriageway marking and delineation; and
Signs, street lights and furniture.[13]

In this way, road safety can be incorporated into road management; for example, the continuous repair of pavements reduces vehicle operating costs and the rate of accidents on the road.

6. Road intersections

Road intersections are a significant part of the road network structure, and in spite of their simple function, they account for more than 20% of fatal road accidents in the EU[14]; and even though it has been reported that about 31% of serious accidents occur in non-built-up areas, 65% of junction accidents in the UK in 1984 occurred in built-up areas[15]. According to the Federal Highway Administration (2006)[16], road intersection safety has become a considerable problem in the USA because more than 45% of the approximately 2.7 million crashes that occurred there in 2004 happened at junctions. Unfortunately, although junction design and traffic standards have generally improved significantly, this has not produced a corresponding reduction in the annual accident rate. For those reasons, the FHWA supported the concept of converting intersections to roundabouts in order to decrease the rate of accidents and to provide increased safety.

Figure: Rate of fatal casualties in the EU at junctions and other road locations.

7. Case study

A study was carried out in 8 states of the USA in 2004 covering 24 junctions before and after conversion to roundabouts. It found a 39% reduction in the overall crash rate, with 90% and 76% reductions in fatal and injury crashes, respectively[1]. See Table 1.

Table 1: Reduction in crashes following roundabout conversions at 24 U.S. junctions in 8 states, 2004 (data from FHWA, 2006)

Crash category     Reduction in crashes (%)
Overall            39
Fatal              90
Injury             76


The reduction in the level of road traffic accidents in the case study shows that replacing junctions with roundabouts is a logical decision in the USA, because such a course of action clearly increases overall safety. Unfortunately, the study sample is small and does not cover all safety aspects; in particular, cyclist and pedestrian safety is not clarified, because the crashes categorised were based only on motor vehicles. It should be noted that approximately 75% of cyclist accidents occur at roundabouts[2]. For that reason, the impact of roundabouts on pedestrians and cyclists is worthy of investigation.
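The crash-rate changes reported in the case study are simple before/after percentage reductions. As a minimal sketch, the before/after crash counts below are hypothetical, chosen only so the computed values match the reported 39%, 90% and 76% figures:

```python
def percent_reduction(before: int, after: int) -> float:
    """Percentage reduction in crash counts after a roundabout conversion."""
    return round((before - after) / before * 100, 1)

# Hypothetical counts (not the actual FHWA data) chosen to reproduce the
# reported reductions: 39% overall, 90% fatal, 76% injury.
print(percent_reduction(100, 61))  # overall crashes -> 39.0
print(percent_reduction(10, 1))    # fatal crashes   -> 90.0
print(percent_reduction(100, 24))  # injury crashes  -> 76.0
```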

7.1 Roundabout and road design

According to Fortuijn (2003)[3], the majority of cyclist-car accidents occur when a cyclist is circulating in the roundabout and a car either enters or exits it. It has also been said that roundabouts characterised by significant design features (e.g. a requirement to reduce vehicle speed to 30 mph, use of a central island, a right-angle connection between roadways and circular roadways, or a right-of-way traffic movement) serve to reduce crash rates and cyclist accidents. Another characteristic that improves road safety at roundabouts is the reduction of conflict points to about a quarter of the number found at other junctions.

7.2 Roundabout and road management

Modern roundabouts are recognised for high capacity, low speed, and non-use of signalisation. The use of roundabout signalisation is typically dependent on traffic volume and safety. Nevertheless, roundabouts that do not use signalisation are still safer than junctions[1]. Further, the maintenance of traffic signs, lights and the pavement surface serves to increase road service life and safety.

The manufacture of vehicles with higher-speed engines may reduce the efficiency of roundabouts and increase the safety hazards to cyclists and pedestrians, especially at times of high traffic volume. According to the findings of the London Road Safety Unit (2003)[2], roundabouts with signalisation are safer for both cyclists and pedestrians, based on a study conducted in 2003 on a number of roundabouts before and after signalisation.


This report has sought to demonstrate the impact of road safety on road design and management by defining and analysing the relevant concepts, with particular attention paid to cyclist and pedestrian safety. The following points were also concluded:

Road accidents occur due to three main factors: road users, the environment and engineering.
The level of road safety measures that are utilised depends on the volume of accidents.
Safety considerations are incorporated into both road design and road management.
Road safety is improved through road maintenance.
Roundabouts typically serve to reduce vehicle speed and conflict points, which in turn can reduce the road accident rate and increase the safety of cyclists and pedestrians.
Road design and management play a significant role in enhancing road safety through the interaction of safety criteria with road efficiency.
Signalisation at roundabouts can increase the safety of cyclists and pedestrians, and a cyclist right of way can reduce the rate of car-cyclist accidents.

Robinson, R. and Thagesen, B. (2004). Road engineering for development. 2nd ed. London: Taylor & Francis.

Moller, M. and Hels, T. (2008). Cyclists’ perception of risk in roundabouts. Accident Analysis & Prevention, 40(3), pp. 1055–1062. [online] [Accessed 19 October 2013].

Fortuijn, L. G. H. (2003). Pedestrian and bicycle-friendly roundabouts: dilemma of comfort and safety. [online] Delft: Delft University of Technology, The Netherlands. [Accessed 19 October 2013].

Antoniou, C., Tsakiri, M. and Yannis, G. (2012). Road safety improvements in junctions using 3D laser scanning. [online] [Accessed 16 October 2013].

DTMRQ (2010). Road planning and design manual: design philosophy. [online] Brisbane: Department of Transport and Main Roads of Queensland. [Accessed 14 October 2013].

DTMRQ (2010). Road planning and design manual: road planning and design fundamentals. [online] Brisbane: Department of Transport and Main Roads of Queensland. [Accessed 17 October 2013].

FHWA (2006). Priority, market-ready technologies and innovations. Problem: intersection crashes account for more than 45 percent of all crashes nationwide. [online] U.S. Department of Transportation, Federal Highway Administration. [Accessed 18 October 2013].

Fouladvand, M. E., Sadjadi, Z. and Shaebani, M. R. (2004). Characteristics of vehicular traffic flow at a roundabout. [online] Physical Review E, 70(4), 046132. [Accessed 14 October 2013].

Grime, G. (1987). Handbook of road safety research. Bodmin: Butterworths.

Hauer, E. (1999). Safety in geometric design standards. [online] Toronto. [Accessed 17 October 2013].

London Road Safety Unit (2003). London street management. London: Street Management, Transport for London.

Ministry of Transport (1966). Roads in urban areas. London: Ministry of Transport / Scottish Development Department.

Oxford Wordpower Dictionary (2013). Oxford: Oxford University Press.

Persaud, B. N. et al. (2000). Crash reductions following installation of roundabouts in the United States. [online] [Accessed 21 October 2013].

Robinson, R. (2008). Restructuring road institutions, finance and management. Volume 1: concepts and principles. Totton: University of Birmingham.

Slinn, M., Matthews, P. and Guest, P. (2005). Traffic engineering design: principles and practice. 2nd ed. London: Arnold. [online] [Accessed 20 October 2013].

WHO (2008). World health statistics. [online] Paris: World Health Organisation. [Accessed 14 October 2013].


Research Methodology, Design and Process: Dementia Care


The ability to critically analyse literature is an important skill for evidence-based practice. This literature review aimed to critically analyse literature on dementia care. A search of the literature was conducted on academic databases such as Pubmed and CINAHL. Three studies were finally retrieved for this literature review. Each of these studies was critiqued using the Critical Appraisal Skills Programme (CASP) tool for qualitative studies and the critiquing framework of Long et al. (2002). The findings of this literature review could be used to inform current and future community nursing practice. Specifically, this review revealed that music therapy could improve the mood of individuals with dementia and showed some evidence of improving memory function. While the findings may not be applicable to a wider population, nurses could utilise them and tailor them to the individual needs of their patients.


Evidence-based practice (EBP) is heavily promoted in the NHS because it helps nurses and other healthcare practitioners apply the findings of recently published literature to their current and future practice. The Nursing and Midwifery Council’s (NMC, 2008) code of conduct also emphasises that healthcare decisions should be evidence-based and supported by published literature and current guidelines. Developing the ability to critically analyse literature is essential when developing evidence-based care (Greenhalgh, 2010; Aveyard, 2014). There is a wealth of information in published literature and current guidelines; determining the relevance and quality of findings helps nurses judge whether they are credible and valid before applying them to current practice.

As part of community nursing, I am interested in improving my current practice in order to deliver quality care to my patients. The recent policy on community care from the Department of Health (2013a) emphasises the importance of allowing patients with chronic conditions and their carers to self-manage their conditions, achieve self-efficacy and reduce hospital admissions. This policy, ‘Care in Local Communities: District Nurse Vision and Model’ (Department of Health, 2013a), emphasises the role of nurses in supporting patients and their carers to improve their health outcomes. While it is acknowledged that patients with chronic illnesses may never recover from their condition, nurses have the responsibility to help patients and their carers manage the signs and symptoms of the chronic illness. As a nurse in a community setting, I have cared for patients with dementia. I saw how this condition impacts the patient’s quality of life and even increases the risk of depression amongst their carers (Talbot and Verrinder, 2009). I have always had an interest in caring for patients with dementia. However, I noticed that most pharmacologic treatments have little effect in delaying the progression of cognitive impairments amongst these patients (Miller, 2009). These treatments are also costly and place a considerable burden on family members and the NHS (Department of Health, 2013b). Hence, I thought that familiarising myself with non-pharmacologic interventions and their effects on the cognition or memory of the patient would be important in my role as a community nurse.

A number of non-pharmacologic interventions to preserve memory or delay cognitive decline have been developed in the last two decades. Studies (Spector et al., 2010; Hansen et al., 2006; Vink et al., 2004; Teri et al., 2003) show that these interventions range from motor stimulation, exercise programmes and sensory stimulation to cognitive training. Amongst these interventions, music therapy has been suggested to be the least harmful and relatively effective. Some investigators (Fornazzari et al., 2006; Cuddy and Duffin, 2005) have shown that even in patients with severe dementia, music memory seems to be preserved. However, some studies (Menard and Belleville, 2009; Baird and Samson, 2009) suggest otherwise and explain that some patients with Alzheimer’s disease (AD) suffer from impaired music memory. One study (Baird and Samson, 2009), however, explained that procedural memory, specifically for musical stimuli, is not affected in persons with dementia. Acknowledging that most pharmacologic interventions have limited ability to treat the symptoms associated with dementia, it is essential to consider how non-pharmacologic interventions, such as music therapy, alleviate symptoms of this condition. In order to enhance my current and future nursing practice and to increase my understanding of the relevance of music therapy to dementia care, I have decided to research this topic further.

Literature Search

A search of literature from academic databases such as the Cumulative Index of Nursing and Allied Health Literature (CINAHL) and Pubmed was done to retrieve relevant studies. CINAHL indexes more than 5,000 nursing and allied health sciences journals and contains almost 4 million citations. The depth of research articles indexed in this database makes it a database of choice for research on the effects of music therapy on patients suffering from dementia. Meanwhile, Pubmed was also used to search for academic literature. This database also contains millions of citations and indexes nursing and allied health journals.

A quick search for ‘music therapy AND dementia’ was done in Pubmed since this database focuses on nursing and allied health journals. This search yielded 20 articles, most of which were available as full-text journals. The same keywords were entered in the CINAHL database; that search yielded 14 articles, with almost all available as full-text articles. A review of the abstracts of all articles was done to select only primary research studies conducted in the last five years. Polit et al. (2013) state that retrieving journal articles from the last five years ensures that the most recent evidence is used to inform current and future nursing practice. Literature older than five years may be outdated; however, restricting by date also increases the risk of excluding landmark studies (Aveyard, 2014). For the present review, the selection of studies was restricted to the last five years to ensure that more recent evidence on music therapy was evaluated and critiqued. There was also no restriction on where the studies were conducted, since dementia affects people of different ethnicities, and learning from the experiences of other nurses or healthcare practitioners on the use of music therapy for dementia patients would also help improve nursing practice in the UK. The following articles were chosen for critique and evaluation:

Simmons-Stern et al. (2012) ‘Music-based memory enhancement in Alzheimer’s disease: promise and limitations’

Sakamoto et al. (2013) ‘Comparing the effects of different individualized music interventions for elderly individuals with severe dementia’

Dermot et al. (2014) ‘The importance of music for people with dementia: the perspectives of people with dementia, family carers, staff and music therapists’

As previously stated, I am interested in how music therapy could help me assist my patients in delaying the progression of dementia and help them and their carers self-manage its signs and symptoms. Hence, all articles are relevant to my work as a community nurse. To critique these studies, the Critical Appraisal Skills Programme (CASP, 2013) tool for critiquing qualitative studies was utilised. For the quantitative studies, the critiquing framework of Long et al. (2002) was used. Both critiquing frameworks are easy to use and help researchers investigate the quality and rigour of research articles.

Study 1: Simmons-Stern et al. (2012)

A review of the title of the study shows that it reflected the main aim and objectives of the study. The title was concise and informed readers that the study aimed to present the limitations of music-based memory enhancement as well as its possible application to nursing practice. Polit et al. (2013) emphasise the importance of creating a concise and clear title in order not to mislead readers and to inform stakeholders whether the article is worth reading. A review of the authors’ backgrounds shows that all had extensive experience in dementia research and healthcare. This was essential, since the credibility of the authors’ backgrounds could increase the reliability of the study’s findings (Long et al., 2002). However, Hek and Moule (2011) emphasise that the authors’ background is not the sole criterion for assessing the credibility of a study’s findings.

The abstract of the study failed to mention the type of study design used. While the abstract summarises the aims and main findings of the study, it did not follow the usual structure of a journal abstract, in which the methodology or methods used are explicitly stated. Ellis (2010) reiterates that an abstract should provide a brief summary of the study’s background, aims and objectives, methodology, results and conclusion. Although it is difficult to determine why the researchers failed to present the methodology in the abstract, readers could have benefited from an abstract that states it. Reading the body of the article shows that a quantitative study design was used. The study aimed to investigate the effects of music on the memory of patients suffering from Alzheimer’s disease, one of the diseases grouped under dementia. Simmons-Stern et al. (2012) made excellent use of the literature, relating findings from previous studies to the current study.

Apart from the excellent use of literature, there was also a very good review of the previous studies, and a gap in the literature was clearly presented. Hence, the literature review of the study was well written and provided readers with a good background on why the present study needed to be carried out. Polit et al. (2013) emphasise that a well-written review of literature should provide context for the study’s aims and objectives and argue why the gaps in the literature need to be addressed. Importantly, Simmons-Stern et al. (2012) avoided the use of jargon when writing the paper. Burns and Grove (2013) explain that jargon should be avoided since it excludes readers of the article who have no nursing or medical background; a good paper is one that is written for a general audience and not only for a scientific community (Burns and Grove, 2013). A total of 12 participants who were diagnosed with Alzheimer’s disease and 17 healthy controls gave their informed consent to participate in the study. Brown (2009) states the importance of obtaining the informed consent of participants before commencing a study. This not only protects the rights of the participants but also ensures that nurse researchers observe the Nursing and Midwifery Council’s (NMC, 2008) code of conduct in protecting patients or participants from harm. Part of obtaining informed consent is the presentation of the study’s aims and objectives and the possible side effects or benefits of participating in the study (Brown, 2009). Informed consent also ensures that debriefing is provided to avoid any harm and psychological distress to the participants (Oermann, 2010).

Apart from obtaining informed consent, it was also crucial that an ethics committee evaluated and approved the study protocol. An evaluation of the study reveals that this was observed and an ethics committee approved the study. On reflection, the study has a very small sample size (n=12 experimental group; n=17 control group). It could have taken a randomised controlled study design, since a control group was used to compare the effects of music therapy on the patients with healthy controls; however, the investigators specifically state that the study was comparative. Inclusion and exclusion criteria were used when recruiting the patients, suggesting that participants were not randomly selected. Since the study was quantitative and employed an experimental design, random sampling of the participants would have been more appropriate (Crookes and Davies, 2004). It should be noted that randomising participants would also have been difficult, since the study was conducted in only one healthcare setting and it was crucial that participants had developed AD. While randomisation of participants was not observed, it is noteworthy that the investigators stated how many participants were excluded from the study and the reasons for their exclusion. This was essential, since failing to explain why participants who gave informed consent were later excluded from the actual experiment would make the data collection process unclear (Moule and Goodman, 2009).

Despite the small sample size, the demographic characteristics of the two groups were not significantly different when a t-test was done. There were no significant differences between the participants of the two groups in prior musical training, formal or informal, years of education, or age. This allowed the investigators to attribute any differences observed after the study to the intervention employed. After informed consent was taken, the authors of the study declared that they paid the participants for the hours spent during the study. Compensating participants for their time is considered ethical, since considerable time is taken from the subjects through their participation in the study (Hek and Moule, 2011). The interventions were clearly stated. This increased the rigour of the study, since a clearly stated research method helps other investigators replicate the methods in future studies and verify whether similar findings are obtained (Hek and Moule, 2011). Simmons-Stern et al. (2012) also specifically outlined the lyrics used, where these were obtained, and how the music memory of the participants was tested. The results section clearly presented the main findings of the study, and appropriate statistical tests were utilised to test the research hypotheses. Polit et al. (2013) emphasise that statistical tests should be appropriate to the study’s aims and objectives and should rule out biases in the interpretation of the findings.
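The demographic comparability check described above, an independent-samples t-test on each baseline variable, can be sketched briefly. The ages below are invented for illustration and are not the demographic data from Simmons-Stern et al. (2012); the sketch computes Welch's t statistic, which does not assume equal group variances:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Invented ages for an AD group (n=12) and healthy controls (n=17),
# mirroring the group sizes reported in the study but not its data.
ad_ages      = [74, 78, 81, 69, 77, 80, 72, 75, 79, 76, 73, 82]
control_ages = [72, 75, 79, 70, 74, 78, 71, 76, 73, 77, 80, 69, 74, 75, 72, 78, 76]

t = welch_t(ad_ages, control_ages)
# As a rough rule of thumb at these sample sizes, |t| below ~2 is
# consistent with no significant difference between the groups.
print(abs(t) < 2)
```

In practice a statistics package would also report the p-value; the point here is only that "not significantly different" means the baseline group means differ by less than sampling noise would predict.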

Despite having a small sample size, the researchers were able to establish that, in patients with AD, music enhances memory for sung lyrics but not for spoken stimuli. This suggests that patients with AD can enhance their memory when familiarising themselves with lyrics or listening to music, but not when they hear spoken language. There were also no significant differences between the healthy control and experimental groups in memory after hearing the lyrics of a song compared with hearing the lyrics as spoken stimuli. Since this study has a small sample size, applying the findings to a larger and more heterogeneous population would be difficult (Burns and Grove, 2013). Although a control group was used, it should be noted that participants in the experimental group were in the early stages of AD. This could have affected the findings of the study, since it is unclear whether patients with severe dementia would yield similar reactions and results. At present, the findings are applicable only to the sample population of the study and, importantly, only to individuals in the early stages of dementia. While there were several limitations, the findings are noteworthy since they show that music therapy is promising as a non-pharmacologic intervention for enhancing memory in individuals with early-stage dementia.

The conclusion of the study was clearly presented and summarises the key points of the study. Although the discussion states future areas of study, there were no clear recommendations in the conclusion. Specific recommendations could have been made at the end to help future researchers identify areas of investigation. There were also no stated implications for the practice of nurses and other healthcare practitioners. Despite the lack of clear recommendations, readers can still read through the study and identify areas that need further investigation. For example, there is a need to replicate the study in a larger, randomly selected sample population to strengthen the validity and reliability of the findings. There is also a need to compare findings with patients suffering from moderate to severe dementia to determine whether music still has similar effects on the memory of those in advanced stages of the illness. The study has a number of implications for nursing practice. Nurses can use music to help enhance memory or prevent deterioration of memory amongst individuals in the early stages of the disease. It is essential to consider the acceptability of music therapy to those suffering from dementia. As a whole, the study was of high quality and effort was made to reduce bias. Although the investigators failed to blind assessors, findings were presented objectively. It is also difficult to blind assessors given the very small sample size (Burns and Grove, 2013). All investigators were familiar with the background of the participants, and blinding them to the intervention was difficult since these investigators were also responsible for implementing the interventions. Finally, there were no conflicts of interest (Polit et al., 2013), assuring readers that bias in the presentation of findings was avoided.

Study 2: Sakamoto et al. (2013)

An evaluation of the study’s title reveals that it was concise and clearly reflects the study’s aims and objectives. This is essential (Long et al., 2002) since the title presents the main aim of the study to readers. The type of study design chosen to answer the study’s aims was also appropriate. A quantitative study design helps investigators answer the research aims and objectives through experimentation, surveys or a randomised controlled trial (RCT) (Brown, 2009). In Sakamoto et al. (2013), a randomised controlled study design was used. Compared to other quantitative study designs, an RCT reduces the risk of selection bias and bias in interpretation of findings (Moule and Goodman, 2009). Selection bias occurs when participants are not randomly selected and do not have equal chances of being assigned to the control or experimental group (Crookes and Davies, 2004). This is avoided in an RCT since all participants are randomly assigned to an experimental or control group. Bias in interpretation of findings, on the other hand, is lessened especially if investigators and assessors are blinded to the interventions and standard treatment (Oermann, 2010).

A critical analysis of the study shows that all participants were randomly assigned to the treatment and standard care groups. However, a major limitation of this study was its relatively small sample size (n=39). It would be difficult to transfer findings to a larger and more heterogeneous group because of the limited representativeness of the sample population (Ellis, 2010). While it is difficult to transfer findings to other settings because of the relatively small sample size, community nurses may consider the applicability of the findings to their own practice. It is noteworthy that it would be difficult to recruit participants in the advanced stages of dementia since their ability to give informed consent is severely limited (Department of Health, 2009). Further, their participation requires that their carers or immediate family members are aware of the study’s aims and objectives and are able to assist the participants during the study. While an ethics board approved the study and informed consent was taken from the respondents or their representatives (Burns and Grove, 2013), involving individuals who suffer from severe cognitive impairment would be difficult. This also carries some ethical issues since their ability to understand the procedures of the study is compromised (Hek and Moule, 2011). Although the Mental Health Act in the UK acknowledges that carers can act on behalf of an individual with a mental health condition, the ethics of their participation in research studies remains debatable (Department of Health, 2009).

Despite the possible ethical issues surrounding the study, its investigators used other means of evaluation to assess the participants’ responses to the interventions. For example, they used the Faces Scale (Sakamoto et al., 2013) to determine the emotions of the participants. A review of the study’s aims and objectives shows that these were clearly presented at the beginning of the study. The introduction and review of literature also made excellent use of previous studies. It is also important to note that gaps in practice in recent studies were highlighted in the literature review section (Ellis, 2010). A good literature review also argues why there is a need for the new study and how it could be applied to current healthcare practices (Ross, 2012). The methodology and methods used were also appropriate for the research question. Since the study aimed to determine the effectiveness of music therapy, it is appropriate that an RCT was used to compare music therapy with standard care. Comparing music therapy with standard care is ethical (Ross, 2012) since all patients in the study received interventions. It would be unethical to withdraw treatment or assign participants to a control group receiving no intervention (Crookes and Davies, 2004). The evaluation tools used to measure the responses of the patients were appropriate and had been previously validated and standardised. This was necessary to convey to readers that validated measurement tools were used in the study (Moule and Goodman, 2009).

A clear description of the research methods was presented. This would allow future researchers to replicate the present study (Oermann, 2010) and determine whether similar findings could be observed. It also increases the rigour of the study (Burns and Grove, 2013), since it is essential for other researchers to test the study’s hypothesis and ensure that results are consistent across different healthcare settings. Results of the study were well presented and appropriate statistical tests were used. The discussion section presented the strengths and limitations of the study. Polit et al. (2013) emphasise that presenting the limitations of a study helps inform other researchers of areas that need further improvement and highlights areas for further research. Since the weaknesses of the study were presented, readers and other healthcare practitioners can determine the extent to which the findings can be applied to current and future nursing practice (Burns and Grove, 2013). The conclusion of the study succinctly captures the main points raised in the research study. This helped the researchers identify the main highlights of the study (Ellis, 2010). However, recommendations for other researchers and areas of improvement of the study were not given. While the discussion section presented these limitations and areas for future studies, brief recommendations at the end of the study could have added rigour to the research. Importantly, there were no conflicts of interest. This assured readers that bias in the reporting of data was reduced (Ellis, 2010).

Findings of this study have important implications for nursing practice. All participants received either the passive or the interactive music intervention, while the control group received no music intervention. There was careful choice of music in the interactive group. For example, healthcare workers assigned to the interactive group helped investigators choose music for the patient participants. Music played during the intervention all had special meaning to the participants. All interventions were given individually in 30-minute sessions once a week for 10 weeks. Those in the interactive group were allowed to clap, sing or interact with the music. Meanwhile, those in the passive group only listened to the music. The music chosen for the passive group also had special meaning to the participants. Those in the control group sat in silence for 30 minutes during the once-a-week session. Interestingly, findings show that music associated with special memories led to significant changes in the parasympathetic nervous system of the participants.

Investigators note that music significantly increased relaxation of the individuals immediately after the intervention when compared to baseline data. However, this was not noted in the control group. Significant changes were also seen in the emotional states of the participants in the interactive and passive music intervention groups. Music appeared to elicit pleasant emotional states. When the passive and interactive groups were compared, the latter was significantly more relaxed following the music intervention. It should be noted that patients with severe dementia are more sensitive to environmental stimuli and may experience stress when placed in a new environment (Morris and Morris, 2010). Further, patients with cognitive impairments may express feelings of stress and fear through disruptive behaviour (Morris and Morris, 2010). The difficulty in verbalising their emotional needs could aggravate their responses to their surroundings (Department of Health, 2009). Hence, the study of Sakamoto et al. (2013) may have important implications for nursing care of patients in community settings. Nurses can encourage family members to play music that has special meaning to their loved ones suffering from dementia in order to elicit positive emotional states. The calming effect of music could be an advantage for patients cared for in home or care settings since it would not only prevent stress but also allow patients to enjoy quality of life.

Study 3: McDermott et al. (2014)

A review of the study’s title shows that it also reflects the main aims and objectives of the study. Readers can easily understand that the study explored the experiences of individuals with dementia, their carers, staff and music therapists when music interventions are employed. The CASP (2013) tool for qualitative studies contains three screening questions that should be used to determine whether a study is worth reviewing. The study of McDermott et al. (2014) suggests that music can help maintain the person’s interconnectedness and their quality of life. Findings have important implications for nursing practice since music intervention (Miller, 2009) is not costly and can yield positive results for patients suffering from early to advanced stages of dementia. Further review of the study shows that the aims and objectives of the research were clearly stated. The main aim of the study was to explore the meaning of music in the lives of individuals suffering from dementia. The investigators state that there is limited knowledge on why or how individuals find music beneficial to their wellbeing. Understanding the role of music according to the perceptions of the patients and their carers will help inform nursing practice on the relevance of music in the lives of people with dementia.

A qualitative research methodology was appropriate for the study’s aims since the research sought to interpret the subjective experiences of individuals with dementia. Parahoo (2006) emphasises that a qualitative study allows researchers to explore the experiences and perceptions of individuals in greater detail and depth. Since open-ended questions are used, investigators can use probing questions (Burns and Grove, 2013) to help participants articulate their experiences. One of the strengths of this study was the inclusion of participants’ family members, care home staff and music therapists. Individuals suffering from dementia were recruited from care homes and from among those living in the community. This allowed McDermott et al. (2014) to compare the perceptions of people with dementia living in care homes with those living in the community and to determine whether the individuals’ settings affect their experiences with music therapy. The recruitment strategy employed was also appropriate for the research aims. There was also a clear explanation of the methods of data collection. Semi-structured interviews and focus group discussions were done. The former allow researchers to investigate the perceptions of participants in more detail (Parahoo, 2006), but require more time to complete, especially when a study has many participants. A focus group discussion, on the other hand, requires few resources and can be completed in one sitting (Polit et al., 2013). However, if a dominant member is included in a focus group discussion, interactions can be limited (Burns and Grove, 2013).

This can be avoided with a facilitator who knows how to redirect the discussion to all members of the focus group. A strength of the study of McDermott et al. (2014) is the presentation of a rationale for why they used a combination of focus groups and in-depth interviews. It should also be noted that participants with dementia might display cognitive impairments, depending on the stage of their illness. Hence, requiring these patients to explain their experiences in more depth might be challenging. However, the investigators tried to mitigate this challenge by including the patients’ carers as study participants. Inclusion of carers could provide researchers with more detailed information on how music impacts the wellbeing and quality of life of the patients, since these carers are well acquainted with the individuals suffering from dementia (Miranda-Castillo et al., 2010). It is also noteworthy that music therapy was individualised to the patients in the study. This suggests that no comparison was made between the forms of music therapy received by the patients. Instead, investigators focused on the impact of music therapy on the patients’ wellbeing. In addition, the study did not take into account the differences in music interventions or whether these shaped the individuals’ reactions to music therapy. Despite the differences in music intervention, it was common for the music therapists to use songs that were well known to the patients. They also supported active music therapy with exploratory improvisation. McDermott et al. (2014), however, failed to explain what exploratory improvisation is or how it was done during music therapy.

There was also an explanation of the content of the guides used for the in-depth interviews and focus group discussions. This was essential to demonstrate the coverage of the interview guides and whether each guide reflects the aims and objectives of the study (Moule and Goodman, 2009). However, the relationship between the researchers and the participants was not thoroughly discussed. If the participants knew the investigators, this might lead to potential bias, especially if the researchers hold positions of power (Oermann, 2010). Despite this limitation, McDermott et al. (2014) emphasise that only one facilitator guided the focus group discussions. There were changes in the methods used during data collection. For instance, where a focus group discussion was initially planned, this was changed to individual interviews for the second group of patients and healthcare workers. McDermott et al. (2014) explain that the severity of the patients’ dementia was considered in the choice of data collection method. In-depth interviews were used when patients had severe dementia.

There were also sufficient details on how participants were recruited and whether ethical standards were observed. Polit et al. (2013) state that ethics in research is crucial to ensure that the rights of the participants are observed and that they are not subjected to undue stress or negative experiences during data collection. Confidentiality was also observed in the study and all participants remained anonymous. Approval was also sought from an ethics board in the community settings. Analysis of qualitative data can be extensive and time consuming (Parahoo, 2006). Informing readers how data was analysed helps increase the rigour of a qualitative study. McDermott et al. (2014) provided an in-depth description of how data was analysed. Thematic analysis was used to present the main findings of the study. There was also a clear description of how categories and themes emerged. For instance, the long-table approach was used during analysis of the data. Verbatim transcripts were used to support the main themes. This ensures the validity and credibility of the main themes generated in the study (Polit et al., 2013). Contradictory data were also taken into account. The researchers also critically examined their own roles in the research process and the potential bias that might arise during analysis of the research data.

While respondent validation was not done, the validity and credibility of the data were supported through constant comparison of categories and themes. More than one researcher was involved in the analysis of the data. Professors and doctoral students of the Doctoral Programme in Music Therapy were also consulted during thematic analysis and were involved in identifying categories. Importantly, findings were discussed with reference to the original research question. A discussion was also made of the relevance of the study to dementia care. Findings of this study suggest that music is a medium that is readily accessible to patients with dementia. Many of the patients, their carers and healthcare staff remarked that music promotes mental stimulation and is an emotionally meaningful experience. Almost all participants also remarked that song lyrics with personal meaning helped patients remember their personal history. Music is also perceived to reinforce personal and cultural identity, to promote connectedness, and to help build and sustain relationships. In addition, music has immediate effects on the mood of the patients. Most of the staff members who participated in the focus group discussions remarked that agitation of the patients decreased as a result of music therapy.

Music therapy is also shown to promote a relaxing environment in care homes. On the other hand, listening to music in the lounge area could be challenging since care home residents might have different music preferences. Hence, it would be a challenge for healthcare workers to address all the music preferences of the patients. Since the study was qualitative, transferring the findings to a larger and more heterogeneous population is not possible (Polit et al., 2013). However, other healthcare practitioners could use the findings to help build a peaceful environment for patients suffering from dementia. A further review of the study also shows that the conclusion summarises the main points raised in the study and provides recommendations for other researchers to consider in similar studies in the future.

Implications of Findings in Nursing Practice

Findings of this literature review could be used to improve nursing practice when caring for patients with dementia. All three studies (Simmons-Stern et al., 2012; Sakamoto et al., 2013; McDermott et al., 2014) included in this literature review demonstrate the impact of music therapy on patients with dementia. Music therapy could improve health outcomes and quality of life of patients from the early to the advanced stages of the disease. In the latter, patients who have difficulty communicating their needs react positively to music therapy. Many of the patients with severe dementia show less agitation when exposed to music that was relevant to them before they suffered from dementia. This suggests that music therapy could not only promote positive mood in the patients but might even reconnect them to ‘who they are’ (McDermott et al., 2014). This holds important implications for nursing practice in community settings. Music therapy could be introduced to families caring for a loved one with dementia and could be used to calm the patient, reconnect them with their family members and create an environment that is less stressful for the individual with dementia.

The type of music therapy, however, will depend on the preferences of the individual (Sakamoto et al., 2013). This is consistent with patient-centred care (Department of Health, 2009), in which patient preferences are considered when creating a care plan or introducing healthcare interventions. It is suggested that interactive music therapy (Simmons-Stern et al., 2012; Sakamoto et al., 2013) might be more effective than passive music therapy in improving the memory and mood of patients with dementia. As a community nurse, I need to be aware of the different non-pharmacologic interventions for people with dementia. I can use the findings of this review when caring for patients suffering from dementia. Music therapy is relatively easy to carry out and entails very little cost. Importantly, it has positive short- and long-term impacts on patients’ mood, memory and quality of life. Hence, considering this type of intervention could also help ease the burden of carers who provide care to these patients on a daily basis. I could use information from this literature review when conducting patient education. I can inform my patients and their family members of the benefits of music therapy and the sustainability of this type of therapy over time. I can also encourage family members to consider music therapy to help lift the mood of the patients and provide a calm environment.


Conclusion

This literature review has shown the feasibility and promise of music therapy in promoting wellbeing and improving the memory and quality of life of patients with dementia. As a community nurse, I could employ music therapy with the help of a music therapist in community settings. Families and carers could be taught how to use this type of therapy to improve the mood of the patient or to calm the individual when agitated. This type of therapy holds some promise in long-term care for people with dementia. As shown in the review, individuals with severe dementia still have the ability to respond positively to music therapy. However, consideration should still be given to the applicability of the findings of the three studies to a larger and more heterogeneous population. All studies recruited relatively small samples that might not be representative of the experiences of a wider group of people with dementia. Although this limits applicability, findings can be tailored to the needs of individual patients. Consideration should also be given to the preferences of the patients and their family members and to whether music therapy is acceptable to them. Since there is a need to practise patient-centred care, nurses have to determine whether patients or their family members are willing to employ music therapy. It should be noted that this literature review is limited to reviewing three studies. Literature on the acceptability of music therapy was not evaluated. Despite this gap in the present literature review, the positive responses generated after music therapy should help patients and their family members consider music therapy.


References

Aveyard, H. (2014) Doing a literature review in health & social care: A practical guide. 2nd ed. Berkshire: Open University Press.

Baird, A. & Samson, S. (2009) Memory for music in Alzheimer’s disease: unforgettable? Neuropsychology Review. 19(1), p. 85–101.

Brown, S. (2009) Evidence-based nursing: the research-practice connection. Sudbury Mass: Jones & Bartlett Publishers.

Burns, N. & Grove, S. (2013) The practice of nursing research: Conduct, critique and utilisation. 7th ed. St. Louis: Elsevier Saunders.

Critical Appraisal Skills Programme (2013) 10 questions to help you make sense of qualitative research. England: CASP.

Crookes, P. & Davies, S. (2004) Research into practice. Essential skills for reading and applying research in nursing and healthcare. 2nd ed. Edinburgh: Bailliere Tindall.

Cuddy, L. & Duffin, J. (2005) Music, memory, and Alzheimer’s disease: is music recognition spared in dementia, and how can it be assessed? Medical Hypotheses. 64(2), p. 229–235.

Department of Health (2013a) Care in local communities: A new vision and model for district nursing. London: Department of Health.

Department of Health (2013b) Improving care for people with dementia [Online]. Available from: (Accessed: 5 December, 2014).

Department of Health (2009) Living Well with dementia: A National Dementia Strategy. London: Department of Health.

Ellis, P. (2010) Understanding research for nursing students. Exeter: Learning Matters.

Fornazzari, L., Castle, T. & Nadkarni, S. (2006) Preservation of episodic musical memory in a pianist with Alzheimer disease. Neurology. 66(4), p. 610–611.

Greenhalgh, T. (2010) How to read a paper: the basics of evidence-based medicine. West Sussex, UK: John Wiley and Sons.

Hansen, V., Jorgensen, T. & Ortenblad, L. (2006) Massage and touch for dementia. Cochrane Database of Systematic Reviews. 4, p. CD004989.

Hek, G. & Moule, P. (2011) Making sense of research. 4th ed. London: Sage.

Long, A., Godfrey, M., Randall, T., Brettle, A. & Grant, M. (2002) Developing evidence based social care policy and practice. Part 3: Feasibility of undertaking systematic reviews in social care. Leeds: Nuffield Institute for Health.

McDermott, O., Orrell, M. & Ridder, H. (2014) The importance of music for people with dementia: the perspectives of people with dementia, family carers, staff and music therapists. Aging & Mental Health. 18(6), p. 706–716.

Menard, M. & Belleville, S. (2009) Musical and verbal memory in Alzheimer’s disease: a study of long-term and short-term memory. Brain and Cognition. 71(1), p. 38–45.

Miller, C. (2009) Nursing for wellness in older adults. Philadelphia: Lippincott Williams and Wilkins.

Miranda-Castillo, C., Woods, B., Galboda, K., Oomman, S., Olojugba, C. & Orrell, M. (2010) Unmet needs, quality of life and support networks of people with dementia living at home. Health and Quality of Life Outcomes. 8:132 doi: 10.1186/1477-7525-8-132.

Morris, G. & Morris, J. (2010) The dementia care workbook. London: McGraw-Hill International.

Moule, P. & Goodman, M. (2009) Nursing research: An introduction. London: Sage.

National Institute for Health and Clinical Excellence (NICE) (2009) Depression: The treatment and management of depression in adults. London: NICE.

Nursing and Midwifery Council (NMC) (2008) The Code: Standards of conduct, performance and ethics for nurses and midwives. London: NMC.

Oermann, M. (2010) Writing for publication in nursing. 2nd ed., Philadelphia: Lippincott Williams & Wilkins.

Parahoo, K. (2006) Nursing Research: Principles, Process and Issues. 2nd ed. New York: Palgrave Macmillan.

Polit, D., Beck, C. & Hungler, B. (2013) Essentials of nursing research, methods, appraisal and utilization. 8th ed., Philadelphia: Lippincott Williams & Wilkins.

Ross, T. (2012) A survival guide for health research methods. Maidenhead: OUP.

Sakamoto, M., Ando, H. & Tsutou, A. (2013) Comparing the effects of different individualized music interventions for elderly individuals with severe dementia. International Psychogeriatrics. 25(5), p. 775–784.

Simmons-Stern, N., Deason, R., Brandler, B., Frustace, B., O’Connor, M., Ally, B. & Budson, A. (2012) Music-based memory enhancement in Alzheimer’s disease: promise and limitations. Neuropsychologia. 50(14), p. 3295-3303.

Spector, A., Orrell, M. & Woods B. (2010) Cognitive Stimulation Therapy (CST): effects on different areas of cognitive function for people with dementia. International Journal of Geriatric Psychiatry. 25(12), p. 1253–1258.

Talbot, L. & Verrinder, G. (2009) Promoting Health: The Primary Health Care Approach. Australia: Elsevier Australia.

Teri, L., Gibbons, L., McCurry, S., Logsdon, R., Buchner, D., Barlow, W., Kukull, W., LaCroix, A., McCormick, W. & Larson, E. (2003) Exercise plus behavioral management in patients with Alzheimer disease: a randomized controlled trial. Journal of the American Medical Association. 290(15), p. 2015–2022.

Vink, A., Birks, J., Bruinsma, M. & Scholten, R. (2004) Music therapy for people with dementia. Cochrane Database of Systematic Reviews. 4, p. CD003477.


Dissertation Research Design

Sample Dissertation Methodology: Quantitative Survey Strategy

1 Research Methodology

1.1 Introduction

This research project has been one of the most thought-provoking and challenging features of the master’s course. It provides a chance to endorse, simplify, pursue and even explore new facets of one’s research topic. The research approach adopted is an important aspect in increasing the rationality of the research, according to Creswell (2007). The research ‘onion’ is a methodology developed by Saunders et al. (2003). According to the research ‘onion’, as shown in figure 4.1, the entire process takes the form of an onion comprising various layers. The research philosophy, research approaches, research strategies, time horizons and the data collection method form the different layers of the onion, each depicting a stage of the research process. The process involves peeling away one layer at a time to reach the centre, which is the actual question of the research. For this research, the philosophy of interpretivism was chosen, along with a deductive approach, mainly using quantitative techniques for data collection and analysis (Saunders et al., 2009).

The chapter details the research process adopted and continues with an explanation of the data collection and data analysis methods employed by the researcher, including a justification for the approach and method.

The sampling method used by the researcher is discussed and justified, and the chapter continues with a commentary on the limitations of the study design.

Finally, issues of observer influence are covered as part of the ethical approach to the research, and a summary of the chapter is presented.

1.2 The Research Philosophy

Research philosophy forms the outermost layer of the research ‘onion’. There are three views based on the way knowledge is developed and corroborated. Individuals or groups rely upon their individual experiences, memories and expectations to derive logic from situations occurring in society. This logic is revised over time with new experiences, which in turn leads to different interpretations. Therefore, it is essential to determine and understand the factors that impact, govern and affect the interpretations of individuals.

According to Denzin and Lincoln (2003), interpretivists believe in multiple realities. Hatch and Cunliffe (2006) describe how interpretivists try to draw meaning from realities and further create new ones in order to analyse different points of view and validate them against the academic literature. Since the aim is to interpret the thinking of ‘social actors’ and gain insights using their points of view, the results cannot be generalised (Saunders et al., 2007). Remenyi et al. (1998) described an interpretivist as one who tries to ascertain the details of a situation with the underlying motive of unearthing the logic at work behind it.

Eriksson and Kovalainen (2008) point out a flaw that researchers need to guard against when adopting interpretivism. They say that, because of the closeness of the researcher and the researched, there is a likelihood of bias in the interpretation. The solution is self-reflection.

This research attempts to ascertain a relationship, if any, between the knowledge management framework in an organisation and the behaviour resulting from knowledge management practices. This approach requires the researcher to ‘get close’ to the participants and try to throw light on their perception of reality. Thus it can be said that the researcher adopts an interpretivist philosophy.

The Research Approach

The next layer of the research ‘onion’ is the research approach. The design of the research project determines the choice of research approach adopted. If the research involves developing a theory and hypothesis (or hypotheses) and designing a research strategy to test the hypotheses, the approach is classified as deductive. On the other hand, the inductive approach involves collecting data and developing a theory based on the analysis of that data.

In an inductive approach a theory follows the data collection, whereas it is vice versa in the case of a deductive approach. According to Saunders et al. (2003), researchers in the 20th century criticised the deductive approach, stating that it establishes cause-effect links between specific variables without taking into account human interpretation. Saunders et al. (2000) suggest that the researcher should be independent of what is being observed, which the deductive approach dictates. Robson (1993) suggests that the deductive approach is a theory-testing practice which arises from an established theory or generalisation and tries to validate the theory in the context of specific instances.

According to Jashapara (2004), Knowledge Management, the central topic of the research, has been around since ancient Greece and Rome; he further mentions that knowledge management is growing exponentially, with a large body of literature available. Creswell (1994) suggests that a deductive approach is better suited to such a scenario. Since the data collection for this research involves online surveys of professionals, time is a valuable commodity. In a deductive approach, data collection is less time consuming and works on a ‘one take’ basis, which is also beneficial for the participants of the survey. Following a deductive approach ensures a highly structured methodology (Giles and Johnson, 1997) and can also be the basis for future research adopting an inductive approach.

1.3 Research Strategy

The research strategy provides a rough picture of how the research question(s) will be answered. It also specifies the sources for data collection and the hindrances faced throughout the research, such as data access limitations, time constraints, and economic and ethical issues. Saunders et al. (2003) explain that strategy is concerned with the overall approach you adopt, while tactics involve details such as data collection methods (questionnaires, interviews, published data) and analysis methods. There are several strategies that can be employed, and they can be classified based on the approach, deductive or inductive, adopted.

This research adopts a deductive approach, for which the survey strategy is well suited. A large amount of data was required to determine the relationship, if any, between the constructs defined in the literature review. According to Saunders et al. (2003) and Collins and Hussey (2003), surveys allow data collection from a sizeable audience in a very cost-effective way. Surveys are mostly conducted in the form of questionnaires, as questionnaires provide standardised data that is easy to compare. One drawback is the time spent constructing and testing a questionnaire. In a survey there is also a heavy dependence on the participants to answer the questionnaire, which can cause delays. There is, in addition, a limitation on the number of questions that can be included; this limitation arises from the respondents’ perspective if the researcher wants high-quality responses from the participants.

Owing to the nature and amount of data required, the statistical analysis of the data, the time available for the research, and economic reasons, the survey strategy has been adopted for this research.

Choice of research method

According to Saunders et al. (2003), the labels ‘quantitative’ and ‘qualitative’ refer to the methods used for data collection and analysis. Quantitative research is associated with numeric data collection and analysis, while qualitative methods are inclined towards non-numeric data or data gained from inference. However, a combined approach can also be adopted, as suggested by Tashakkori and Teddlie (2003). The main advantage is that the researcher can get a different perspective while attempting to answer the research questions and can also make more reliable interpretations through ‘triangulation’ (Saunders et al. 2009).

For this research, data was collected via an online questionnaire, statistically analysed, and represented using graphs. Such number-crunching methods are generally used in business and management studies and are primarily attributed to quantitative analysis. To answer the research question, data was also collected from theories and case studies and analysed qualitatively. Presenting the analysis in a structured manner and articulating the inferences from the theories and statistical analysis could only be done by means of words (Saunders et al., 2009). By making use of qualitative methods the data could be categorised under “knowledge management environment”, “organisational knowledge behaviour” and “individual knowledge behaviour”, and with the aid of narrative an attempt was made to establish relationships, if any, between them (Saunders et al., 2009, p.516).

1.4 Time Horizons

Saunders et al. (2009) suggest that research can take either a snapshot-like or a diary-like perspective. A ‘snapshot’ horizon is termed cross-sectional, whereas the diary perspective is termed longitudinal. Further, Saunders et al. (2003) suggest that the time perspective of research (cross-sectional or longitudinal) is independent of the research strategy.

Longitudinal research is adopted when change or development occurring over a period of time is to be studied. Adam and Schvaneveldt (1991) suggest that longitudinal studies are very useful in studying human behaviour and development, but they do have a limitation when time is a constraint. In cross-sectional research, a certain phenomenon is studied at a particular point in time. This research tries to explore the relationship between the organisational environment and its effect on organisational behaviours in the context of Knowledge Management. The aim is to find the relationship at the present time, so a cross-sectional study is adopted. According to Easterby-Smith et al. (2002), surveys are preferred in cross-sectional studies. However, Robson (2002) further says that qualitative methods can also be adopted in cross-sectional studies, for example interviews carried out in a short span of time.

1.5 Secondary Data Collection

According to Saunders et al. (2003), secondary data includes both quantitative and qualitative data. Secondary data is usually used in the form of case studies or survey-based research in management and business research. Saunders et al. (2003) have classified secondary data into documentary data, compiled data and survey-based data, as shown in Figure 3.1.

For this research the primary data collection used online questionnaires; however, documentary secondary data was also used in conjunction with the primary data. The purpose of using secondary data was to explore the existing literature and the various facets of knowledge management. Documentary secondary data such as books and journal articles was used in this research to define the three constructs explained in chapter 2. Secondary data was also used to explore the literature in order to define the research question. Books by noted authors and academic journals accessed through Emerald, SwetsWise e-journals and EBSCOhost were referred to for the purpose of data collection.

The reliability and validity of secondary data relate to the methods by which the data was collected and to the source of the data. A quick assessment of the source can ensure the validity and reliability of the data. Dochartaigh (2002) suggests that testing reliability and validity means testing the authority and reputation of the source. Articles and papers found in Emerald and EBSCOhost are likely to be more reliable and trustworthy, which can be inferred from the continued existence of such organisations. Dochartaigh (2002) furthers the point of assessment by suggesting that one look for a copyright statement.

1.6 Research Sample

Saunders et al. (2003) differentiated sampling techniques into probability sampling and non-probability sampling based on their generalisability. Probability sampling means that the research question can be answered and generalised across the target population based on the responses from the sample. Since time was a constraint, owing to the busy schedules of the participants, who belonged to knowledge-intensive industries, selecting a sampling method was a challenge. According to Easterby-Smith et al. (2002), sampling methods must reduce the amount of data to be collected by focusing on the target population rather than a random sample population.

Snowball sampling was selected to ensure that the maximum number of participants could be reached. The research was carried out within 7 organisations across 5 countries. The researcher could not personally know so many professionals from IT and other knowledge-intensive industries, so a few managers were contacted who subsequently forwarded the questionnaire to others within their respective organisations, resulting in a homogeneous sample (Babbie, 2008). Managers also had to be contacted because company policies restricting external emails meant that not all participants could be addressed directly.

Since the questionnaire was targeted at the users of knowledge management tools and practices within the organisation, the researcher requested the managers to forward the questionnaire across the organisation regardless of managerial status. Sample selection continued until 20 responses from each organisation were received; 140 samples have been considered for this study.

1.7 Primary Data collection

A questionnaire is a form of data collection in which all the respondents are asked the same set of questions in a pre-set order (deVaus, 2002). Robson (2002) suggested that questionnaires are not effective in descriptive research, as such research requires many open-ended questions to be answered. If all participants interpret the questionnaire in the same manner, the data collected can be considered reliable. If the questionnaire is worded correctly, less effort is required to administer it (Jankowicz, 2000). Questionnaires can be classified as shown in the figure below; the differentiation is based on the level of interaction between the researcher and the respondents.

The research has an international orientation: the respondents are based in 5 countries, and it was not feasible for the researcher to meet each respondent, so a self-administered questionnaire was the most appropriate option. Time and monetary constraints further narrowed the survey down to an online questionnaire forwarded by email. Email offers better reliability, as respondents access their own email accounts and respond to the questionnaire themselves (Witmer et al., 1999). In this case the questionnaire was sent to the managers, who then forwarded it to their colleagues. Here an online questionnaire was the more feasible option because emails are easy to forward, unauthorised access to them is difficult, and the responses go directly to the researcher without being disclosed to or discussed with anyone else.

The questionnaire has been divided into two parts. The first part collects demographic information such as organisation location, age, tenure in the organisation and job role. The second part consists of questions related to the organisation’s knowledge management practices, knowledge behaviour and use of knowledge. The research required responses from managerial and non-managerial employees working in a knowledge-intensive environment in order to establish an accurate cause-effect relationship between KM practices, the organisational environment and the behaviour of employees. This required honest responses about the KM practices.

The Appendix shows the questions that were asked to define the relationship among the constructs defined in the literature review. A Likert scale has been used to score each question, with scores given from strongly disagree (1) to strongly agree (5). In the questionnaire, one question has been framed using negation and in reverse order; Podsakoff et al. (2003) suggest that this should be done to ensure that respondents pay attention while reading the questions. All questionnaires were returned within 72 hours. Considering the incentives and time constraints for the respondents, the questionnaire was designed so that it takes no more than 8-10 minutes to answer.
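Before analysis, the negatively worded item has to be reverse-scored so that all items point in the same direction on the 1-5 scale. A minimal sketch of how this might be done (the item names are illustrative, not taken from the actual questionnaire):

```python
# Reverse-score a negatively worded Likert item on a 1..scale_max scale:
# a response of 1 becomes 5, 2 becomes 4, and so on.
def reverse_score(response: int, scale_max: int = 5) -> int:
    """Map a 1..scale_max response to its mirror value."""
    return scale_max + 1 - response

# Hypothetical example: a respondent who 'strongly disagrees' (1) with
# the negated item is effectively 'strongly agreeing' (5) with the construct.
responses = {"q1": 4, "q2_negated": 2, "q3": 5}
responses["q2_negated"] = reverse_score(responses["q2_negated"])
print(responses["q2_negated"])  # 4
```

After this step, all item scores can be summed or averaged on a common direction.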

1.8 Data Analysis Methods

Both qualitative and quantitative data have been used in this research. Qualitative data has been used to study the literature on knowledge management and to define the constructs that form the basis of the research question. Quantitative data was collected primarily with the help of the questionnaire.

1.9 Methodological Review

Saunders et al. (2003) emphasise two aspects of data collection: validity and reliability. The validity and reliability of secondary data have been explained in SECTION. Saunders et al. (2007) suggest that in the case of a questionnaire, pilot testing should be done to ensure the validity of the questions and the reliability of the data subsequently collected. The questionnaire used for the survey was tested on a group to check the comprehensibility of the content and the logic of the questions. Bell (1999) suggests that a trial run should never be compromised, even if time is a constraint. While testing the questionnaire, respondents were asked about the time taken to complete it, any ambiguity in the questions, whether any question caused an uncomfortable or awkward state of mind, and finally the structure. Validating the questionnaire ensures that the response to each question and the motive for the question are relevant (Saunders et al. 2000).

The reliability of the questionnaire depends on the consistency of responses to the same questions. To ensure this, the questionnaire should be answered twice by the respondent at different times (Easterby-Smith et al. 2002). This may be difficult due to time constraints but should be done. Mitchell (1996) suggests that the responses to the questions should be checked for consistency within each subgroup. In this research the questionnaire has been divided into 4 sections, and during the pilot testing the responses were checked for consistency within each section to ensure reliability. The results can be generalised to an extent owing to the sample size, and inferences are drawn based on the statistical analysis. Steps have been taken to ensure the anonymity of the questionnaire so that the responses are honest and unbiased.
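The chapter does not name a specific statistic for the within-section consistency check; one common choice, assumed here purely for illustration, is Cronbach's alpha, computed per section from the pilot responses:

```python
# Assumed illustration: Cronbach's alpha for one questionnaire section.
# Rows are respondents, columns are that section's items (Likert 1-5).
def cronbach_alpha(scores):
    k = len(scores[0])            # number of items in the section
    def var(xs):                  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical pilot data: 4 respondents, 3 items in the section.
section = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [3, 3, 3],
]
alpha = cronbach_alpha(section)
print(round(alpha, 3))
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold is a rule of thumb rather than part of this study's stated method.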

The Role of International Strategy and Organizational Design

The current world economic business model rests on the world-system division of labor between the core, the marginal, and the semi-marginal countries/states. Trading is not “isolated” or “internal”; rather, it takes place in the global market, and as such this type of market is heavily affected by the dictates of globalization trends.

The system of economics and the flows and relations between these countries are “non-static” and “non-constant” over long and short periods of time, due largely to political, environmental and cultural changes vis-à-vis the evolving idea of ‘consumerism’ in the global community. The traditional concept of consumerism and the commoditization of goods is being challenged; hence, business sectors/producers should construct an effective strategy and an efficient organizational design to cope with the world economic trend and, at the same time, fulfill the organization’s/company’s objectives and visionary goals.

The success of an international company therefore lies in competitive action central to the combination of effective strategic and traditional management. We do not displace the idea of traditional management (e.g. budgeting and marketing), because its function is recognized as the core of business planning; rather, we aim to rectify and improve the company’s/organization’s business performance by targeting errors and analyzing them within the context of the global market system (or the business environment) and the capabilities (e.g. assets, facilities, resources) of the company system.

Critical to strategic management is the anticipation of changes in the economic system, in the demands of the consumers, new business technologies, competition, and (global) economic policy developments. Co-integration of the two—traditional and strategic—would give a sense of direction to the company in the globally competitive market.

What would be an apt strategic management approach in the non-static global economic system? The strategic management proposed here is a five-step schema: (1) analysis of external factors; (2) scrutiny of internal factors; (3) stratagem; (4) execution; and (5) performance assessment/evaluation.

Arguably, the logic in analyzing the external factors lies in the structural level of social formation, but we dispense with this in favor of the transnational concept, an approach that capitalizes on the importance of transnational practices in three major sectors (political, economic, cultural) with a focus on transnational corporation influence and consumerism, which has been the latest trend in global capitalism. Also, the importance of technological improvements and their incorporation into the market cannot be ignored in the analysis of external factors.

The presence of competitors and economic policies should not be underestimated. The parameters set by international laws may be restrictive, but they are nevertheless designed to facilitate a “fair” trading system; competitors for a particular commodity should also be accounted for, since globalization is heavily mandated by the transnational corporations. It is on the basis of such external factors that the company will seek to adjust and construct its stratagem.

The capacity of the organization, its parameters, its resources, its liabilities and its needs must be carefully examined. The financial status, the technology employed for the commodity, the operative management and the available facilities must be apt and competitive with those of international companies. Leadership within the system and a good workforce are important elements. The organization should seek to answer the following in respect of its internal structure: is the product globally competitive?

After assessing the internal and external factors, devising the stratagem is the next point of economic action. Goal identification and the feasibility of the plan being constructed are high on the agenda. Crucial to this are the statistics of materialization, the impact on the company/organization, and product development over a timeframe. Critical points should be well identified, as should mitigating errors, alternative plans, and the analysis and definition of jobs and responsibilities per level of the organization. The stratagem developed should have the following characteristics: (1) goal-oriented; (2) creative, a by-product of external and internal analysis; (3) strength-decisive/non-vulnerable in the market; and (4) feasible.

The execution of strategy requires organizational design, resource allocation, and strong motivation. Organizational design involves the efficient distribution of the workforce, recognizing their potential, and creating effective relations between the working people. Performance assessment is the last step and is achieved by assessing the plan’s efficiency in its means and ends. Monitoring the flow of work and assessing the statistical significance of output, as well as company growth, are important evaluation points. The importance of this step is that it is the actual, real test of the stratagem in the economic market.

Strategic management pays special attention to environmental monitoring. Such activity is inherently important for forecasting or anticipating future economic events and other related global developments which may otherwise affect the position of the organization and its products in the global economic scheme. Present and past trends, and their change over time, are central to predicting scenarios that may be of value to the company. In strategic planning, ‘predictions’ are important in that decisions are made to be flexible.

In recognizing the role of international strategy and organizational design in the global market, the organization/company takes an initial step in ‘equipping’ itself against the highly volatile network of economic world systems and becomes, at the most, competitive.


Sklair, L. (1999). Competing Conceptions of Globalization. Journal of World-Systems Research, 5, 143-162.

Aguilar, F. (1967). Scanning the business environment. NY: Macmillan, Inc.


System Analysis and Design

Assignment:
1. Describe three traditional techniques for collecting information during analysis. When might one be better than another?
2. What are the general guidelines for collecting data through observing workers?
3. What is the degree of a relationship? Give an example of each of the relationship degrees illustrated in this chapter.

Chapter Objectives

After studying this chapter, you should be able to:

- Concisely define each of the following key data-modeling terms: conceptual data model, entity-relationship diagram, entity type, entity instance, attribute, candidate key, multivalued attribute, relationship, degree, cardinality, and associative entity.
- Ask the right kinds of questions to determine data requirements for an information system.
- Draw an entity-relationship (E-R) diagram to represent common business situations.
- Explain the role of conceptual data modeling in the overall analysis and design of an information system.
- Distinguish between unary, binary and ternary relationships, and give an example of each.
- Distinguish between a relationship and an associative entity, and use associative entities in a data model when appropriate.
- Relate data modeling to process and logic modeling as different ways of describing an information system.
- Generate at least three alternative design strategies for an information system.
- Select the best design strategy using both qualitative and quantitative methods.

Chapter Preview …

In Chapter 6 you learned how to model and analyze the flow of data (data in motion) between manual or automated steps and how to show data stores (data at rest) in a data-flow diagram. Data-flow diagrams show how, where, and when data are used or changed in an information system, but they do not show the definition, structure, and relationships within the data. Data modeling, the subject of this chapter, develops this missing, and crucial, piece of the description of an information system. Systems analysts perform data modeling during the systems analysis phase, as highlighted in Figure 7-1.

Data modeling is typically done at the same time as other requirements structuring steps. Many systems developers believe that a data model is the most important part of the information system requirements statement for four reasons. First, the characteristics of data captured during data modeling are crucial in the design of databases, programs, computer screens, and printed reports. For example, facts such as these—a data element is numeric, a product can be in only one product line at a time, a line item on a customer order can never be moved to another customer order—are all essential in ensuring an information system’s data integrity.

FIGURE 7-1 Systems analysts perform data modeling during the systems analysis phase. Data modeling typically occurs in parallel with other requirements structuring steps.

Second, data rather than processes are the most complex aspects of many modern information systems. For example, transaction processing systems can have considerable complexity in validating data, reconciling errors, and coordinating the movement of data to various databases.

Management information systems (such as sales tracking), decision support systems (such as short-term cash investment), and executive support systems (such as product planning) are data intensive and require extracting data from various data sources. Third, the characteristics of data (such as format and relationships with other data) are rather permanent. In contrast, who receives which data, the format of reports, and what reports are used change constantly over time. A data model explains the inherent nature of the organization, not its transient form.

So, an information system design based on data, rather than processes or logic, should have a longer useful life. Finally, structural information about data is essential to generate programs automatically. For example, the fact that a customer order has many line items as opposed to just one affects the automatic design of a computer form in Microsoft Access for entry of customer orders. In this chapter, we discuss the key concepts of data modeling, including the most common format used for data modeling—entity-relationship (E-R) diagramming.

During the systems analysis phase of the SDLC, you use data-flow diagrams to show data in motion and E-R diagrams to show the relationships among data objects. We also illustrate E-R diagrams drawn using Microsoft’s Visio tool, highlighting this tool’s capabilities and limitations. You have now reached the point in the analysis phase where you are ready to transform all of the information you have gathered and structured into some concrete ideas about the design for the new or replacement information system. This aspect is called the design strategy. From requirements determination, you know what the current system does.

You also know what the users would like the replacement system to do. From requirements structuring, you know what forms the replacement system’s process flow and data should take, at a logical level independent of any physical implementation. To bring analysis to a conclusion, your job is to take these structured requirements and transform them into several alternative design strategies. One of these strategies will be pursued in the design phase of the life cycle. In this chapter, you learn why you need to come up with alternative design strategies and about guidelines for generating alternatives.

You then learn the different issues that must be addressed for each alternative. Once you have generated your alternatives, you will have to choose the best design strategy to pursue. We include a discussion of one technique that analysts and users often use to help them agree on the best approach for the new information system.

Conceptual Data Modeling

A conceptual data model is a representation of organizational data. The purpose of a conceptual data model is to show as many rules about the meaning and interrelationships among data as possible.

Conceptual data model: A detailed model that shows the overall structure of organizational data while being independent of any database management system or other implementation considerations.

Entity-relationship (E-R) data models are commonly used diagrams that show how data are organized in an information system. The main goal of conceptual data modeling is to create accurate E-R diagrams. As a systems analyst, you typically do conceptual data modeling at the same time as other requirements analysis and structuring steps during systems analysis.

You can use methods such as interviewing, questionnaires, and JAD sessions to collect information for conceptual data modeling. On larger systems development teams, a subset of the project team concentrates on data modeling while other team members focus attention on process or logic modeling. You develop (or use from prior systems development) a conceptual data model for the current system and build a conceptual data model that supports the scope and requirements for the proposed or enhanced system. The work of all team members is coordinated and shared through the project dictionary or repository.

As discussed in Chapter 3, this repository and associated diagrams may be maintained by a CASE tool or a specialized tool such as Microsoft’s Visio. Whether automated or manual, the process flow, decision logic, and data-model descriptions of a system must be consistent and complete, because each describes different but complementary views of the same information system. For example, the names of data stores on primitive-level DFDs often correspond to the names of data entities in entity-relationship diagrams, and the data elements in data flows on DFDs must be attributes of entities and relationships in entity-relationship diagrams.

The Process of Conceptual Data Modeling

You typically begin conceptual data modeling by developing a data model for the system being replaced, if a system exists. This phase is essential for planning the conversion of the current files or database into the database of the new system. Further, it is a good, but not a perfect, starting point for your understanding of the new system’s data requirements. Then, you build a new conceptual data model that includes all of the data requirements for the new system. You discovered these requirements from the fact-finding methods used during requirements determination.

Today, given the popularity of prototyping and other rapid development methodologies, these requirements often evolve through various iterations of a prototype, so the data model is constantly changing. Conceptual data modeling is only one kind of data modeling and database design activity done throughout the systems development process. Figure 7-2 shows the different kinds of data modeling and database design that occur during the systems development life cycle. The conceptual data-modeling methods we discuss in this chapter are suitable for various tasks in the planning and analysis phases.

These phases of the SDLC address issues of system scope, general requirements, and content. An E-R data model evolves from project identification and selection through analysis as it becomes more specific and is validated by more detailed analysis of system needs. In the design phase, the final E-R model developed in analysis is matched with designs for systems inputs and outputs and is translated into a format that enables physical data storage decisions. During physical design, specific data storage architectures are selected, and then, in implementation, files and databases are defined as the system is coded.

Through the use of the project repository, a field in a physical data record can, for example, be traced back to the conceptual data attribute that represents it on an E-R diagram. Thus, the data modeling and design steps in each of the SDLC phases are linked through the project repository.

Deliverables and Outcomes

Most organizations today do conceptual data modeling using entity-relationship modeling, which uses a special notation of rectangles, diamonds, and lines to represent as much meaning about data as possible.

Thus, the primary deliverable from the conceptual data-modeling step within the analysis phase is an entity-relationship (E-R) diagram. A sample E-R diagram appears in Figure 7-3(A). This figure shows the major categories of data (rectangles in the diagram) and the business relationships between them (lines connecting rectangles). For example, Figure 7-3(A) describes that, for the business represented, a SUPPLIER sometimes supplies ITEMs to the company, and an ITEM is always supplied by one to four SUPPLIERS.

The fact that a supplier only sometimes supplies items implies that the business wants to keep track of some suppliers without designating what they can supply. This diagram includes two names on each line, giving you explicit language to read a relationship in each direction. For simplicity, we will not typically include two names on lines in E-R diagrams in this book; however, many organizations use this standard.

FIGURE 7-2 Relationship between Data Modeling and the Systems Development Life Cycle

It is common for E-R diagrams to be developed using CASE tools or other smart drawing packages.

These tools provide functions to facilitate consistency of data models across different systems development phases, to reverse engineer an existing database definition into an E-R diagram, and to document the objects on a diagram. One popular tool is Microsoft Visio. Figure 7-3(B) shows the equivalent of Figure 7-3(A) using Visio. This diagram is developed using the Database Model Diagram tool. The Database|Options|Document settings specify the relational symbol set, conceptual names on the diagram, that optionality is shown, and that relationships are shown using the crow's foot notation with forward and inverse relationship names.

These settings cause Visio to draw an E-R diagram that most closely resembles the standards used in this text.

FIGURE 7-3 Sample Conceptual Data Model Diagrams (A) Standard E-R Notation

Some key differences distinguish the standard E-R notation illustrated in Figure 7-3(A) from the notation used in Visio, including:

• Relationships such as Supplies/Supplied by between SUPPLIER and ITEM in Figure 7-3(A) require an intermediate category of data (called SUPPLIED ITEM in Figure 7-3(B)) because Visio does not support representing these so-called many-to-many relationships. Relationships may be named in both directions, but these names appear in a text box on the relationship line, separated by a forward slash.
• Limitations, such as an ITEM always being supplied by at most four SUPPLIERs, are not shown on the diagram but rather are documented in the Miscellaneous set of Database Properties of the relationship, which are part of Visio's version of a CASE repository.
• The symbol for each category of data (e.g., SHIPMENT) includes space for listing other properties of each data category (such as all the attributes or columns of data we know about that data category); we will illustrate these components later in this chapter.

We concentrate on the traditional E-R diagramming notation in this chapter; however, we will include the equivalent Visio version on several occasions so you can see how to show data-modeling concepts in this popular database design tool. As many as four E-R diagrams may be produced and analyzed during conceptual data modeling:

1. An E-R diagram that covers just the data needed in the project's application. (This first diagram allows you to concentrate on the data requirements without being constrained or confused by unnecessary details.)

FIGURE 7-3 Sample Conceptual Data Model Diagrams (B) Visio E-R Notation

2. An E-R diagram for the application system being replaced. (Differences between this diagram and the first show what changes you have to make to convert databases to the new application.) This version is, of course, not produced if the proposed system supports a completely new business function.
3. An E-R diagram for the whole database from which the new application's data are extracted. (Because many applications share the same database or even several databases, this and the first diagram show how the new application shares the contents of more widely used databases.)
4. An E-R diagram for the whole database from which data for the application system being replaced is drawn. (Again, differences between this diagram and the third show what global database changes you have to make to implement the new application.)

Even if no system is being replaced, an understanding of the existing data systems is necessary to see where the new data will fit in or if existing data structures must change to accommodate new data.

The other deliverable from conceptual data modeling is a set of entries about data objects to be stored in the project dictionary or repository. The repository is the mechanism to link data, process, and logic models of an information system. For example, explicit links can be shown between a data model and a data-flow diagram. Some important links are briefly explained here.

• Data elements included in data flows also appear in the data model, and vice versa. You must include in the data model any raw data captured and retained in a data store. The data model can include only data that have been captured or are computed from captured data. Because a data model is a general business picture of data, both manual and automated data stores will be included.
• Each data store in a process model must relate to business objects (what we call data entities) represented in the data model. For example, in Figure 6-5, the Inventory File data store must correspond to one or several data objects on a data model.

Gathering Information for Conceptual Data Modeling

Requirements determination methods must include questions and investigations that take a data focus rather than only a process and logic focus. For example, during interviews with potential system users, you must ask specific questions to gain the perspective on data needed to develop a data model. In later sections of this chapter, we introduce some specific terminology and constructs used in data modeling. Even without this specific data-modeling language, you can begin to understand the kinds of questions that must be answered during requirements determination. These questions relate to understanding the rules and policies by which the area supported by the new information system operates.

That is, a data model explains what the organization does and what rules govern how work is performed in the organization. You do not, however, need to know how or when data are processed or used to do data modeling. You typically do data modeling from a combination of perspectives. The first perspective is called the top-down approach. It derives the data model from an intimate understanding of the nature of the business, rather than from any specific information requirements in computer displays, reports, or business forms.

Table 7-1 summarizes key questions to ask system users and business managers so that you can develop an accurate and complete data model. The questions are purposely posed in business terms. Of course, technical terms do not mean much to a business manager, so you must learn how to frame your questions in business terms. Alternatively, you can gather the information for data modeling by reviewing specific business documents—computer displays, reports, and business forms—handled within the system. This second perspective of gaining an understanding of data is often called a bottom-up approach.

These business documents will appear as data flows on DFDs and will show the data processed by the system, which probably are the data that must be maintained in the system's database. Consider, for example, Figure 7-4, which shows a customer order form used at Pine Valley Furniture. From the form in Figure 7-4, we determine that the following data must be kept in the database: ORDER NO, ORDER DATE, PROMISED DATE, PRODUCT NO, DESCRIPTION, QUANTITY ORDERED, UNIT PRICE, CUSTOMER NO, NAME, ADDRESS, and CITY-STATE-ZIP.

TABLE 7-1: Questions to Ask to Develop Accurate and Complete Data Models

1. Data entities and their descriptions: What are the subjects/objects of the business? What types of people, places, things, and materials are used or interact in this business, about which data must be maintained? How many instances of each object might exist?
2. Candidate key: What unique characteristic(s) distinguishes each object from other objects of the same type? Could this distinguishing feature change over time, or is it permanent? Could this characteristic of an object be missing even though we know the object exists?
3. Attributes and secondary keys: What characteristics describe each object? On what basis are objects referenced, selected, qualified, sorted, and categorized? What must we know about each object in order to run the business?
4. Security controls and understanding who really knows the meaning of data: How do you use these data? That is, are you the source of the data for the organization, do you refer to the data, do you modify them, and do you destroy them? Who is not permitted to use these data? Who is responsible for establishing legitimate values for these data?
5. Cardinality and time dimensions of data: Over what period of time are you interested in these data? Do you need historical trends, current "snapshot" values, and/or estimates or projections? If a characteristic of an object changes over time, must you know the obsolete values?
6. Relationships and their cardinality and degrees: What events occur that imply associations between various objects? What natural activities or transactions of the business involve handling data about several objects of the same or different type?
7. Integrity rules, minimum and maximum cardinality, time dimensions of data: Is each activity or event always handled the same way, or are there special circumstances? Can an event occur with only some of the associated objects, or must all objects be involved? Can the associations between objects change over time (e.g., employees change departments)? Are values for data characteristics limited in any way?

FIGURE 7-4 Customer Order Form Used at Pine Valley Furniture

We also see that each order is from one customer, and an order can have multiple line items, each for one product. We use this kind of understanding of an organization's operation to develop data models.

WWW NET SEARCH Investigate the origins and variations of the entity-relationship notation.

Visit http://www.pearsonhighered.com/valacich to complete an exercise related to this topic.

Introduction to Entity-Relationship Modeling

The basic entity-relationship modeling notation uses three main constructs: data entities, relationships, and their associated attributes. Several different E-R notations exist, and many CASE tools support multiple notations. For simplicity, we have adopted one common notation for this book, the so-called crow's foot notation. If you use another notation in courses or work, you should be able to easily translate between notations.

An entity-relationship diagram (or E-R diagram) is a detailed, logical, and graphical representation of the data for an organization or business area. The E-R diagram is a model of entities in the business environment, the relationships or associations among those entities, and the attributes or properties of both the entities and their relationships. A rectangle is used to represent an entity, and lines are used to represent the relationship between two or more entities. The notation for E-R diagrams appears in Figure 7-5.

FIGURE 7-5 Entity-Relationship Diagram Notations: Basic Symbols, Relationship Degree, and Relationship Cardinality

Entities

An entity is a person, place, object, event, or concept in the user environment about which the organization wishes to maintain data. As noted in Table 7-1, the first requirements determination question an analyst should ask concerns data entities. An entity has its own identity, which distinguishes it from every other entity.

Some examples of entities follow:

• Person: EMPLOYEE, STUDENT, PATIENT
• Place: STATE, REGION, COUNTRY, BRANCH
• Object: MACHINE, BUILDING, AUTOMOBILE, PRODUCT
• Event: SALE, REGISTRATION, RENEWAL
• Concept: ACCOUNT, COURSE, WORK CENTER

You need to recognize an important distinction between entity types and entity instances. An entity type is a collection of entities that share common properties or characteristics. Each entity type in an E-R model is given a name. Because the name represents a set of entities, it is singular. Also, because an entity is an object, we use a simple noun to name an entity type.

We use capital letters in naming an entity type, and in an E-R diagram, the name is placed inside a rectangle representing the entity. An entity instance (or instance) is a single occurrence of an entity type. An entity type is described just once in a data model, whereas many instances of that entity type may be represented by data stored in the database. For example, most organizations have one EMPLOYEE entity type, but hundreds (or even thousands) of instances of this entity type may be stored in the database. A common mistake made in learning to draw E-R diagrams, especially if you already know how to do data-flow diagramming, is to confuse data entities with sources/sinks, system outputs, or system users, and to confuse relationships with data flows. A simple rule to avoid such confusion is that a true data entity will have many possible instances, each with a distinguishing characteristic, as well as one or more other descriptive pieces of data. Consider the following entity types that might be associated with a church expense system:

In this situation, the church treasurer manages accounts and records expense transactions against each account. However, do we need to keep track of data about the treasurer and her supervision of accounts as part of this accounting system? The treasurer is the person entering data about accounts and expenses and making inquiries about account balances and expense transactions by category. Because the system includes only one treasurer, TREASURER data do not need to be kept. On the other hand, if each account has an account manager (e.g., a church committee chair) who is responsible for assigned accounts, then we may wish to have an ACCOUNT MANAGER entity type, with pertinent attributes as well as relationships to other entity types. In this same situation, is an expense report an entity type? Because an expense report is computed from expense transactions and account balances, it is a data flow, not an entity type. Even though multiple instances of expense reports will occur over time, the report contents are already represented by the ACCOUNT and EXPENSE entity types. Often when we refer to entity types in subsequent sections, we simply say entity.

This shorthand reference is common among data modelers. When we mean an entity instance, we will use the term entity instance explicitly.

Attributes

Each entity type has a set of attributes associated with it. An attribute is a named property or characteristic of an entity that is of interest to the organization (relationships may also have attributes, as we see in the section on relationships). Asking about attributes is the third question noted in Table 7-1 (see page 212). Following are some typical entity types and associated attributes:

STUDENT: Student_ID, Student_Name, Address, Phone_Number, Major
AUTOMOBILE: Vehicle_ID, Color, Weight, Horsepower
EMPLOYEE: Employee_ID, Employee_Name, Address, Skill

We use nouns with an initial capital letter followed by lowercase letters in naming an attribute. In E-R diagrams, we represent an attribute by placing its name inside the rectangle that represents the associated entity. In many E-R drawing tools, such as Microsoft Visio, attributes are listed within the entity rectangle under the entity name.
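The relationship between an entity type and its instances can be sketched loosely in code: a class plays the role of the entity type (defined once, named with a singular noun), and each object is an entity instance. This is an analogy for illustration, not part of the E-R notation; the attribute values below are invented.

```python
from dataclasses import dataclass

# The class is the analogue of an entity TYPE; its fields follow the
# STUDENT attribute list from the text.
@dataclass
class Student:
    student_id: str    # identifier attribute
    student_name: str
    address: str
    phone_number: str
    major: str

# Each object is an entity INSTANCE: one type, many instances.
s1 = Student("S-001", "Pat Lee", "12 Oak St", "555-0101", "MIS")
s2 = Student("S-002", "Sam Ruiz", "9 Elm Ave", "555-0102", "Finance")
students = [s1, s2]
```

The single class definition mirrors the single entity-type description in a data model, while the list of objects mirrors the many stored instances.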

Candidate Keys and Identifiers

Every entity type must have an attribute or set of attributes that distinguishes one instance from other instances of the same type. A candidate key is an attribute (or combination of attributes) that uniquely identifies each instance of an entity type. A candidate key for a STUDENT entity type might be Student_ID. Sometimes more than one attribute is required to identify a unique entity.

For example, consider the entity type GAME for a basketball league. The attribute Team_Name is clearly not a candidate key, because each team plays several games. If each team plays exactly one home game against every other team, then the combination of the attributes Home_Team and Visiting_Team is a candidate key for GAME. Some entities may have more than one candidate key. One candidate key for EMPLOYEE is Employee_ID; a second is the combination of Employee_Name and Address (assuming that no two employees with the same name live at the same address).
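The GAME example can be mimicked with a dictionary keyed by the (home_team, visiting_team) pair: the composite candidate key is what makes each lookup unambiguous, and a second game with the same key would violate uniqueness. Team names and scores are invented for illustration.

```python
# GAME instances identified by the composite candidate key
# (home_team, visiting_team): each team hosts every other team
# exactly once, so the pair uniquely identifies a game.
games = {}

def record_game(home_team, visiting_team, home_score, visiting_score):
    key = (home_team, visiting_team)
    if key in games:
        raise ValueError(f"duplicate game for key {key}")
    games[key] = (home_score, visiting_score)

record_game("Lions", "Tigers", 88, 80)
record_game("Tigers", "Lions", 75, 79)   # reversed roles: a new key
```

Reversing the two teams produces a different key, which matches the rule that each team plays exactly one home game against every other team.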

If more than one candidate key is involved, the designer must choose one of the candidate keys as the identifier. An identifier is a candidate key that has been selected to be used as the unique, identifying characteristic for an entity type. Identifiers should be selected carefully because they are critical for the integrity of data. You should apply the following identifier selection rules:

1. Choose a candidate key that will not change its value over the life of each instance of the entity type.

For example, the combination of Employee_Name and Address would probably be a poor choice as a primary key for EMPLOYEE because the values of Employee_Name and Address could easily change during an employee's term of employment.

2. Choose a candidate key such that, for each instance of the entity, the attribute is guaranteed to have valid values and not be null. To ensure valid values, you may have to include special controls in data entry and maintenance routines to eliminate the possibility of errors. If the candidate key is a combination of two or more attributes, make sure that all parts of the key have valid values.
3. Avoid the use of so-called intelligent keys, whose structure indicates classifications, locations, and other entity properties. For example, the first two digits of a key for a PART entity may indicate the warehouse location. Such codes are often modified as conditions change, which renders the primary key values invalid.
4. Consider substituting single-attribute surrogate keys for large composite keys. For example, an attribute called Game_ID could be used for the entity GAME instead of the combination of Home_Team and Visiting_Team.

For each entity, the name of the identifier is underlined on an E-R diagram.
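Rule 4, substituting a single-attribute surrogate key for a composite key, can be sketched as follows. The counter-based Game_ID format is an invented convention, not something the text prescribes.

```python
import itertools

# Surrogate key: replace the composite identifier
# (Home_Team, Visiting_Team) with a generated, meaning-free Game_ID.
_ids = itertools.count(1)

def new_game(home_team, visiting_team):
    return {"game_id": f"G{next(_ids):04d}",   # invented "G0001" style
            "home_team": home_team,
            "visiting_team": visiting_team}

g1 = new_game("Lions", "Tigers")
g2 = new_game("Tigers", "Lions")
```

Because the surrogate key carries no embedded meaning, it never becomes invalid when team assignments or other properties change, which is exactly the weakness of intelligent keys noted in rule 3.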

The following diagram shows the representation for a STUDENT entity type using E-R notation, with the equivalent representation in Microsoft Visio alongside. In the Visio notation, the primary key is listed immediately below the entity name with the notation PK, and the primary key is underlined. All required attributes (that is, an instance of STUDENT must have values for Student_ID and Name) are in bold.

Multivalued Attributes

A multivalued attribute may take on more than one value for each entity instance. Suppose that Skill is one of the attributes of EMPLOYEE.

If each employee can have more than one Skill, then it is a multivalued attribute. During conceptual design, two common special symbols or notations are used to highlight multivalued attributes. The first is to use curly brackets around the name of the multivalued attribute when the EMPLOYEE entity is diagrammed with its attributes. Many E-R drawing tools, such as Microsoft Visio, do not support multivalued attributes within an entity.

Thus, a second approach is to separate the repeating data into another entity, called a weak (or attributive) entity, and then to link the weak entity to its associated regular entity using a relationship (relationships are discussed in the next section). This approach also easily handles several attributes that repeat together, called a repeating group. Consider an EMPLOYEE and his or her dependents. Dependent name, age, and relation to employee (spouse, child, parent, etc.) are multivalued attributes about an employee, and these attributes repeat together.
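One way to see this split is as two record types: the regular entity, and a weak entity whose rows each carry the identifier of the employee that owns them. This is a sketch; the field names and sample values are assumptions.

```python
# EMPLOYEE: the regular entity, keyed by its identifier.
employees = {"E-001": {"employee_name": "Pat Lee"}}

# DEPENDENT: the weak (attributive) entity. Each row repeats the
# owning employee's identifier, modeling the one-to-many link
# from EMPLOYEE to DEPENDENT.
dependents = [
    {"employee_id": "E-001", "dep_name": "Chris", "age": 12, "relation": "child"},
    {"employee_id": "E-001", "dep_name": "Alex",  "age": 41, "relation": "spouse"},
]

def dependents_of(employee_id):
    return [d for d in dependents if d["employee_id"] == employee_id]
```

The repeating group (name, age, relation) lives entirely in the weak entity, so the EMPLOYEE record itself never has to hold multiple values for one attribute.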

We can show this repetition using an attributive entity, DEPENDENT, and a relationship, shown here simply by a line between DEPENDENT and EMPLOYEE. The crow's foot next to DEPENDENT means that many DEPENDENTs may be associated with the same EMPLOYEE. A repeating group is a set of two or more multivalued attributes that are logically related.

Relationships

Relationships are the glue that holds together the various components of an E-R model. In Table 7-1 (see page 212), questions 5, 6, and 7 deal with relationships. A relationship is an association between the instances of one or more entity types that are of interest to the organization.

An association usually means that an event has occurred or that some natural linkage exists between entity instances. For this reason, relationships are labeled with verb phrases. For example, a training department in a company is interested in tracking which training courses each of its employees has completed. This information leads to a relationship (called Completes) between the EMPLOYEE and COURSE entity types, which we diagram with a line connecting the two.
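The Completes relationship can be mimicked in code as a plain set of (employee, course) pairs, readable in either direction. The identifiers and course names below are invented.

```python
# Completes: a many-to-many relationship held as (employee, course)
# pairs; the same structure answers questions in both directions.
completes = {
    ("E-001", "Basic Algebra"),
    ("E-001", "C Programming"),
    ("E-002", "C Programming"),
}

def courses_completed_by(employee_id):
    return sorted(c for e, c in completes if e == employee_id)

def employees_who_completed(course_name):
    return sorted(e for e, c in completes if c == course_name)
```

A single pair store supports both traversals, which is the point made next about reading the relationship from either side.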

As indicated by the lines, this relationship is considered a many-to-many relationship: Each employee may complete more than one course, and each course may be completed by more than one employee. More significantly, we can use the Completes relationship to determine the specific courses that a given employee has completed. Conversely, we can determine the identity of each employee who has completed a particular course.

Conceptual Data Modeling and the E-R Model

The last section introduced the fundamentals of the E-R data modeling notation: entities, attributes, and relationships.

The goal of conceptual data modeling is to capture as much of the meaning of data as possible. The more details (or what some systems analysts call business rules) about data that we can model, the better the system we can design and build. Further, if we can include all these details in an automated repository, such as a CASE tool, and if the CASE tool can generate code for data definitions and programs, then the more we know about data, the more code can be generated automatically, making system building faster and more accurate.

More importantly, if we can keep a thorough repository of data descriptions, we can regenerate the system as needed as the business rules change. Because maintenance is the largest expense with any information system, the efficiencies gained by maintaining systems at the rule level, rather than at the code level, drastically reduce the cost. In this section, we explore more advanced concepts needed to more thoroughly model data and learn how the E-R notation represents these concepts.

WWW NET SEARCH Investigate the entity-relationship diagramming capabilities of several CASE tools. Visit http://www.pearsonhighered.com/valacich to complete an exercise related to this topic.

Degree of a Relationship

The degree of a relationship, question 6 in Table 7-1, is the number of entity types that participate in that relationship. Thus, the relationship Completes, illustrated previously, is of degree two because it involves two entity types: EMPLOYEE and COURSE. The three most common relationships in E-R diagrams are unary (degree one), binary (degree two), and ternary (degree three). Higher-degree relationships are possible, but they are rarely encountered in practice, so we restrict our discussion to these three cases.

Examples of unary, binary, and ternary relationships appear in Figure 7-6.

FIGURE 7-6 Examples of the Three Most Common Relationships in E-R Diagrams: Unary, Binary, and Ternary

Unary Relationship

A unary relationship, also called a recursive relationship, is a relationship between the instances of one entity type. Two examples are shown in Figure 7-6. In the first example, Is_married_to is shown as a one-to-one relationship between instances of the PERSON entity type.

That is, each person may be currently married to one other person. In the second example, Manages is shown as a one-to-many relationship between instances of the EMPLOYEE entity type. Using this relationship, we could identify (for example) the employees who report to a particular manager or, reading the Manages relationship in the opposite direction, who the manager is for a given employee.

Binary Relationship

A binary relationship is a relationship between instances of two entity types and is the most common type of relationship encountered in data modeling. Figure 7-6 shows three examples. The first (one-to-one) indicates that an employee is assigned one parking place, and each parking place is assigned to one employee. The second (one-to-many) indicates that a product line may contain several products, and each product belongs to only one product line. The third (many-to-many) shows that a student may register for more than one course and that each course may have many student registrants.

Ternary Relationship

A ternary relationship is a simultaneous relationship among instances of three entity types.

In the example shown in Figure 7-6, the relationship Supplies tracks the quantity of a given part that is shipped by a particular vendor to a selected warehouse. Each entity may be a one or a many participant in a ternary relationship (in Figure 7-6, all three entities are many participants). Note that a ternary relationship is not the same as three binary relationships. For example, Unit_Cost is an attribute of the Supplies relationship in Figure 7-6.

Unit_Cost cannot be properly associated with any of the three possible binary relationships among the three entity types (such as that between PART and VENDOR) because Unit_Cost is the cost of a particular PART shipped from a particular VENDOR to a particular WAREHOUSE.

Cardinalities in Relationships

Suppose that two entity types, A and B, are connected by a relationship. The cardinality of a relationship (see the fifth, sixth, and seventh questions in Table 7-1) is the number of instances of entity B that can (or must) be associated with each instance of entity A.

For example, consider the relationship between DVDs and movies: clearly, a video store may stock more than one DVD of a given movie. In the terminology we have used so far, this example is intuitively a "many" relationship. Yet, it is also true that the store may not have a single DVD of a particular movie in stock. We need a more precise notation to indicate the range of cardinalities for a relationship. This notation of relationship cardinality was introduced in Figure 7-5, which you may want to review at this point.

Minimum and Maximum Cardinalities

WWW NET SEARCH Investigate the concept of business rules (cardinality is one type of business rule). Visit http://www.pearsonhighered.com/valacich to complete an exercise related to this topic.

The minimum cardinality of a relationship is the minimum number of instances of entity B that may be associated with each instance of entity A. In the preceding example, the minimum number of DVDs available for a movie is zero, in which case we say that DVD is an optional participant in the Is_stocked_as relationship.

When the minimum cardinality of a relationship is one, then we say entity B is a mandatory participant in the relationship. The maximum cardinality is the maximum number of instances. For our example, this maximum is “many” (an unspecified number greater than one). Using the notation from Figure 7-5, we diagram this relationship as follows: The zero through the line near the DVD entity means a minimum cardinality of zero, whereas the crow’s foot notation means a “many” maximum cardinality. It is possible for the maximum cardinality to be a fixed number, not an arbitrary “many” value.
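The minimum/maximum idea can be expressed as a small validation routine. Representing an unbounded "many" maximum with None is an assumed convention, not part of the E-R notation.

```python
# Check a count of related instances against a declared
# (minimum, maximum) cardinality; maximum None means "many".
def within_cardinality(count, minimum, maximum=None):
    if count < minimum:
        return False
    if maximum is not None and count > maximum:
        return False
    return True

optional_many = (0, None)        # optional participant, "many" maximum
mandatory_one_to_four = (1, 4)   # a fixed range: at least one, at most four
```

A zero count passes the optional rule but fails the mandatory one, which is exactly the optional/mandatory distinction the text draws.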

For example, see the Supplies relationship in Figure 7-3(A), which indicates that each item involves at most four suppliers.

Associative Entities

As seen in the examples of the Supplies ternary relationship in Figure 7-6, attributes may be associated with a many-to-many relationship as well as with an entity. For example, suppose that the organization wishes to record the date (month and year) when an employee completes each course. Some sample data follow:

Employee_ID    Course_Name       Date_Completed
549-23-1948    Basic Algebra     March 2009
29-16-8407     Software Quality  June 2009
816-30-0458    Software Quality  Feb 2009
549-23-1948    C Programming     May 2009

From this limited data, you can conclude that the attribute Date_Completed is not a property of the entity EMPLOYEE (because a given employee, such as 549-23-1948, has completed courses on different dates). Nor is Date_Completed a property of COURSE, because a particular course (such as Software Quality) may be completed on different dates. Instead, Date_Completed is a property of the relationship between EMPLOYEE and COURSE.

The attribute is associated with the relationship rather than with either entity. An associative entity is an entity type that associates the instances of one or more entity types and contains attributes that are peculiar to the relationship between those entity instances. Because many-to-many and one-to-one relationships may have associated attributes, the E-R diagram poses an interesting dilemma: Is a many-to-many relationship actually an entity in disguise? Often the distinction between entity and relationship is simply a matter of how you view the data.

An associative entity is a relationship that the data modeler chooses to model as an entity type. Figure 7-7 shows the E-R notation for representing the Completes relationship as an associative entity. The lines from CERTIFICATE to the two entities are not two separate binary relationships, so they do not have labels. Note that EMPLOYEE and COURSE have mandatory one cardinality, because an instance of Completes must have an associated EMPLOYEE and COURSE. The implicit identifier of Completes is the combination of the identifiers of EMPLOYEE and COURSE, Employee_ID, and Course_ID, respectively.

The explicit identifier is Certificate_Number, as shown in Figure 7-7.

FIGURE 7-7 Example of an Associative Entity

E-R drawing tools that do not support many-to-many relationships require that any such relationship be converted into an associative entity, whether it has attributes or not. You have already seen an example of this in Figure 7-3 for Microsoft Visio, in which the Supplies/Is supplied by relationship from Figure 7-3(A) was converted in Figure 7-3(B) into the SUPPLIED ITEM entity (actually, an associative entity) and two mandatory one-to-many relationships.
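The CERTIFICATE associative entity of Figure 7-7 might be sketched as records that carry their own explicit identifier plus the identifiers of both participating entities and the relationship's own attribute. Certificate numbers and course IDs below are invented.

```python
# CERTIFICATE: the Completes relationship modeled as an associative
# entity. Each row has an explicit identifier (certificate_number),
# the identifiers of the EMPLOYEE and COURSE it associates, and the
# relationship's own attribute, date_completed.
certificates = [
    {"certificate_number": "C-100", "employee_id": "549-23-1948",
     "course_id": "ALG1", "date_completed": "March 2009"},
    {"certificate_number": "C-101", "employee_id": "549-23-1948",
     "course_id": "CPRG", "date_completed": "May 2009"},
]

def references_both_entities(rows):
    """Mandatory-one cardinality toward EMPLOYEE and COURSE: every
    certificate must reference an employee and a course."""
    return all(r["employee_id"] and r["course_id"] for r in rows)
```

The check mirrors the mandatory-one cardinality in Figure 7-7: an instance of the associative entity cannot exist without both of its associated entity instances.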

One situation in which a relationship must be turned into an associative entity is when the associative entity has other relationships with entities besides the relationship that caused its creation. For example, consider the E-R model, which represents price quotes from different vendors for purchased parts stocked by Pine Valley Furniture, shown in Figure 7-8(A). Now, suppose that we also need to know which price quote is in effect for each part shipment received. This additional data requirement necessitates that the relationship between VENDOR and PART be transformed into an associative entity.

This new relationship is represented in Figure 7-8(B).

FIGURE 7-8 An E-R Model That Represents Each Price Quote for Each Part Shipment Received by Pine Valley Furniture

In this case, PRICE QUOTE is not a ternary relationship. Rather, PRICE QUOTE is a binary many-to-many relationship (associative entity) between VENDOR and PART. In addition, each PART RECEIPT, based on Amount, has an applicable, negotiated Price. Each PART RECEIPT is for a given PART from a specific VENDOR, and the Amount of the receipt dictates the purchase price in effect by matching against the Quantity attribute.
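The rule that a receipt's Amount is matched against quoted Quantity values to find the price in effect can be sketched in a few lines. The quantity breaks, prices, and identifiers here are invented for illustration; the text only states that the match happens, not how the quotes are structured.

```python
# (vendor, part) -> list of (min_quantity, unit_price), ascending by quantity.
# Sample data only; real PRICE QUOTE instances would come from the database.
price_quotes = {
    ("V1", "P100"): [(1, 10.00), (50, 9.00), (200, 8.25)],
}

def price_in_effect(vendor, part, amount):
    """Return the negotiated unit price for a PART RECEIPT of `amount`."""
    best = None
    for min_qty, unit_price in price_quotes[(vendor, part)]:
        if amount >= min_qty:
            best = unit_price  # keep the largest break not exceeding amount
    if best is None:
        raise ValueError("amount below the smallest quoted quantity")
    return best
```

For example, a receipt of 75 units falls in the 50-unit break, so the 9.00 quote is the price in effect.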

Because the PRICE QUOTE pertains to a given PART and a given VENDOR, PART RECEIPT does not need direct relationships with these entities.

An Example of Conceptual Data Modeling at Hoosier Burger

Chapter 6 structured the process and data-flow requirements for a food-ordering system for Hoosier Burger. Figure 7-9 describes requirements for a new system using Microsoft Visio. The purpose of this system is to monitor and report changes in raw material inventory levels and to issue material orders and payments to suppliers. Thus, the central data entity for this system will be an INVENTORY ITEM, shown in Figure 7-10, corresponding to data store D1 in Figure 7-9.

FIGURE 7-9 Level-0 Data-Flow Diagram for Hoosier Burger's New Logical Inventory Control System

Changes in inventory levels are due to two types of transactions: receipt of new items from suppliers and consumption of items from sales of products. Inventory is added upon receipt of new raw materials, for which Hoosier Burger receives a supplier INVOICE (see Process 1.0 in Figure 7-9). Figure 7-10 shows that each INVOICE indicates that the supplier has sent a specific quantity of one or more INVOICE ITEMs, which correspond to Hoosier's INVENTORY ITEMs.

Inventory is used when customers order and pay for PRODUCTs. That is, Hoosier makes a SALE for one or more ITEM SALEs, each of which corresponds to a food PRODUCT. Because the real-time customer-order processing system is separate from the inventory control system, a source, STOCK-ON-HAND in Figure 7-9, represents how data flow from the order processing to the inventory control system. Finally, because food PRODUCTs are made up of various INVENTORY ITEMs (and vice versa), Hoosier maintains a RECIPE to indicate how much of each INVENTORY ITEM goes into making one PRODUCT.

From this discussion, we have identified the data entities required in a data model for the new Hoosier Burger inventory control system: INVENTORY ITEM, INVOICE, INVOICE ITEM, PRODUCT, SALE, ITEM SALE, and RECIPE. To complete the E-R diagram, we must determine the necessary relationships among these entities as well as attributes for each entity.

FIGURE 7-10 Preliminary E-R Diagram for Hoosier Burger's Inventory Control System

The wording in the previous description tells us much of what we need to know to determine relationships:

- An INVOICE includes one or more INVOICE ITEMs, each of which corresponds to an INVENTORY ITEM.

Obviously, an INVOICE ITEM cannot exist without an associated INVOICE, and over time the result will be zero-to-many receipts, or INVOICE ITEMs, for an INVENTORY ITEM.

- Each PRODUCT is associated with INVENTORY ITEMs.
- A SALE indicates that Hoosier sells one or more ITEM SALEs, each of which corresponds to a PRODUCT. An ITEM SALE cannot exist without an associated SALE, and over time the result will be zero-to-many ITEM SALEs for a PRODUCT.

Figure 7-10 shows an E-R diagram with the entities and relationships previously described. We include on this diagram two labels for each relationship, one to be read in each direction (e.g., an INVOICE Includes one-to-many INVOICE ITEMs, and an INVOICE ITEM Is_included_on exactly one INVOICE). Now that we understand the entities and relationships, we must decide which data elements are associated with the entities and associative entities in this diagram. You may wonder at this point why only the INVENTORY data store is shown in Figure 7-9 when seven entities and associative entities appear on the E-R diagram. The INVENTORY data store corresponds to the INVENTORY ITEM entity in Figure 7-10. The other entities are hidden inside other processes for which we have not shown lower-level diagrams.

In actual requirements structuring steps, you would have to match all entities with data stores: Each data store represents some subset of an E-R diagram, and each entity is included in one or more data stores. Ideally, each data store on a primitive DFD will be an individual entity. To determine data elements for an entity, we investigate data flows in and out of data stores that correspond to the data entity and supplement this information with a study of decision logic that uses or changes data about the entity. Six data flows are associated with the INVENTORY data store in Figure 7-9.

The description of each data flow in the project dictionary or repository would include the data flow's composition, which tells us what data are flowing in or out of the data store. For example, the Amounts Used data flow coming from Process 2.0 indicates how much to decrease the attribute STOCK_ON_HAND due to use of the INVENTORY ITEM to fulfill a customer sale. Thus, the Amounts Used data flow implies that Process 2.0 will first read the relevant INVENTORY ITEM record, then update its STOCK_ON_HAND attribute, and finally store the updated value in the record.
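The read/update/store sequence implied by the Amounts Used data flow can be sketched as follows. The dictionary stands in for the INVENTORY data store (D1), and the item identifiers and quantities are invented for illustration.

```python
# Toy stand-in for the INVENTORY data store; keys and values are invented.
inventory = {
    "ITM-001": {"item_description": "Hamburger buns", "stock_on_hand": 200},
    "ITM-002": {"item_description": "Ground beef, lb", "stock_on_hand": 80},
}

def apply_amount_used(store, item_id, amount_used):
    """What Process 2.0 implies: read, update STOCK_ON_HAND, store."""
    record = store[item_id]                 # read the INVENTORY ITEM record
    if amount_used > record["stock_on_hand"]:
        raise ValueError("cannot use more stock than is on hand")
    record["stock_on_hand"] -= amount_used  # update the attribute
    store[item_id] = record                 # store the updated value

apply_amount_used(inventory, "ITM-001", 24)
```

A real system would perform the same three steps against a database record rather than an in-memory dictionary.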

Each data flow would be analyzed similarly (space does not permit us to show the analysis for each data flow). After considering all data flows in and out of data stores related to data entities, plus all decision logic related to inventory control, we derive the full E-R diagram, with attributes, shown in Figure 7-11. In Visio, the ITEM SALE, RECIPE, and INVOICE ITEM entities participate in what are called identifying relationships. Thus, Visio treats them as associative entities, not just the RECIPE entity. Visio automatically includes the primary keys of the identifying entities as primary keys in the identified (associative) entities. Also note that because Visio cannot represent many-to-many relationships, there are two mandatory relationships on either side of RECIPE.

FIGURE 7-11 Final E-R Diagram for Hoosier Burger's Inventory Control System

PVF WebStore: Conceptual Data Modeling

Conceptual data modeling for an Internet-based electronic commerce application is no different from the process followed when analyzing the data needs for other types of applications. In the last chapter, you read how Jim Woo analyzed the flow of information within the WebStore and developed a data-flow diagram.

In this section, we examine the process he followed when developing the WebStore’s conceptual data model. Conceptual Data Modeling for Pine Valley Furniture’s WebStore To better understand what data would be needed within the WebStore, Jim Woo carefully reviewed the information from the JAD session and his previously developed data-flow diagram. Table 7-2 summarizes the customer and inventory information identified during the JAD session. Jim wasn’t sure whether this information was complete but knew that it was a good starting place for identifying what information the WebStore needed to capture, store, and process.

To identify additional information, he carefully studied the level-0 DFD shown in Figure 7-12. In this diagram, two data stores—Inventory and Shopping Cart—are clearly identified; both were strong candidates to become entities within the conceptual data model. Finally, Jim examined the data flows from the DFD as additional possible sources for entities. Hence, he identified five general categories of information to consider:

- Customer
- Inventory
- Order
- Shopping Cart
- Temporary User/System Messages

TABLE 7-2: Customer and Inventory Information for WebStore

Corporate Customer: Company name; Company address; Company phone; Company fax; Company preferred shipping method; Buyer name; Buyer phone; Buyer e-mail
Home Office Customer: Name; Doing business as (company's name); Address; Phone; Fax; E-mail
Student Customer: Name; School; Address; Phone; E-mail
Inventory Information: SKU; Name; Description; Finished product size; Finished product weight; Available materials; Available colors; Price; Lead time

After identifying these multiple categories of data, his next step was to define each item carefully. He again examined all data flows within the DFD and recorded each one's source and destination.

By carefully listing these flows, he could move more easily through the DFD and understand more thoroughly what information needed to move from point to point. This activity resulted in the creation of two tables that documented Jim's growing understanding of the WebStore's requirements. The first, Table 7-3, lists each of the data flows within each data category and its corresponding description. The second, Table 7-4, lists each of the unique data flows within each data category. Jim then felt ready to construct an entity-relationship diagram for the WebStore.

FIGURE 7-12 Level-0 Data-Flow Diagram for the WebStore

He concluded that Customer, Inventory, and Order were all unique entities and would be part of his E-R diagram. Recall that an entity is a person, place, or object; all three of these items meet this criterion. Because the Temporary User/System Messages data were not permanently stored items—nor were they a person, place, or object—he concluded that they should not be an entity in the conceptual data model. Alternatively, although the shopping cart was also a temporarily stored item, its contents needed to be stored for at least the duration of a customer's visit to the WebStore, and it could be considered an object.

As shown in Figure 7-12, Process 4.0, Check Out Process Order, moves the Shopping Cart contents to the Purchasing Fulfillment System, where the order details are stored. Thus, he concluded that Shopping Cart—along with Customer, Inventory, and Order—would be entities in his E-R diagram.

TABLE 7-3: Data Category, Data Flow, and Data-Flow Descriptions for the WebStore DFD

Customer Related
- Customer ID: Unique identifier for each customer (generated by Customer Tracking System)
- Customer Information: Detailed customer information (stored in Customer Tracking System)

Inventory Related
- Product Item: Unique identifier for each product item (stored in Inventory Database)
- Item Profile: Detailed product information (stored in Inventory Database)

Order Related
- Order Number: Unique identifier for an order (generated by Purchasing Fulfillment System)
- Order: Detailed order information (stored in Purchasing Fulfillment System)
- Return Code: Unique code for processing customer returns (generated by/stored in Purchasing Fulfillment System)
- Invoice: Detailed order summary statement (generated from order information stored in Purchasing Fulfillment System)
- Order Status Information: Detailed summary information on order status (stored/generated by Purchasing Fulfillment System)

Shopping Cart
- Cart ID: Unique identifier for shopping cart

Temporary User/System Messages
- Product Item Request: Request to view information on a catalog item
- Purchase Request: Request to move an item into the shopping cart
- View Cart: Request to view the contents of the shopping cart
- Items in Cart: Summary report of all shopping cart items
- Remove Item: Request to remove item from shopping cart
- Check Out: Request to check out and process order

The final step was to identify the interrelationships between these four entities. After carefully studying all the related information, he concluded the following:

1. Each Customer owns zero-to-many Shopping Cart Instances; each Shopping Cart Instance is-owned-by one-and-only-one Customer.
2. Each Shopping Cart Instance contains one-and-only-one Inventory item; each Inventory item is-contained-in zero-to-many Shopping Cart Instances.
3. Each Customer places zero-to-many Orders; each Order is-placed-by one-and-only-one Customer.
4. Each Order contains one-to-many Shopping Cart Instances; each Shopping Cart Instance is-contained-in one-and-only-one Order.
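The four relationships can be sketched as Python data structures, with each cart instance holding exactly one customer reference and one inventory reference, mirroring the one-and-only-one cardinalities. All identifiers and sample values here are invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CartInstance:
    cart_id: str
    customer_id: str                     # is-owned-by one-and-only-one Customer
    sku: str                             # contains one-and-only-one Inventory item
    order_number: Optional[str] = None   # is-contained-in at most one Order

# Invented sample data: two checked-out cart instances and one pending.
carts = [
    CartInstance("C1", "CUST-7", "SKU-100", "ORD-1"),
    CartInstance("C2", "CUST-7", "SKU-200", "ORD-1"),
    CartInstance("C3", "CUST-9", "SKU-100"),  # not yet part of an Order
]

def carts_for_customer(customer_id):
    """A Customer owns zero-to-many Shopping Cart Instances."""
    return [c.cart_id for c in carts if c.customer_id == customer_id]

def items_in_order(order_number):
    """An Order contains one-to-many Shopping Cart Instances."""
    return [c.sku for c in carts if c.order_number == order_number]
```

Note how the zero-to-many directions fall out of queries over the collection, while the one-and-only-one directions are enforced by each instance holding a single reference.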

With these relationships defined, Jim drew the E-R diagram shown in Figure 7-13. Through it, he demonstrated his understanding of the requirements, the flow of information within the WebStore, the flow of information between the WebStore and existing PVF systems, and now the conceptual data model. Over the next few hours, Jim planned to refine his understanding further by listing the specific attributes for each entity and then compare these lists with the existing inventory, customer, and order database tables. He had to make sure that all attributes were accounted for before determining a final design strategy.

TABLE 7-4: Data Category, Data Flow, and the Source/Destination of Data Flows within the WebStore DFD

Customer Related
- Customer ID: From Customer to Process 4.0; From Process 4.0 to Customer Tracking System; From Process 5.0 to Customer
- Customer Information: From Customer to Process 5.0; From Process 5.0 to Customer; From Process 5.0 to Customer Tracking System; From Customer Tracking System to Process 4.0

Inventory Related
- Product Item: From Process 1.0 to Data Store D1; From Process 3.0 to Data Store D2
- Item Profile: From Data Store D1 to Process 1.0; From Process 1.0 to Process 2.0; From Process 2.0 to Data Store D2; From Data Store D2 to Process 3.0; From Data Store D2 to Process 4.0

Order Related
- Order Number: From Purchasing Fulfillment System to Process 4.0; From Customer to Process 6.0; From Process 6.0 to Purchasing Fulfillment System
- Order: From Process 4.0 to Purchasing Fulfillment System
- Return Code: From Purchasing Fulfillment System to Process 4.0
- Invoice: From Process 4.0 to Customer
- Order Status Information: From Process 6.0 to Customer; From Purchasing Fulfillment System to Process 6.0

Shopping Cart
- Cart ID: From Data Store D2 to Process 3.0; From Data Store D2 to Process 4.0

Temporary User/System Messages
- Product Item Request: From Customer to Process 1.0
- Purchase Request: From Customer to Process 2.0
- View Cart: From Customer to Process 3.0
- Items in Cart: From Process 3.0 to Customer
- Remove Item: From Customer to Process 3.0; From Process 3.0 to Data Store D2
- Check Out: From Customer to Process 4.0

FIGURE 7-13 Entity-Relationship Diagram for the WebStore System

Selecting the Best Alternative Design Strategy

Selecting the best alternative system involves at least two basic steps: (1) generating a comprehensive set of alternative design strategies and (2) selecting the one that is most likely to result in the desired information system, given all of the organizational, economic, and technical constraints that limit what can be done. A system design strategy represents a particular approach to developing the system. Selecting a strategy requires you to answer questions about the system’s functionality, hardware and system software platform, and method for acquisition.

We use the term design strategy in this chapter rather than alternative system because, at the end of analysis, we are still quite a long way from specifying an actual system. This delay is purposeful because we do not want to invest in design efforts until some agreement is reached on which direction to take the project and the new system. The best we can do at this point is to outline, rather broadly, the approach we can take in moving from logical system specifications to a working physical system. The overall process of selecting the best system strategy and the deliverables from this step in the analysis process are discussed next.

Design strategy: A particular approach to developing an information system. It includes statements on the system's functionality, hardware and system software platform, and method for acquisition.

The Process of Selecting the Best Alternative Design Strategy

Systems analysis involves determining requirements and structuring requirements. After the system requirements have been structured in terms of process flow and data, analysts again work with users to package the requirements into different system configurations.

Shaping alternative system design strategies involves the following processes:

- Dividing requirements into different sets of capabilities, ranging from the bare minimum that users would accept (the required features) to the most elaborate and advanced system the company could afford to develop (which includes all the features desired across all users). Alternatively, different sets of capabilities may represent the positions of different organizational units with conflicting notions about what the system should do.
- Enumerating different potential implementation environments (hardware, system software, and network platforms) that could be used to deliver the different sets of capabilities. (Choices on the implementation environment may place technical limitations on the subsequent design phase activities.)
- Proposing different ways to source or acquire the various sets of capabilities for the different implementation environments.

In theory, if the system includes three sets of requirements, two implementation environments, and four sources of application software, twenty-four design strategies would be possible.
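The twenty-four-strategy figure is just the Cartesian product of the three choice dimensions. The labels below are placeholders, not options named in the text.

```python
from itertools import product

# Three requirement sets, two implementation environments, and four
# application-software sources, as in the text; labels are invented.
requirement_sets = ["minimum", "midrange", "full-featured"]
environments = ["current platform", "new platform"]
sources = ["in-house", "packaged", "outsourced", "open source"]

# Every combination is a candidate design strategy: 3 * 2 * 4 = 24.
strategies = list(product(requirement_sets, environments, sources))
```

In practice most of these combinations are screened out on feasibility grounds before detailed comparison.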

In practice, some combinations are usually infeasible, and only a small number—typically three—can be easily considered. Selecting the best alternative is usually done with the help of a quantitative procedure, an example of which comes later in the chapter. Analysts will recommend what they believe to be the best alternative, but management (a combination of the steering committee and those who will fund the rest of the project) will make the ultimate decision about which system design strategy to follow.
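One common form of such a quantitative procedure is a weighted scoring model. The criteria, weights, alternatives, and scores below are invented for illustration; the chapter's own example may use different criteria.

```python
# Invented criteria and weights (weights sum to 1.0).
weights = {"cost": 0.40, "functionality": 0.35, "risk": 0.25}

# Invented alternatives, each scored 1 (worst) to 5 (best) per criterion.
alternatives = {
    "A (minimum, packaged)":  {"cost": 5, "functionality": 2, "risk": 4},
    "B (midrange, in-house)": {"cost": 3, "functionality": 4, "risk": 3},
    "C (full, outsourced)":   {"cost": 1, "functionality": 5, "risk": 2},
}

def weighted_score(scores):
    """Sum of criterion score times criterion weight."""
    return sum(weights[c] * s for c, s in scores.items())

ranked = sorted(alternatives,
                key=lambda name: weighted_score(alternatives[name]),
                reverse=True)
```

The ranking only informs the recommendation; as the text notes, management makes the final call.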

At this point in the life cycle, it is also certainly possible for management to end a project before the more expensive phases of system design or system implementation and operation are begun. Reasons for ending a project might include the costs or risks outweighing the benefits, the needs of the organization having changed since the project began, or other competing projects having become more important while development resources remain limited. Generating Alternative Design Strategies The solution to an organizational problem may seem obvious to an analyst.

Typically, the analyst is familiar with the problem, having conducted an extensive analysis of it and how it has been solved in the past. On the other hand, the analyst may be mo


The Application of Vygotsky’s Theory to the Design

2. Why does learning require disequilibrium, according to Piaget? Provide an example of how teachers can create discrepant events.

3. What is the Zone of Proximal Development in Vygotsky's thought? Do you think it is a good model of learning? Why or why not?

The Application of Vygotsky's Social Development Theory to the Designing of a School Curriculum
Christina Nardone: 102150672
Educational Psychology 02-46-324-01
Assignment A: Conceptual Comment
University of Windsor
Instructor: Anoop Gupta
October 1st, 2012

Lev Vygotsky's theories have become central to understanding cognitive development and have influenced many research initiatives in recent years. Social interaction and culture are thought to be the backbone of learning in his theory of social development, in which he argues that social learning tends to occur before development (McLeod, 2007). This theory is one of the foundations of constructivism, which can be defined as an active learning process in which new knowledge is built on previous knowledge (Hoover, 1996).

An important component of social development theory is the Zone of Proximal Development (ZPD). It has been defined as “the distance between the actual developmental level as determined by independent problem solving, and the level of potential development as determined through problem solving under adult guidance, or in collaboration with more capable peers” (Vygotsky, 1978, p. 90). According to Vygotsky, learning occurs in this zone.

Scaffolding is a technique related to the ZPD in which the adult or peer adjusts the level of help given to the learner depending on the learner's performance in the task (Young, 1993). These components of social development theory would be useful in designing an educational curriculum. Schools should retire the purely instructional approach to teaching and adopt a more interactive approach so that students can be actively involved in their learning. Incorporating scaffolding techniques into lectures would be beneficial, as would adding collaborative learning tasks with peers.

The focus of learning should be on how well students have developed their problem-solving skills, not just how much information they have learned. Also, testing and assessment should take into account the zone of proximal development: two children could have the same actual level of development but different potential levels of development, such that one child is capable of completing many more complicated tasks than the other.

Works Cited

Hoover, W. A. (1996, August 3). The Practice Implications of Constructivism.

Retrieved September 30, 2012, from SEDL: Southwest Educational Development Laboratory: http://www.sedl.org/pubs/sedletter/v09n03/practice.html

McLeod, S. (2007). Lev Vygotsky. Retrieved September 30, 2012, from Simply Psychology: http://www.simplypsychology.org

Vygotsky, L. (1978). Mind in society: The development of higher psychological processes. Cambridge: Harvard University Press.

Young, M. (1993). Instructional design for situated learning. Educational Technology Research and Development, 41(1).


Systems Analysis and Design Case Study Chapter 4

Hoosier Burger

a. How was the Hoosier Burger project identified and selected? What focus will the new system have?

The Hoosier Burger project was identified through the shortcomings the Mellankamps observed in their operations. The project was selected because, as the business grows and demand is at an all-time high, the current systems at Hoosier Burger are not getting the job done. This is causing customer discontent and is affecting the business negatively. The new system is going to be heavily focused on inventory control.

While other systems at Hoosier Burger will be looked at, an improved inventory control system will greatly increase productivity for the Mellankamps.

b. Identify the Hoosier Burger project's scope.

The Hoosier Burger project's scope is to implement new systems for inventory control, customer ordering, and management reporting. The project is set up to increase overall effectiveness by introducing new and improved systems. A new point-of-sale system may be within the scope of this project as well.

Petrie's Electronics

1.

Look over the scope statement. If you were an employee at Petrie's Electronics, would you want to work on this project? Why or why not?

As an employee of Petrie's Electronics, I would want to be on this project team. The project itself is being put together with the primary goal of increasing the number of customers that frequent Petrie's Electronics. For an employee of almost any title within the company, an increased customer base is important: sales associates will make more sales, managers will improve their monthly numbers, profits will rise, and as the stores become busier, all positions will be in full demand and lay-offs will be less likely in a thriving business. If I had the opportunity to be on the team, I would take it, and I would want to improve the odds of the project's success.

2. If you were part of the management team at Petrie's Electronics, would you approve the project outlined in the scope statement? What changes, if any, need to be made to the document?

As part of management, I would approve the current scope statement. The statement clearly outlines the goals of the project in the Project Overview section.

This overview is then broken down into individual objectives that need to be completed for the project to meet its goals. The only thing I would add to the scope statement is some kind of expected outcome. Obviously the goal is increased profit through a customer loyalty program; what could be added is what the project is expected to cost and how much of an increase would be expected after its implementation. These estimates could be obtained by researching other companies' results before and after their customer loyalty programs.

3.

Identify a preliminary set of tangible and intangible costs you think would occur for this project and the system it describes. What intangible benefits do you anticipate for the system?

Tangible costs: cost of the project team; cost of implementing the project (rewards cards, rewards-tracking software, rewards-point redemptions)
Intangible costs: operational inefficiency; reduced employee morale due to increased workload
Intangible benefits: customer loyalty; store reputation; competitive necessity

4. What do you consider to be the risks of the project as you currently understand it?

Is this a low-, medium-, or high-risk project? Justify your answer. Assuming you were part of Jim's team, would you have any particular risks?

I think one of the biggest risks of this project is time. With busy team members on the project, getting things done on schedule is going to be the most difficult part. Overall, I would assess this as a low- to medium-risk project. Historically, customer loyalty programs in the retail industry have been hugely successful, and these programs do everything that Jim's team has set out to do.

Spending enough research time on other companies' rewards programs makes this a rather easy project to streamline. As a member of the team, my particular risk would be not being able to perform my duties both as an employee of Petrie's Electronics and as a member of the project team. If I were unable to perform these duties, it could negatively affect the security of my job with the company.

5. If you were assigned to help Jim with this project, how would you utilize the concept of incremental commitment in the design of the baseline project plan?

Jim outlined some objectives in the scope statement for this project. After each of these objectives has been tackled, I would utilize incremental commitment to review what has just been accomplished, what is left to accomplish, whether the project team is meeting its goals, and whether those goals are still in line with the company's goals.

6. If you were assigned to Jim's team for this project, when in the project schedule (in what phase, or after which activities are completed) do you think you could develop an economic analysis of the proposed system?

What economic feasibility factors do you think would be relevant?

After each of the objectives in Jim's scope statement has been addressed, that is, once there is a written plan for how to accomplish each task, would be a good time to develop an economic analysis. At that point, there would be a clear understanding of what is needed to address each objective successfully, and analyzing economic feasibility would be much clearer than before. Relevant economic feasibility factors:

- One-time costs, such as system development cost and hardware/software cost
- Recurring costs, such as data storage costs, the cost of issuing customer reward cards, and the cost of redeeming points for rewards

7. If you were assigned to Jim's team for this project, what activities would you conduct in order to prepare the details of the baseline project plan? Explain the purpose of each activity and show a timeline or schedule for these activities.

First, assess all feasibilities of the project. If the project is not going to be feasible, then it needs to be cut off right away.

Assessing feasibilities up front will help ensure the project is worth pursuing.

- Economic feasibility: making sure the company has the money to fund the project and that the overall result of the project will aid in increasing profits for the company.
- Technical feasibility: outlining what technologies would be needed to make this project successful and making sure that the company either has access to these technologies or is willing to acquire them.
- Operational feasibility: assessing whether or not the project's goals are realistic.

If the project's goals are unrealistic, then it is a waste of money; attainable goals are important.

- Schedule feasibility: can this project be completed in a timely manner, such that the company will benefit the most from it?
- Legal and contractual feasibility: will implementing this project break any laws or contracts that the company is bound by?
- Political feasibility: making sure that stakeholders understand the risks and rewards of this project.

Once all feasibilities have been assessed, it is time to outline management issues.

A plan needs to be set in place that details what all team members are responsible for and what the reporting procedures will be. This is important so that project time isn’t wasted on simple things such as figuring out how deliverables will be evaluated and what specific issues the team may face during the project. Now the system description should be written. This section will clearly mark what the project team’s system plans to deliver. This is also a good time to come up with an alternate system. Finally, the introduction of the Baseline Project Report will be written.

This section will provide an overview of the entire project, addressing the issues facing the project and how the proposed system will handle them.

8. Once deployed, what are the operational risks of the proposed system? How do you factor operational risks into a system development plan?

The main operational risk of this project is that the loyalty rewards program isn't enticing enough to keep customers loyal to Petrie's Electronics. On the other hand, if the program is overly enticing, it may lead to a much higher cost of maintaining the program for the foreseeable future.

Throughout the development of the system, incremental commitment should be applied: the team continuously analyzes and assesses where the project stands and how it can meet the goals of the company. Operational risk should be addressed during each of these assessments. If at some point the risk outweighs the reward, the project needs to be shut down. If the risk is kept in check, the project can continue until the next assessment after a particular activity or phase.


Precis: Graphic Design Theory “Design and Reflexivity”

Precis: Graphic Design Theory “Design and Reflexivity” by Jan van Toorn, 1994. Verbal and Visual Rhetoric, University of Baltimore Publication Design Master’s Program, Spring, 2011 Dutch graphic designer Jan van Toorn is known for his radical ideas about what the function of design should be, and what qualities designers should possess and promote with their designs. Van Toorn’s distinctive style is messy, peculiar, and deeply interwoven with political and cultural messages, unapologetic with their intent to force critical thinking upon viewers.

Van Toorn advocates design which encourages the viewer to reach their own conclusions, insisting that designers shouldn't function as objective bystanders; instead, designers have an important contribution to make. Design is a form of visual journalism, and van Toorn urges designers to take responsibility for their role as "journalists." Van Toorn begins his argument by stating that all professions contain a certain level of schizophrenia, that is, inescapable contradictions. This includes graphic design, which must balance the interest of the public with the interests of the client and the general expectations of the media profession.

To survive, design must "strive to neutralize these inherent conflicts of interest by developing a mediating concept aimed at consensus [...] to accepting the world image of the established order as the context for its own action." (Page 102, first paragraph) By reconciling the differences of various ideals and opinions, and establishing a cultural norm, design develops a "practical and conceptual coherence" in mass media, thereby legitimizing itself: legitimized "in the eyes of the social order, which, in turn, is confirmed and legitimized by the contributions that design makes to symbolic production." (Page 102, second paragraph) The cultural industry, composed of corporations, the wealthy, the educated, and the powerful elite, dictates to the rest of society what is popular, distasteful, and overall socially acceptable, imprisoning design in a false sense of reality. Design becomes stagnant as it conforms to the ideals put forth by the ruling class. Van Toorn refers to this stagnation as "intellectual impotence," and designers tend to deal with it in two ways.

Designers either resist assimilation into popular culture by attempting to redefine or "renew the vocabulary," or they integrate smoothly into the "existing symbolic and social order." (Page 103, first paragraph) The lines separating these two approaches have become blurred with the rise of post-modernism and the proliferation of niche marketing, as competitors try to distinguish themselves. Van Toorn observes that "official design continues to be characterized by aesthetic compulsiveness and/or by a patriarchal fixation on reproductive ordering." (Page 103, second paragraph) Van Toorn then examines what he refers to as "symbolic productions," specifically ads, commercials, etc., which misrepresent reality. These symbolic productions are ideological instruments, serving private interest in the guise of a universal one. (Page 103, last paragraph) The so-called "dominant culture" doesn't serve to integrate different social classes; rather, it contributes to the facade of an integrated society by forcing all other cultures to define themselves by an established set of rules, fostering a "communicative dependency." (Page 104, first paragraph) Van Toorn argues that everyday life is falsely represented, which causes tension between ethics and symbolism. In order to make what van Toorn calls an "oppositional cultural production," the designer must take care not to create a specific alternative to an established convention, but to simply present it in a creative and new way, while keeping the universally accepted concept intact.

A designer's opportunity to upset the status quo can only be sought when a political or ideological shift is underway, one that results in "creating new public polarities," usually targeting real social problems. (Page 104, last paragraph) Only then can the designer encourage an oppositional stance, one that goes against the communicative order. The ultimate goal of this approach is to evoke questions and reflection among the public and to encourage a more pragmatic view of reality, forcing people to identify their own needs and desires.

Van Toorn cautions that despite the ever-changing nature of culture, design has to be "realistic in its social ambitions." (Page 105, paragraph 3) Awareness of the unstable relationship between the symbolic and the real world requires a high level of discernment and critical thinking ability. Design must recognize "substance, program, and style as ideological constructions, as expressions of restricted choices that only show a small sliver of reality in mediation." (Bottom of page 105, to top of page 106)

Free Essays

Research Paper – Pawnshop System Design

Bulacan State University, Sarmiento Campus, City of San Jose Del Monte, Bulacan. Research Methodologies: Pawnshop System Design (PSSD). Submitted by: _______________________ Submitted to: _______________________ Instructor. Date: March 25, 2011. CHAPTER I: THE PROBLEM AND ITS BACKGROUND. INTRODUCTION. The fusion of computer technology and communication technology gave birth to a new era, the digital age (William Sayer, 2003). This fusion is what we know today as information technology. Information technology is the collaboration of industries dealing with computers, telephones, and various handheld devices.

These technologies greatly affect the business industry. A pawnshop is an individual or business that offers secured loans to people, with items of personal property used as collateral. The word pawn is derived from the Latin pignus, for pledge, and the items pawned to the broker are themselves called pledges or pawns, or simply the collateral. The proposed Pawnshop System (PSS) is intended to be intuitive and easy to use. Under the system, if an item is pawned for a loan, the pawner may, within a certain contractual period of time, purchase it back for the amount of the loan plus some agreed-upon amount for interest.

The amount of time, and the rate of interest, is governed by law or by the pawnbroker's policies. If the loan is not paid (or extended, if applicable) within the time period, the pawned item will be offered for sale by the pawnbroker/secondhand dealer. Unlike other lenders, though, the pawnbroker does not report the defaulted loan on the customer's credit report, since the pawnbroker has physical possession of the item and may recoup the loan value through outright sale of the item. The pawnbroker/secondhand dealer also sells items that customers have sold to him outright.
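The redemption rule described above (loan principal plus an agreed interest amount, valid only within the contract period) can be sketched in a few lines. This is an illustrative model only, not part of the proposed PSS, which the paper builds in Visual Basic 6; the function names and the simple flat monthly interest rate are assumptions.

```python
from datetime import date

def redemption_amount(principal, monthly_rate, months_elapsed):
    """Loan amount plus the simple interest agreed with the pawnbroker."""
    return principal + principal * monthly_rate * months_elapsed

def can_redeem(pawn_date, today, contract_days):
    """The pledge may be bought back only within the contract period."""
    return (today - pawn_date).days <= contract_days

# Example: 1,000 pawned at 5% per month, redeemed after 2 months
amount = redemption_amount(1000, 0.05, 2)                      # 1100.0
within = can_redeem(date(2011, 1, 10), date(2011, 3, 1), 90)   # True, 50 days elapsed
print(amount, within)
```

If the contract period lapses, the pledge is simply forfeited and sold, which is why no credit reporting is involved.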

STATEMENT OF THE PROBLEM. General Problem: the general problem addressed by the Pawnshop System (PSS) is how long it takes to save a client's data. Specific Problems: 1. How long does it take to save clients' data? 2. How does a pawnshop operate? 3. What problems can be encountered by the cashier while saving client information? SIGNIFICANCE OF THE STUDY. The study will determine the effects of the Pawnshop System (PSS), which will benefit the client, the owner, and the employee.

Client: the Pawnshop System (PSS) will benefit clients by assuring that they are supplied with quality workers on time and rendered efficient service (a client being someone who purchases or hires something from someone else). Employee: the Pawnshop System (PSS) will secure and maintain their records and keep their personal profiles in case of an incident; it will also save them time and effort when saving client data. Owner: the Pawnshop System (PSS) creates accurate reports that will help owners make sound judgments in managing the company, and the data can be manipulated easily.

The Pawnshop System Design (PSSD) also helps them save any information about their clients and employees. SCOPE AND LIMITATION. The Pawnshop System (PSS) has limitations: the cashier or employee must at least be given authority by the owner, and only the owner and the cashier have the right to open and use the Pawnshop System (PSS), by entering their passwords. The study will benefit the pawnshop as it provides qualified workers to its clients and efficiently manages the Pawnshop System (PSS). The system will reduce the incidence of incomplete information.
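The access rule above (only the owner and an authorized cashier may open the system, via password) can be illustrated with a minimal sketch. The actual system is proposed in Visual Basic 6 with Access; this Python model, including the role names and the use of salted password hashing, is purely an assumption for illustration.

```python
import hashlib
import os

ALLOWED_ROLES = {"owner", "cashier"}   # only these roles may open the PSS

def make_user(role, password):
    """Store a salted hash of the password, never the plain text."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return {"role": role, "salt": salt, "digest": digest}

def can_login(user, password):
    """Grant access only to permitted roles with a matching password."""
    attempt = hashlib.sha256(user["salt"] + password.encode()).hexdigest()
    return user["role"] in ALLOWED_ROLES and attempt == user["digest"]

owner = make_user("owner", "s3cret")
clerk = make_user("clerk", "s3cret")    # a role the owner has not authorized
print(can_login(owner, "s3cret"))   # True
print(can_login(owner, "wrong"))    # False: wrong password
print(can_login(clerk, "s3cret"))   # False: role not permitted
```

Checking both the role and the password separately mirrors the paper's requirement that authority is granted by the owner in addition to knowing a password.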

Such information is gathered from applicants; incomplete data would lead to inaccurate records in the database and unreliable reports. The programs to be used in this Pawnshop System (PSS) are Microsoft Visual Basic 6.0 and Microsoft Access 2003 or 2007. They were chosen because Microsoft Visual Basic 6.0 and Microsoft Access 2003 or 2007 are compatible with each other, and the connection between them can be established directly.

The Pawnshop System (PSSD) must issue receipts, printed at the administrator's office or at the office of the cashier or employee. Development will start in December 2009 and finish by the last week of February 2010. CONCEPTUAL FRAMEWORK. The purpose of the study is to help the owner or cashier save time in operating the system, which can be manipulated easily. It also helps the owner easily understand the prototypes of the program, and the system itself is easy to understand. It can generate different reports that will aid management in making business decisions.

The first frame is the computer and the software to be installed, namely Microsoft Visual Basic 6.0 and Microsoft Access 2003 or 2007. The second frame is data coding, system design, system analysis, and installing the software. The third frame is the Pawnshop System Design (PSSD), which will be the output of creating and designing the system. RESEARCH PARADIGM. [Fig 1. The research model of the experiment] HYPOTHESIS. The main idea of this Pawnshop System Design is to help the owner easily operate the program.

It will also help them save the data gathered from client and employee records. TERMS AND DEFINITIONS. Pawner: one who deposits an item of personal property as security for a loan. Pawnshop: a shop where loans are made with personal property as security. CHAPTER II: LITERATURE REVIEW. The country's largest pawnshop chain, Cebuana Lhuillier Pawnshop, began as four pawnshop outlets in Metro Manila in the mid-1980s.

Cebuana Lhuillier Pawnshop today has branches spread all over the Philippines, serving the Filipino pawner everywhere. Cebuana Lhuillier Pawnshop traces its roots to Cebu. There, French Consul to the Philippines Henry Lhuillier established in 1935 the first of a chain of Agencias. He then opened several more branches in Cebu, as well as in the nearby provinces of the Visayas. In 1968, Henry Lhuillier's son Philippe Lhuillier opened the first Lhuillier pawnshop at Libertad Street in Malibay, Pasay under the trade name Agencia Cebuana.

As the years passed and with the support of hardworking personnel, several more branches were opened in Metro Manila as well as in Northern, Central and Southern Luzon. Soon branches sprouted in the south – in key provinces like Davao, Cagayan de Oro and Bukidnon. In 1987, the company pursued nationwide expansion. It then adopted the trade name Cebuana Lhuillier. Since then, every Philippe Lhuillier-owned pawnshop branch that opened anywhere in the Philippines carried the name Cebuana Lhuillier. Branches as far north as Aparri and as far south as General Santos were servicing the needs of over 25,000 customers a day.

Cebuana Lhuillier Pawnshop is the country's largest pawnshop chain, with branches in almost every city, town, or district in the Philippines. "Walang Kapantay Magpahalaga" is the slogan that guides Cebuana Lhuillier in its everyday dealings with customers. The company takes pride in every opportunity where it has been able to live up to this commitment.

CHAPTER III: RESEARCH DESIGN

Treatment                                                        | Replication
1. Installing Microsoft Visual Basic 6.0 and Microsoft Access 2007 | 1 month
2. Program designing                                             |
3. Data coding                                                   |

Experimental complete randomized design: the experimental research installs Microsoft Visual Basic 6.0 and Microsoft Access 2007 for the designing and coding of the Pawnshop System Design (PSSD).

CLUSTER SAMPLING BY GROUP

Respondent        | Population | Percentage
Programmer        | 2          | 34%
Quality Assurance | 2          | 34%
Team Leader       | 1          | 16%
Documentation     | 1          | 16%
Total             | 6 persons  | 100%

Experimental cluster sampling by group. PROCEDURE IN GATHERING DATA

The Pawnshop System Design will be built by a total of six (6) people: 2 programmers, 2 quality assurance staff, 1 team leader, and 1 documentation specialist.

STATISTICAL TREATMENT

Gender | Population | Percentage
Female | 10         | 48%
Male   | 11         | 52%
Total  | 21 persons | 100%

Using statistics by group, the percentage of each part of the population of this study is computed; this is the population expected to pawn items in this study. CHAPTER IV: SUMMARY OF FINDINGS. 1. How long would it take to save clients' data? 2. How does a pawnshop operate? 3. What problems can be encountered by the cashier while saving client information? DISCUSSION OF RESULTS. There are several ways of collecting and understanding information and finding answers to research questions; research is one of them. This study has dealt with some basic issues of design in quantitative research and has discussed the design types commonly used in experimental research.
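The per-group percentages in the tables above are simple proportions of the respondent counts, rounded to whole percent. A minimal sketch (the helper name is an assumption) that reproduces the gender split of 10 female and 11 male respondents:

```python
def group_percentages(counts):
    """Return each group's share of the total, rounded to whole percent."""
    total = sum(counts.values())
    return {name: round(100 * n / total) for name, n in counts.items()}

genders = {"Female": 10, "Male": 11}
print(group_percentages(genders))   # {'Female': 48, 'Male': 52}
```

Note that rounded shares do not always sum to exactly 100; here 48 + 52 happens to, which matches the table.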

If an item is pawned for a loan, the pawner may, within a certain contractual period of time, purchase it back for the amount of the loan plus some agreed-upon amount for interest. The amount of time, and the rate of interest, is governed by law or by the pawnbroker's policies. If the loan is not paid (or extended, if applicable) within the time period, the pawned item will be offered for sale by the pawnbroker/secondhand dealer. CHAPTER V: CONCLUSION. Saving the data gathered from a client takes only about one second, which saves clients time in processing their transactions.

The system is very easy to operate because it is designed to be easy to understand. One problem encountered by the user is the accidental deletion of client data in unexpected situations. BIBLIOGRAPHY. Website: http://en.wikipedia.org/wiki/Pawnbroker. GRAPHICAL VIEW OF THE STUDY: Login Form, Main Menu, Client Form. INPUT: computer, Microsoft Visual Basic 6.0, Microsoft Access 2003 or 2007. PROCESS: data coding, system design, system analysis, installing software. OUTPUT: Pawnshop System Design (PSSD).

Free Essays

Designing a Toasting Oven in Order to Produce Corn Flakes

Prof. Dr. Suat Ungan, Fd.E. 425 Food Engineering Design Coordinator, Middle East Technical University, Food Engineering Department, Ankara 06531. November 25, 2011. Dear Mr. Ungan, please accept the accompanying Work Term Report, aimed at designing a toasting oven in order to produce corn flakes. In the designed system, 10 tons of corn flakes per day are produced. After some processing, the corn flakes enter the roasting oven at 20% humidity and exit at 4% humidity. The roasting oven can operate at 225 °C (±10 °C). The toasting oven is designed by considering its length, area, and operating temperature.

Optimizations are done with respect to these factors on the total cost of the design. In the designed system, a rotary drum drier is used. The plant works 350 days a year and production runs 16 hours a day. The drying air enters the oven at 225 °C. The required amount of air is calculated as 0.648 kg dry air/s. The length of the drier is calculated as 2.27 m as a result of optimizations done according to the proper drying time and dryer diameter. The heat energy needed to raise the inlet air temperature to 225 °C is found to be 157 kW, and the heat loss is found to be 23.6 kW.

Using these data, the total investment, which comprises the dryer cost and the electricity cost, is found to be 92,794.98 TL. Sincerely, Group 3 members.

TABLE OF CONTENTS

SUMMARY. In this project a rotary dryer is designed for drying corn flakes with a moisture content of 20%. The corn flakes are dried with air whose outlet humidity is 9%. Production runs for 16 hours a day and 10 tons of corn flakes are produced per day. In the production process, the corn flakes are first cooked under pressure. After the cooking step, big masses are broken into pieces and sent to driers in order to bring the moisture level to 20%. After this, the product is flaked between large steel cylinders cooled with internal water flow. The soft flakes are sent to rotary dryers for dehydration to a 4% final moisture content and for toasting. In the toasting oven, the flakes are exposed to 225 °C air for 2-3 min. The drier length is calculated as 2.27 m with a radius of 0.082 m, under the assumption of 0.04 kg water/kg dry air humidity in the inlet air and 0.09 in the outlet air. The feed flow rate is calculated as 0.206 kg/s, and the mass flow rate of the inlet air as 0.648 kg dry air/s. The energy needed to bring the air temperature to 225 °C is calculated as 157 kW, and the heat loss in the system is 23.6 kW. Through optimization, the total capital investment is calculated as 92,794.98 TL, which includes an 84,881 TL electricity cost and a 7,913 TL dryer cost. Finally, through optimization, 215 °C is chosen as the best inlet air temperature in order to have the minimum length and a suitable energy demand for the drier.

I. INTRODUCTION. Rotary dryers potentially represent the oldest continuous and undoubtedly the most common high-volume dryer used in industry, and the technology has evolved more adaptations than any other dryer classification. [1] Drying materials is an important energy-consuming process.

It is also one of the important steps in the cement production process, and it affects the quality and consumption of the grinding machine. The drum dryer is the main equipment for drying materials; it has a simple structure, reliable operation, and is convenient to manage. However, there are some problems: huge heat loss, low thermal efficiency, high heat consumption, excessive dust, and difficulty controlling the moisture leaving the machine. Addressing these plays a significant role in improving the drying technology level and thermal efficiency in the drying process and in reducing thermal and production losses. [2] In this project we are asked to design a rotary drier that works 16 hours a day and produces 10 tons of corn flakes per day. It is also specified that the corn flakes enter the drier at 20% humidity and exit at 3-5% humidity. This report is about designing a rotary dryer, including its dimensions, so as to obtain the minimum total cost. Optimizations are done with respect to the inlet air temperature of the drier. In the designed system, the heat needed for heating the inlet air and the length of the rotary dryer are treated as the material cost, and the optimization is done by considering the minimum total cost for the system.

II. PREVIOUS WORK. Drying is perhaps the oldest and most common of the chemical engineering unit operations. Over four hundred types of dryers have been reported in the literature, while over one hundred distinct types are commonly available. [3] Drying occurs by effecting vaporization of the liquid by providing heat to the wet feedstock. Heat may be supplied by convection (direct dryers), by conduction (contact or indirect dryers), by radiation, or by microwave. Over 85 percent of industrial dryers are of the convective type, with hot air or direct combustion gases as the drying medium.

Over 99 percent of the applications involve removal of water. [3] * Rotary Dryer: all rotary dryers have the feed material passing through a rotating cylinder termed a drum. It is a cylindrical shell usually constructed from steel plates, slightly inclined, typically 0.3-5 m in diameter, 5-90 m in length, and rotating at 1-5 rpm. It is operated in some cases with a negative internal pressure (vacuum) to prevent dust escape. Depending on the arrangement for the contact between the drying gas and the solids, a dryer may be classified as direct or indirect, co-current or counter-current.

Noted for their flexibility and heavy construction, rotary dryers are less sensitive to wide fluctuations in throughput and product size. [4] * Pneumatic/Flash Dryer: the pneumatic or 'flash' dryer is used with products that dry rapidly owing to the easy removal of free moisture, or where any needed diffusion to the surface occurs readily. Drying takes place in a matter of seconds. Wet material is mixed with a stream of heated air (or other gas), which conveys it through a drying duct where high heat and mass transfer rates rapidly dry the product.

Applications include the drying of filter cakes, crystals, granules, pastes, sludge, and slurries; in fact, almost any material where a powdered product is required. * Spray Dryers: spray drying has been one of the most energy-consuming drying processes, yet it remains essential to the production of dairy and food product powders. Basically, spray drying is accomplished by atomizing feed liquid into a drying chamber, where the small droplets are subjected to a stream of hot air and converted to powder particles.

As the powder is discharged from the drying chamber, it is passed through a powder/air separator and collected for packaging. Most spray dryers are equipped for primary powder collection at an efficiency of about 99.5%, and most can be supplied with secondary collection equipment if necessary. * Fluidised Bed Dryer: fluid bed dryers are found throughout all industries, from heavy mining through food, fine chemicals, and pharmaceuticals. They provide an effective method of drying relatively free-flowing particles with a reasonably narrow particle size distribution.

In general, fluid bed dryers operate on a through-the-bed flow pattern, with the gas passing through the product perpendicular to the direction of travel. The dry product is discharged from the same section. * Hot Air Dryer (Stenter): fabric drying is usually carried out either on drying cylinders (intermediate drying) or on stenters (final drying). Drying cylinders are basically a series of steam-heated drums over which the fabric passes. They have the drawback of pulling the fabric and effectively reducing its width.

For this reason they tend to be used for intermediate drying. * Contact Drying (Steam Cylinders/Cans): this is the simplest and cheapest mode of drying woven fabrics. It is mainly used for intermediate drying rather than final drying (since there is no means of controlling fabric width), and for pre-drying prior to stentering. * Infrared Drying: infrared energy can be generated by electric or gas infrared heaters or emitters. Each energy source has advantages and disadvantages.

Typically, gas infrared systems are more expensive to buy because they require safety controls and gas-handling equipment, but they are often less expensive to run because gas usually is cheaper than electricity. Gas infrared is often a good choice for applications that require a lot of energy; products such as nonwoven and textile webs are examples where gas is often a good choice. [5]

III. DISCUSSION. For the designed system a rotary drum dryer is chosen. The rotary drum dryer is used for drying humid or granular material in the mineral dressing, building material, metallurgy, and chemical industries.

It has the advantages of a reasonable structure, high efficiency, and low energy consumption. [6] Advantages of the drum dryer: suitable for handling liquid or pasty feeds; product in powdery, flaky form; uniform drying due to uniform application of the film; medium-range capacities; very high thermal efficiency; continuous operation; compact installation; closed construction is possible. [7] The heat for toasting the flakes in the drier, or in the oven, is provided by a hot air stream instead of flat baking surfaces. Depending on the production type and flow rate, the drum dryer must satisfy a constant rotation speed, slope, and length.

The drum is also perforated to allow air flow inside; the perforations should not be too large, so as to prevent the escape of flakes. During the thermal treatment the browning, degree of expansion, texture, flavour, and storage stability are determined, so the drying temperature and time should be adjusted properly to obtain the correct values. For the optimization of the system, the effects of the drier length, the diameter, and the working temperature on the fixed cost, the variable cost, and the heat loss from the system are considered.

First of all, the effect of the temperature on the necessary length is calculated:

T air in (°C) | Z (m)
210           | 2.308504
215           | 2.296091
220           | 2.284367
225           | 2.273274
230           | 2.262764
235           | 2.252792

It is seen that as the temperature of the hot air increases, the necessary length of the system decreases. Because the necessary length decreases, the area decreases as well, so the fixed cost (price of the dryer plus installation) decreases. On the other hand, according to Table 6:

TABLE 6
T air in (°C) | Q system (kJ/s) | Electric cost (TL) | Area (m2) | Dryer + installation (TL) | Total cost (TL)
210           | 146.708         | 79222.32709        | 1.231014  | 7949.192995               | 87171.52
215           | 150.2011        | 81108.57297        | 1.224622  | 7936.763821               | 89045.34
220           | 153.6941        | 82994.81886        | 1.218584  | 7925.023661               | 90919.84
225           | 157.1872        | 84881.06474        | 1.212872  | 7913.916768               | 92794.98
230           | 160.6802        | 86767.31062        | 1.207460  | 7903.393249               | 94670.70
235           | 164.1733        | 88653.55650        | 1.202325  | 7893.408318               | 96546.96

Q loss increases as the temperature increases, so the variable cost (electricity cost) increases as well. However, because the areas do not change much, the fixed cost barely varies with temperature, while Q loss changes considerably; with an electricity cost of 0.15 TL per kWh, the cost difference from one temperature to the next is much larger than the change in fixed cost. According to the data and tables, the optimum temperature is 210 °C: since the curves give no interior minimum, the result is taken as the minimum temperature considered.

i. Assumptions
* Working time of the plant is assumed as 16 hours per day.
* Drying time is assumed as 150 seconds (the optimum time is given as 2-3 minutes).
* Surface temperature of the corn flakes entering the drier is assumed as 25 °C (T_feed = 25 °C).
* Humidity of the air at the inlet and the outlet is assumed as 0.04 and 0.09 kg water/kg dry air, respectively.
* Specific heat of the air is assumed constant (cp,air = 1.02 kJ/kg·K).
* Only the constant drying rate is considered in the calculations, since the product has a critical moisture of 4.5-5.2%. [4]
* The shape of the flakes is assumed spherical.
* The radius of the dryer is taken as 0.082 m.
* The efficiency of the drier is assumed as 85% to carry out the calculations.

ii. Possible sources of error
* The shape of the corn flakes may not be perfectly spherical.
* Calculations may be inaccurate because of the air humidity assumptions.
* The corn flakes may stick to each other.

IV. RECOMMENDED DESIGN
1. Drawing of proposed design
2. Table listing equipment and specifications

TABLE 1
Equipment: Rotary Drum Dryer
Specifications: heating medium: hot air; temperature: 225 °C; humidity in: 0.04 kg water/kg dry air; humidity out: 0.09 kg water/kg dry air; length: 2.27 m; peripheral area: 1.213 m2; material: stainless steel; type: perforated; processing time: 150 seconds

3. Tables for material and energy balances

TABLE 2
T air, in (°C)       | 210     | 215     | 220     | 225     | 230     | 235
T air, out (°C)      | 163.67  | 167.57  | 171.48  | 175.37  | 179.27  | 183.16
Product rate (kg/s)  | 0.174   | 0.174   | 0.174   | 0.174   | 0.174   | 0.174
Feed rate (kg/s)     | 0.206   | 0.206   | 0.206   | 0.206   | 0.206   | 0.206
Mass of air (kg/s)   | 0.648   | 0.648   | 0.648   | 0.648   | 0.648   | 0.648
H in, air (kJ/kg)    | 226.107 | 231.490 | 236.874 | 242.257 | 247.641 | 253.025
H out, air (kJ/kg)   | 192.191 | 196.767 | 201.343 | 205.912 | 210.495 | 215.071
Q (kJ/s)             | 33.916  | 34.724  | 35.531  | 36.339  | 37.146  | 37.954
Q loss (kJ/s)        | 22.006  | 22.530  | 23.054  | 23.578  | 24.102  | 24.626
T feed in (°C)       | 25      | 25      | 25      | 25      | 25      | 25
T feed out (°C)      | 46.253  | 46.275  | 46.298  | 46.320  | 46.343  | 46.366
Z, length (m)        | 2.309   | 2.296   | 2.284   | 2.273   | 2.263   | 2.253
A, peripheral (m2)   | 1.231   | 1.224   | 1.219   | 1.213   | 1.207   | 1.202
Time (s)             | 150     | 150     | 150     | 150     | 150     | 150

4. Process Economics. For the 225 °C case, Q_SYSTEM = 157.18 kJ/s. The TEDAS tariff for 1 kWh of electricity is 0.15 TL.
Electric cost = Q_SYSTEM × 3600 × 0.15   (Eqn 19)
Electric cost = 84,881.065 TL

Area = (2·π·r·z) + (2·π·r²)   (Eqn 20)
Area = 1.2128 m²

For the cost of the dryer and its installation, the following formula is used:
Cost = 5555.56 + 1944.44 × Area   (Eqn 21)
Dryer + installation cost = 7913.91 TL

Total cost = electric cost + dryer cost + installation

(Eqn 22)
Total cost = 92,794.98 TL

(Table 6, tabulating these costs for inlet temperatures of 210-235 °C, is given in the Discussion above.)

FIGURE 1. FIGURE 2.

V.
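The economics above (Eqns 19, 21, and 22) can be replayed directly from the tabulated Q_system and area values for each inlet temperature. The sketch below is an illustrative check, not the authors' code; the function names are assumptions, and the input triples are copied from Table 6.

```python
# (T air in °C, Q_system in kJ/s, peripheral area in m^2), taken from Table 6
cases = [
    (210, 146.708, 1.231014),
    (215, 150.2011, 1.224622),
    (220, 153.6941, 1.218584),
    (225, 157.1872, 1.212872),
    (230, 160.6802, 1.207460),
    (235, 164.1733, 1.202325),
]

ELECTRIC_TL_PER_KWH = 0.15   # TEDAS tariff used in the report

def total_cost(q_system, area):
    electric = q_system * 3600 * ELECTRIC_TL_PER_KWH   # Eqn 19
    dryer = 5555.56 + 1944.44 * area                   # Eqn 21: dryer + installation
    return electric + dryer                            # Eqn 22

costs = {t: total_cost(q, a) for t, q, a in cases}
best_t = min(costs, key=costs.get)
print(costs[225])   # close to the report's 92,794.98 TL
print(best_t)       # total cost is lowest at the minimum temperature, 210 °C
```

Because the fixed (dryer) cost barely varies with temperature while the electricity cost grows steadily, the total cost is monotonic over the studied range, which is why the optimum lands on the lowest temperature considered.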

CONCLUSION AND RECOMMENDATIONS. To sum up, the aim of this design project is to design a toasting oven for corn flakes that decreases their moisture content from 20% to 3-5%. For this purpose, the system is designed using the inlet temperature and humidity of the air and the inlet temperature and moisture content of the corn flakes. During the calculations the length and radius of the dryer, the operating time, the operating capacity, and the heat losses from the system are considered. After these calculations, the optimization is done by altering the working temperature of the system and the dryer radius and by considering the heat losses from the system.

These alterations affect both the variable and fixed costs, and different fixed and variable cost values are obtained. Different total cost values are then obtained from the fixed and variable costs, and the optimization is carried out. Finally, it is concluded that the dryer length is 2.27 m when the inlet air temperature is 225 °C; however, considering the total cost of the system, the optimum is obtained at an inlet air temperature of 215 °C, for which the length is 2.296 m. As a result, theoretical calculations are integrated with a practical approach and a feasible system is designed for the problem.

As a recommendation, other dryer types can be used for the drying process of corn flakes. A fluidized bed dryer could be used for this process; this type of dryer has some important advantages, for example very high thermal efficiency, and a low processing temperature can be used. [8] Moreover, the system should be controlled carefully, because any fluctuation in temperature or other variables could have adverse effects. The temperature of the inlet air should be monitored with a sensor, and a color sensor should be added at the product outlet to control quality in the best way.

VI. ACKNOWLEDGMENT. Special thanks for their help and support to our instructors: Prof. Dr. Suat UNGAN and Assist. Cem BALTACIOGLU.

VII. TABLE OF NOMENCLATURE
x_feed = kg solid/kg feed
x_product = kg solid/kg product
X_feed = kg water/kg dry solid
X_product = kg water/kg dry solid
Humidity of air in = kg water/kg dry air
ρ = density (kg/m3)
Q = volumetric flow rate (m3/s)
V = speed (m/s)
D = diameter (m)
g = gravitational acceleration (m/s2)
Q_loss = kJ
H_in = kJ/kg dry air
h_product = kJ/kg
G_air = kg dry air/m2·s

VIII. REFERENCES
[1] Retrieved November 2011 from http://www.process-heating.com/Articles/Drying_Files/d238aadb9d268010VgnVCM100000f932a8c0____
[2] Retrieved November 2011 from http://www.rotary-drum-dryer.com/Knowledge/2011-05-08/141.html
[3] Retrieved November 2011 from http://www.energymanagertraining.com/bee_draft_codes/best_practices_manual-DRYERS.pdf
[4] Retrieved November 2011 from http://www.barr-rosin.com/products/rotary-dryer.asp
[5] Retrieved November 2011 from http://www.thinkredona.org/rotary-dryer
[6] Retrieved November 2011 from http://www.blcrushers.com/chanping/2011-08-17/111.html?gclid=CM39p73vxKwCFQkLfAodemc4rw
[7] Retrieved November 2011 from http://www.arrowhead-dryers.com/drum-dryer.html
[8] Retrieved November 2011 from http://www.directindustry.com/prod/british-rema-processing-ltd/fluidized-bed-dryers-62696-580253.html

IX. APPENDIX: SAMPLE CALCULATIONS

Mass values and fractions data:
Capacity = 10,000 kg product per day
Assumed working time = 16 hours per day
Product flow rate = (10000 kg/day) × (1 day/16 h) × (1 h/3600 s) = 0.174 kg/s
Feed flow rate = (0.174 × 0.95)/0.8 = 0.206 kg/s
Moisture content of feed = 0.2 kg water/kg feed
Moisture content of product = 0.05 kg water/kg product
x_feed = 0.8 kg solid/kg feed
x_product = 0.95 kg solid/kg product
X_feed = 0.2/0.8 = 0.25 kg water/kg dry solid
X_product = 0.05/0.95 = 0.053 kg water/kg dry solid

Temperature and humidity data:
Temperature of air in = 225 °C
Temperature of feed = 25 °C
Humidity of air in = 0.04 kg water/kg dry air
Humidity of air out = 0.09 kg water/kg dry air

To find the G value, the water balance is written:
G·H_in + F·X_feed/(1 + X_feed) = G·H_out + P·X_product/(1 + X_product)

Substituting into Eqn 1:
G*0.04 + 0.206*[0.25/(1 + 0.25)] = G*0.09 + 0.174*[0.053/(1 + 0.053)]
G = 0.648 kg dry air/s

For the energy balance, Hin, Qloss and Hout are calculated.

Hin = (1.005 + 1.88*Humidity air in)*Tair,in   (Eqn 2; Esin, Material and Energy Balances in Food Engineering, 1993, p. 429)
Hin = (1.005 + 1.88*0.04)*225
Hin = 242.25 kJ/kg dry air

With the efficiency taken as 85%:
Qloss = 0.15*Hin = 36.33 kJ/kg dry air   (Eqn 3)

Qloss in system = G*Qloss   (Eqn 4)

Qloss in system = 23.578 kJ/s   (Eqn 4)

Hout = (1.005 + 1.88*Humidity air out)*Tair,out   (Eqn 5; Esin, 1993, p. 429)
Hout = 1.1742*Tair,out

Energy balance:
G*Hin = G*Hout + Qloss in system   (Eqn 6)
0.648*243.045 = 0.648*(1.1742*Tair,out) + 23.626
Tair,out = 175.369 °C
From Eqn 5, Hout = 205.91 kJ/kg dry air

Siebel's equation:
cp = 33.49*(%H2O) + 837.4 (J/kg·°C)

(Eqn 7; Esin, Material and Energy Balances in Food Engineering, 1993, Eqn 5-33, p. 211)

Using this equation:
cp,feed = 1.5 kJ/kg·°C
cp,product = 0.98 kJ/kg·°C
ρfeed = 1390 kg/m³

hfeed = cp,feed*Tfeed   (Eqn 8)
hfeed = 1.5*25 = 37.5 kJ/kg

hproduct = cp,product*Tproduct   (Eqn 9)
hproduct = 0.98*Tproduct

Energy balance:
G*Hin + F*hfeed = G*Hout + P*hproduct + Qloss in system   (Eqn 10)
0.648*243.045 + 0.206*37.5 = 0.648*206.59 + 0.174*0.98*Tproduct + 23.63
Tproduct = 46.32 °C
hproduct = 45.39 kJ/kg

As mentioned, the radius of the dryer is assumed to be 0.082 m.

Gair = 0.648/(π*r²)   (Eqn 11)
Gair = 30.68 kg dry air/m²·s

hair = 1.17*(Gair)^0.37   (Eqn 12; Geankoplis, Transport Processes and Separation Process Principles, Eqn 9.6-10, p. 583)
hair = 4.15
cp,air = 1.02 kJ/kg·K

HTOG = (Gair*cp,air)/hair   (Eqn 13; Treybal, Mass-Transfer Operations, p. 704)
HTOG = 7.535

With Tair,in = 225 °C, Tair,out = 175.369 °C, Tfeed = 25 °C and Tproduct = 46.32 °C, TG and TM are found:

TG = Tair,in - Tair,out   (Eqn 14)
TG = 49.63

TM = [(Tair,in - Tfeed) + (Tair,out - Tproduct)]/2   (Eqn 15)
TM = 164.52

NTOG = TG/TM   (Eqn 16)
NTOG = 0.301

z = NTOG*HTOG   (Eqn 17)
z = 2.27 m

QSYSTEM = G*Hin   (Eqn 18)
QSYSTEM = 157.18 kJ/s

From TEDAS, the cost of electricity is 0.15 TL per kWh.

Electric cost = QSYSTEM*3600*0.15   (Eqn 19)
Electric cost = 84881.065 TL

Area = 2*π*r*z + 2*π*r²   (Eqn 20)
Area = 1.2128 m²

For the cost of the dryer and its installation, the following correlation is used:
Cost = 5555.56 + 1944.44*Area   (Eqn 21; Peters, Plant Design and Economics for Chemical Engineers)
Dryer and installation cost = 7913.91 TL

Total cost = electric cost + dryer and installation cost   (Eqn 22)
Total cost = 92794.98 TL
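As a sanity check, the whole chain above, from the water balance (Eqn 1) through the transfer-unit sizing (Eqns 11-17) to the cost estimate (Eqns 18-22), can be scripted. The sketch below is illustrative and not part of the original report: it takes the report's quoted temperatures (Tair,out = 175.369 °C, Tproduct = 46.32 °C) and inlet enthalpy as given, and it reads the factor 3600 in Eqn 19 as the number of operating hours.

```python
import math

# Illustrative re-computation of the sample calculation (not from the report).

# Eqn 1: water balance for the dry-air flow G
F, P = 0.206, 0.174                  # feed and product flow rates, kg/s
X_feed, X_product = 0.25, 0.053      # kg water / kg dry solid
hum_in, hum_out = 0.04, 0.09         # air humidity in/out, kg water / kg dry air
water_removed = F * X_feed / (1 + X_feed) - P * X_product / (1 + X_product)
G = water_removed / (hum_out - hum_in)   # ~0.649 kg dry air/s (report: 0.648)

# Eqns 11-13: mass velocity, heat-transfer coefficient, height of a transfer unit
r = 0.082                                # assumed dryer radius, m
G_air = G / (math.pi * r ** 2)
h_air = 1.17 * G_air ** 0.37
H_TOG = G_air * 1.02 / h_air             # cp,air = 1.02 kJ/kg.K

# Eqns 14-17: dryer length, using the report's quoted temperatures
T_air_in, T_air_out = 225.0, 175.369     # deg C
T_feed, T_product = 25.0, 46.32          # deg C
TG = T_air_in - T_air_out
TM = ((T_air_in - T_feed) + (T_air_out - T_product)) / 2
z = (TG / TM) * H_TOG                    # dryer length, m

# Eqns 18-22: electricity at 0.15 TL/kWh over 3600 operating hours, plus equipment
H_in = 242.25                            # kJ/kg dry air, as quoted
Q_system = G * H_in                      # kW
electric_cost = Q_system * 3600 * 0.15   # TL
area = 2 * math.pi * r * z + 2 * math.pi * r ** 2
equipment_cost = 5555.56 + 1944.44 * area   # Peters correlation, TL
total_cost = electric_cost + equipment_cost
print(round(z, 2), round(total_cost))    # 2.27 92795
```

The script lands on z ≈ 2.27 m and a total cost of about 92795 TL, within rounding of the report's 92794.98 TL.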

To find the changes caused by increasing or decreasing the inlet air temperature around 225 °C (from 210 °C to 235 °C in 5 °C steps), Humidity air in and Humidity air out are held constant:
Humidity of air in = 0.04 kg water/kg dry air
Humidity of air out = 0.09 kg water/kg dry air

T air in | Hin | Qloss | Qloss in system | T air out | Hout
210 | 226.107 | 33.916 | 22.006 | 163.678 | 192.191
215 | 231.491 | 34.724 | 22.530 | 167.575 | 196.767
220 | 236.874 | 35.531 | 23.054 | 171.472 | 201.343
225 | 242.258 | 36.339 | 23.578 | 175.370 | 205.919
230 | 247.641 | 37.146 | 24.102 | 179.267 | 210.495
235 | 253.025 | 37.954 | 24.626 | 183.164 | 215.071

TABLE 4

Gair and hfeed are constant, as found before: hfeed = 37.5 kJ/kg and Gair = 30.68 kg dry air/m²·s.

T air in | T product | h product | h air | HTOG | TG | TM | NTOG | z
210 | 46.253 | 45.328 | 4.153 | 7.536 | 46.322 | 151.213 | 0.306 | 2.309
215 | 46.276 | 45.350 | 4.153 | 7.536 | 47.425 | 155.650 | 0.305 | 2.296
220 | 46.298 | 45.372 | 4.153 | 7.536 | 48.528 | 160.087 | 0.303 | 2.284
225 | 46.321 | 45.395 | 4.153 | 7.536 | 49.630 | 164.524 | 0.302 | 2.273
230 | 46.344 | 45.417 | 4.153 | 7.536 | 50.733 | 168.962 | 0.300 | 2.263
235 | 46.366 | 45.439 | 4.153 | 7.536 | 51.836 | 173.399 | 0.299 | 2.253

TABLE 5

T air in | Q system | Electric cost | Area | Dryer + installation cost | Total cost
210 | 146.708 | 79222.33 | 1.231 | 7949.19 | 87171.52
215 | 150.201 | 81108.57 | 1.225 | 7936.76 | 89045.34
220 | 153.694 | 82994.82 | 1.219 | 7925.02 | 90919.84
225 | 157.187 | 84881.06 | 1.213 | 7913.92 | 92794.98
230 | 160.680 | 86767.31 | 1.207 | 7903.39 | 94670.70
235 | 164.173 | 88653.56 | 1.202 | 7893.41 | 96546.96

TABLE 6

[Figures 1-3 not reproduced.]

According to the figures, the most suitable temperature found by the optimization is 210 °C.
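Tables 4-6 amount to a single sweep over the inlet air temperature. The sketch below, which is illustrative and not from the report, rebuilds that sweep under the same assumptions (humidities fixed at 0.04 and 0.09, 85% efficiency, and, following Table 5, a product temperature held at 46.32 °C) and confirms that total cost rises with inlet temperature, so the minimum falls at 210 °C, as the figures indicate.

```python
import math

# Illustrative sweep over inlet air temperature (assumptions noted above).
G, r = 0.648, 0.082                   # dry-air flow (kg/s) and dryer radius (m)
T_feed, T_product = 25.0, 46.32       # deg C; T_product held constant (Table 5)
G_air = G / (math.pi * r ** 2)
H_TOG = G_air * 1.02 / (1.17 * G_air ** 0.37)    # Eqns 12-13

total_cost = {}
for T_in in range(210, 240, 5):
    H_in = (1.005 + 1.88 * 0.04) * T_in          # Eqn 2
    T_out = 0.85 * H_in / (1.005 + 1.88 * 0.09)  # Eqns 3-6, inverting Eqn 5
    TM = ((T_in - T_feed) + (T_out - T_product)) / 2
    z = ((T_in - T_out) / TM) * H_TOG            # Eqns 14-17
    area = 2 * math.pi * r * z + 2 * math.pi * r ** 2
    total_cost[T_in] = G * H_in * 3600 * 0.15 + 5555.56 + 1944.44 * area

best = min(total_cost, key=total_cost.get)
print(best)  # 210: electricity dominates, so cost rises with inlet temperature
```

Because the electricity term grows linearly with inlet temperature while the equipment cost shrinks only slightly (the dryer gets marginally shorter), the sweep minimum always sits at the low end of the range considered.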

Free Essays

Methods of Measuring Design Quality

There are several methods by which companies measure the design quality of products, services and processes. Companies select methods in accordance with their goals. Accordingly, the methods for measuring design quality may be generic, like the reusability of a design, or specific, like the size dimensions of a motion system. Whatever metrics are used, there is a trend towards using a combination of weights and rating scales to measure design quality objectively.

One approach to measuring design quality is examining the extent to which the different parts or subsystems of the design depend on one another. From this standpoint, designs that are inflexible and have a high degree of interdependence cannot easily be changed. When a change is introduced it has a cascading effect, and it becomes difficult to estimate the cost of such a change. These designs create a situation where managers very rarely allow changes.

Another way of assessing design quality is to test its reusability. If the needed parts of the design are highly dependent on other details, the design is highly interdependent. In such a situation it is important to see whether the design can be used in a different context or combination. For example, suppose there is an embossing unit designed as part of a plastic stretching line. Can the embossing unit be separated from the line and used in a different plastic stretching line? Continuing with the example, can the plastic extruder be separated from the line and used as part of another plastic strapping line?

Finally, is there a tendency of the system to break in several places when a single change is made? If the design is brittle, then problems appear in areas that have no direct connection with the changed area (Akao, 2004). In the plastic stretching line, if the embossing unit is removed there is a problem in the cooling unit. Such brittleness reduces the reliability of the design and creates maintenance problems: the production personnel cannot rely on the production specifications.

Another approach to measuring the quality of a design is to examine its specifications in terms of the realization of its objectives. The cost of implementing the design and the ease with which the device can be produced are evaluated. A strong correlation is usually present between the design and its specifications, and this correspondence can be used as a reliable measure of design quality (Park, 1996).

Another measure of design quality is design performance (Belavendram, 1995). In this case the evaluation covers a number of factors, such as craftsmanship, the cost of design, the cost of production, and even the return on investment of the design process. When the design is made by an internal team, its performance is even compared with the performance expected from external designers.

Measuring design quality has assumed new importance as designs are increasingly managed to raise the value of the organization to its customers. Instead of tangible end products, some companies evaluate the designs of their business models and improve those designs to ensure that every interaction with a customer is dependable and persuasive.

Design quality is also measured from the point of view of the user. The design is expected to make the process clear to the user. Moreover, the design should make the behavior of the organization, system or process dependable to the user. Further, the design should be such that the process or system provides feedback. In the case of interaction with the customer, the feedback should be both visual and audio; the message, however, should be clear.

The design of a process or a system should be such that the user can effectively trace the path of action (Hoyle, 2005). There should be a close correspondence between the specifications that have been given to the user and the manner in which the system works. Finally, the design should allow for measures of control.

Measuring design quality is often a task of applying general principles of design, which take the form of questions. Is the design trouble-free? An uncomplicated, simple design is preferred. Is the design long-lasting? An adaptable design is desirable, and so is a timeless one: the design should appeal to future generations.

A good design solves the right problem (Hayes, 1998). A good design gives users a few elements that they can combine themselves. A lot of work goes into a high-quality design, and this is reflected in the design itself. One widely used metric of design quality is symmetry; another is the amount of fine-tuning that has been done to improve the design's quality and performance. Quality design can be replicated, yet it is different from the norm. Finally, good design is done in large pieces.

From the perspective of production management, it is important to remember that design quality matters in motion control systems. In this context the quality of design embraces the selection of the motor drive electronics, the positioning mechanism and the motion controller. Design quality emerges from the planning that goes into the development of the system; designing for quality entails a full description and understanding of the process. Meticulous details go into this design stage, such as the precision of the motion, the travel length of every axis and the number of axes.

A good quality design specifies whether the positioning is rotary, linear or a combination of stages (Card & Glass, 1990). The quality of design is also evaluated by the manner in which it incorporates the stage as an integral part of the larger system. The ability of the stage to meet its specifications is another important consideration in measuring design quality. The design also encompasses the way in which the system is mounted on a flat surface to avoid distortions, and the quality of design is judged by the way in which the lifetime requirements of the system are incorporated into the stage specifications.

If the requirements change, the system may have to be moved to a different position during its lifetime. Good quality design takes into consideration the size and the environmental consequences of the system; both horizontal and vertical size constraints need to be considered. Factors like the choice of drive type, the selection of the motor, and the mechanical and electrical aspects of the system's motions are important in appraising design quality.

In the context of customer service, measuring design quality means evaluating the parameters that go into better provision of service to customers. Delivering consistently superior service requires a high level of design quality. The design must include processes, people and technology. Only if the design is of high quality will the company get increased sales from customers who have experienced superior service. The design often extends to aspects of information technology.

The design of products and of service responses based on data is often critical in attracting and retaining customers. The quality of design is reflected in the services provided, such as tracking the choices of individual customers, payment methods, buying patterns, support websites and live chats with technical staff. To be successful, the design must consider factors like support technology, the culture of the organization, the incentive system, and the training and recruitment of customer support staff.

In most situations, such as a production setting or a customer service system, certain metrics are selected for measuring design quality. Usually these metrics are based on the objectives of the organization and are discussed with the designer before the design commences (Wood & Silver, 1989). For example, a company that wants to design a motion control system will discuss with its production engineers the specifications required for the motion control system and agree on a few metrics that will be used to measure the design quality.

For example, the metrics may be the positioning of the linear or rotary stage, the adaptability of the system, the size of the system, the stopping ability of the drive, and the precision of the description of the system. Each of these metrics should be given a weight so that the weights add up to 1. For example, the stopping ability of the drive may be given a weight of 0.3 and the precision of the description of the system a weight of 0.1, and so on. In practice these weights are decided jointly by the management and the designer.

A document for measuring design quality typically has a five-point rating scale attached to each metric. After the design is completed, a rating from 1 to 5 is given to each metric, where 1 is the lowest rating and 5 the highest. Each rating is multiplied by its respective weight; for example, if the stopping ability of the drive gets a rating of 3, this figure is multiplied by its weight of 0.3 to give a score of 0.9. The scores for all the metrics are added to give a composite score. Because the weights add up to 1, the composite score ranges from 1 (the lowest design quality) to 5 (the highest possible quality measure).
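The weighting and rating arithmetic described above fits in a few lines. In the sketch below the metric names, weights and ratings are hypothetical examples for illustration, not values from any standard.

```python
# Hypothetical metrics with weights that must sum to 1.
weights = {
    "stopping ability of the drive": 0.3,
    "precision of system description": 0.1,
    "adaptability": 0.25,
    "size": 0.15,
    "linear/rotary positioning": 0.2,
}
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Ratings on the 1-5 scale, assigned once the design is complete.
ratings = {
    "stopping ability of the drive": 3,
    "precision of system description": 4,
    "adaptability": 5,
    "size": 2,
    "linear/rotary positioning": 4,
}

# Composite score: each rating times its weight, summed over all metrics.
composite = sum(weights[m] * ratings[m] for m in weights)
print(round(composite, 2))  # 3.65 on the 1 (lowest) to 5 (highest) scale
```

Because the weights sum to 1, the composite necessarily stays between 1 and 5, which is what makes the score comparable across designs.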

To sum up, a wide range of metrics is used for measuring design quality. Some are generic metrics, like the flexibility of the design, its adaptability or its brittleness. Others relate to specific situations, like the metrics for measuring the design quality of a motion system. The design quality of customer support systems or HRM recruitment systems is likewise measured with respect to the goals of the design. These metrics are then rated according to a previously decided standard, weighted, and combined into a composite score that gives a comprehensive measure of design quality.


Akao, Yoji (2004). Quality Function Deployment: Integrating Customer Requirements into Product Design. Productivity Press.

Belavendram, N. (1995). Quality by Design. Prentice Hall.

Card, David N. & Glass, R. (1990). Measuring Software Design Quality. Prentice Hall.

Hayes, Bob (1998). Measuring Customer Satisfaction: Survey Design, Use, and Statistical Analysis Methods. ASQ Quality Press.

Hoyle, David (2005). ISO 9000 Quality Systems Handbook. Elsevier.

Park, Sung H. (1996). Robust Design and Analysis for Quality Engineering.

Wood, Jane & Silver, Denise (1989). Joint Application Design: How to Design Quality Systems in 40% Less Time. John Wiley & Sons.





Free Essays

Instructional Design Theory According to M. David Merrill

There are many ways in which an educator can look at learning and the teaching process. M. David Merrill, Patricia Smith and Tillman J. Ragan are three educators who believe that instruction can be done more effectively given the proper approach and a pacing that students are able to follow. Merrill sought to change the way instruction is done, following theories of cognitive learning, by integrating consideration of the learner's background and requirements. Smith and Ragan, on the other hand, favor an approach to instruction that is more audience-centered and based on the real-life experiences of their students.

Instructional Design Theory According to M. David Merrill, Patricia Smith and Tillman Ragan.

An educational psychologist, M. David Merrill has written numerous books and articles in the field of instructional technology and has taken part in the development of various educational materials, including instructional computer products. Merrill has been cited as being among the most productive educational psychologists (Gordon, 1984), among the most frequently cited authors in the computer-based instruction literature (Wedman, 1987), and ranked among the most influential people in the field of instructional technology (Moore & Braden, 1988).

A co-author of the leading book “Instructional Design,” Patricia Smith is an assistant professor at Cy-Fair College in the North Harris Montgomery Community College District. She holds a doctoral degree in Curriculum and Instruction from Louisiana State University.

Smith’s co-author is Tillman J. Ragan, Ph.D., Professor Emeritus of Instructional Psychology and Technology at the University of Oklahoma.

Basic Beliefs

Merrill is a proponent of the Component Display Theory, or CDT. Under CDT, learning is classified along two dimensions: content and performance. Merrill lists four types of information that fall under “content”:

1. Facts which consist of statements and information

2. Concepts that establish relationships between symbols and objects to form a single unit

3. Procedures or ordered/chronological steps required in problem solving

4. Principles that deal with causal relationships

Performance, on the other hand, refers to the way content is used by the learner. It is demonstrated through remembering (information recall), using (practical application) and generalities (finding or developing a new abstract concept from given information). CDT presents data in four major forms: rules, examples, recall and practice. Information is further qualified by secondary forms such as prerequisites, objectives, help, mnemonics and feedback.

Merrill believes that, based on CDT, effective instruction is achieved when it contains all the necessary primary and secondary forms that a learner may use as standards (Merrill, 1983).

The pace of learning is dictated by the accomplishment of the objectives of each task. Evaluation is limited to determining whether the criterion for that particular task is met.

What makes CDT different from other cognitive learning theories is that it takes into consideration the capabilities of the learner. The presentation of information, as well as the progression to the next level or step, is determined by what the learner has already accomplished. Also central to the concept of CDT is the empowerment of the learner: learners select their own instructional strategies. Merrill believes that instructional material becomes highly individualized when designed along CDT guidelines.

While Merrill places huge emphasis on course structures rather than the lesson itself, Smith and Ragan believe that creating instructional material starts with determining the needs, experience and capabilities of its intended users.

“As you design instruction, it is critical that you have a particular audience in mind, rather than centering the design around the content and then searching for an audience that is appropriate” (Smith & Ragan, 1999).

They believed that if instructors knew about the learning background of their students, as well as the students' capacity for assimilating new information, they would be better equipped to speak to or instruct the students in a way that the students can understand.

In their book, Smith and Ragan summarized thousands of studies in the hope of identifying which steps to take and which instructional techniques to use to achieve each type of learning objective. Smith and Ragan also presented the ideas of authentic learning and case-based learning.

“Authentic learning refers to the idea that learners should be presented problems from realistic situations and found in everyday applications of knowledge while case-based learning is based on using case studies to present learners with a realistic situation and require them to respond as the person who must solve a problem.” (Smith & Ragan, 1999).

Merrill, for his part, has presented a newer version of CDT in which advisor strategies have taken the place of learner-control strategies. Merrill also subscribes to a more macro view, which gives more emphasis to course structures and instructional transactions than to presentation forms (Merrill, 1994).

Cognitive vs. Constructivist Learning

Merrill belongs to the theorists who base their ideas on cognitive learning. He believed that a systematic, structured approach to learning, using repetition and consistency, makes the instruction method more effective. The weakness of cognitive learning lies in its perceived inflexibility in adapting to new situations or new methods of accomplishing things. Merrill sought to address this by proposing structured instruction tailored to the requirements and situation of the learner.

Smith and Ragan take a more constructivist, or individualistic, approach in which learning is based on interaction with real-life situations. Adjustment to new situations is easier, and the learner is capable of interpreting multiple realities and making an individual choice of method for solving a problem or accomplishing a task. The flaw in this design, however, is that there are situations in which a degree of conformity is expected and “individual approaches” will not be acceptable.


Gordon, et al. (1984, Aug/Sep). Educational Researcher. American Educational Research.

Merrill, M.D. (1983). Component Display Theory. In C. Reigeluth (Ed.), Instructional Design Theories and Models. Hillsdale, NJ: Erlbaum Associates.

Merrill, M.D. (1994). Instructional Design Theory. Englewood Cliffs, NJ: Educational Technology Publications.

Moore, D. M., & Braden, R. A. (1988, March). Prestige and influence in the field of educational technology. Performance & Instruction, 21(2), 15-23.

Smith, P., & Ragan, T. (1999). Instructional Design (2nd ed.). New York: John Wiley & Sons.

Wedman, J.M., Wedman, J.F., & Heller, M.O. (1987). A computer-prompted system for objective-driven instructional planning. Journal of Computer-Based Instruction, 14(1).




Free Essays

Product and Services Design

Design is one of the components of operations management, and product and service design, specifically, is one of its processes.

As stated in Morris (2009, p. 22), product design is defined as the idea generation, concept development, testing and manufacturing or implementation of a physical object or service.

“Service design is the activity of planning and organizing people, infrastructure, communication and material components of a service, in order to improve its quality, the interaction between service provider and customers and the customer’s experience. Service design methodologies are used to plan and organize people, infrastructure, communication and material components used in a service. The increasing importance and size of the service sector, both in terms of people employed and economic importance, requires services to be accurately designed in order for service providers to remain competitive and to continue to attract customers.” (Morelli, 2002, p.3-17)

According to Slack, Chambers and Johnston (2010, pp. 113-134), good product and service design is important for both companies and their customers. It fulfils what customers want from the product or service, and it also generates profit for the company. The performance of a product and service design is measured by its quality, speed, dependability, flexibility and cost. The stages of product and service design include concept design, concept screening, preliminary design, evaluation and improvement, and prototype and final design. Together these stages finally yield a fully developed product. As a result, a concept, a package and a process are designed in product and service design.

“A concept is the understanding of the nature, use and value of the service or product; a package of ‘component’ products and services that provide those benefits defined in the concept; the process defines the way in which the component products and services will be created and delivered.” (Slack, N., Chambers, S. & Johnston, R., 2010)

Free Essays

Social Life of Small Urban Space

It has been shown that people like to get involved in social life; they are interested in being part of a larger whole. Whyte's study agrees with this. Observing what other people are doing is a valuable tool used by the majority of people to understand the behavior of others. A public plaza is a good place to practice this, so when people are isolated and not allowed to observe, the public place loses its meaning. Forcing people to sit in a certain way, without any connection to other activity, is boring, so people try to avoid it (Almansoour).

The area directly in front of a building should always communicate with the building's form, entryways and design style. A building that lacks communication with the street level will be perceived as cold and uninviting (Perry). Whyte's study of different plazas in New York City shows that people tend to sit wherever there is a place to sit. If a plaza is close to the street or in front of a public place such as a library, it becomes more occupied than others. The study indicates that observation is an important key when designing a plaza (Almansoour).

The amount of sitting area, as well as its width, should be adjusted based on the context the urban park is in (Hirose). It has also become necessary to characterize open spaces in higher-density areas. As long as an open space is planned in the right place within its area, it can provide a positive effect for the people who use it (Alotaibi). A variety of factors affects the whole task of designing a sitting area, such as the occupants' morale, culture, lifestyle and physical size, or a combination of the above.

Urban spaces are mixed in character, yet each has some characteristics that distinguish it from other spaces; therefore I agree with flexible zoning ordinances for designing sitting areas in urban parks (Hirose). Regulation is uncomfortable, but it may provide a more uniform approach to design. Why shouldn't every building have plentiful and inviting exterior sitting spaces? But what would that regulation look like? The authors' data seems a bit confused.

Analyzing light, square area and open spaces did not seem to produce any conclusive findings. Even the data on the amount of seating did not perform as the authors would have us believe: plazas with large amounts of seating were still often underused (Perry). Finally, the difference between good plaza design and poor plaza design comes down to a combination of personal experience and trial and error. A designer may have good intentions, but if everything is not to look the same, a designer has to be given the opportunity to experiment (Perry).

Free Essays

The Most Influential Designers of the Century

Paul Poiret (1879 – 1944) is best known for liberating women from corsets. Lacking certain technical dressmaking skills, Poiret made draping the focal point of his designs. He was interested in simple shapes that freed the body and, being inspired by Fauvism, Japanese culture and the Ballets Russes, mostly used exotic colours. He was the king of the Oriental Era of the 1910s and a natural businessman. He expanded the limits of what fashion meant at the time and brought some serious innovations to the industry. The kimono coat, the “hobble” skirt, “lampshade” tunics and “harem” pantaloons are all signature Paul Poiret outfits.

Along with other designers like Mariano Fortuny, Paul Poiret helped to establish what we now call the Classical style, and of course he is one of the designers who define Exoticism. While researching this revolutionary designer, I came up with the idea of three types of women he designed for: 1) the goddess-like woman in a richly colored, empire-waisted, beautifully draped dress; 2) the exotic, seductive, slave-like woman in a turban and harem pantaloons or a hobble skirt; 3) the rich, extravagant Eastern/Japanese woman in fur, velvet and other lush fabrics.

Gabrielle “Coco” Chanel (1883 – 1971) is rightfully called the queen of the 1920s. She was (and still is) one of the most influential designers of all time. The style that Chanel promoted is considered classic today, not to mention such timeless wardrobe essentials as the little black dress or the Chanel suit. Chanel started off by shortening hemlines so that women, who now (post-WWI) had to work in factories, would feel more comfortable. Using fabrics that were unconventional at the time, like jersey and tweed, she adapted menswear to women's needs and effectively transformed what a modern woman means.

Her woman was independent and strong. She lowered the waistline to upper-hip level, thus creating an androgynous, boyish silhouette: La Garconne. Combining elegance and practicality, she used simple materials to create accessories, for the first time in history daring to mix pearls with glass beads and inventing “poor chic”. In contrast to Poiret, Coco Chanel was an experienced seamstress and paid great attention to details. Later in her career she stopped using sewing machines and started making every garment by hand.

She was also known for her signature embroidery, which was carried out by the Russian house Kitmir exclusively for her. For me, Chanel stands for timeless elegance; she is an inspirational image of independence and innovation. Nowadays Karl Lagerfeld is head of design at the house of Chanel. Here are my three favorite looks this season (from pret-a-porter A/W 2012):

Madeleine Vionnet (1876 – 1975) was the first designer to adapt her haute couture designs to the high street, and by doing so she transformed the commercial fashion industry. Vionnet combined modern business practices with innovation in dressmaking.

She is also praised for taking garment construction to the highest level: adopting and perfecting the bias cut (many people say she invented the bias cut, but in her biography Vionnet clearly states that this is not true), making dresses with one seam, and showing off outstanding cutting skills in each garment. Vionnet promoted a style which I would describe as Grecian aesthetics minimized and polished into a clean, sleek, ageless idea of beauty. In 1925 British Vogue, articulating Vionnet's appeal, declared her ‘perhaps the greatest geometrician among all French couturiers’.

Her ideas survived and are continued with great success in the house of Vionnet (http://vionnet.com). Here are some of my favorite looks this season:

Elsa Schiaparelli (1890 – 1973), an Italian designer and the greatest rival of Chanel, was a very influential figure in 1930s fashion. Fascinated by Surrealism, she formed one of the most iconic partnerships between art and fashion while working with the world-renowned artist Salvador Dali. (I must mention, though, that she collaborated with many other artists of the time.)

Unfortunately, she did not adapt to the changes after WWII, and her business had to close in 1954. Today her garments are kept in museums and she is praised as a genius, a messiah of ultramodern couture. A few of her creations are particularly famous: the Tear (1), Lobster (2) and Skeleton (3) dresses and the Shoe hat (4). Claire McCardell (1905 – 1958) is regarded as the inventor of the "American Look". With the rationing of silk and wool during WWII, she employed corduroy, seersucker, denim and cotton fabrics to create sensational designs. She said that "All of us, any of us, deserves the right to a good fashion".

Her Monastic and Popover dresses were massive hits, not to mention the cloth ballet slippers that have survived until today. She was the originator of mix-and-match separates, spaghetti straps, pedal-pushers, bareback summer dresses, strapless swimsuits, and feminine denim fashion. Immediately after WWII, Christian Dior (1905 – 1957) jumped into the fashion arena. He launched his "New Look" in 1947 and it was an immediate success. After years of rationing, Dior cut himself loose and designed dresses with full skirts (making these required up to 50 yards of fabric), "waspie" waists and a slender shoulder line.

He brought back femininity and hope for a better life. Although many people in Europe were shocked by such drastic changes, Americans gladly accepted the new breeze, and much of Dior's income in the first years came from exports to the USA. Unfortunately, the genius died 10 years later, leaving the young master Yves Saint Laurent as artistic director of his house. Today the house of Dior is one of the strongest leaders in the fashion industry and one of my personal favorites as well. Here are my three favorite looks from the A/W 2012 haute couture collection:

Yves Saint Laurent (1936 – 2008) was hailed as the man who (at the age of 22) saved the house of Dior, a king of French fashion and the first couturier to present ready-to-wear collections. I think the most important period began when he opened his own house in 1962. He was a genius who cared about empowering women, and (much like Schiaparelli) he aimed to shock. Thus the trouser suit – Le Smoking – was born. It was a trend-setting evening trouser suit; it became Yves Saint Laurent's trademark and a must-have in the modern woman's wardrobe.

We have to be grateful to him for blazers, see-through blouses and a business wardrobe for women. He was one of the main figures of the '60s and '70s, taking the best out of pop culture and translating it into fashion (the Andy Warhol-inspired dresses). He was also a great lover of art, so he designed a collection of dresses inspired by his favorite painter, Piet Mondrian. The "Mondrian Look" (one particular dress especially) is as famous as the New Look or Elsa Schiaparelli's Tear dress. The Yves Saint Laurent house continues to make androgynous designs for women under the leadership of the newly appointed creative director Hedi Slimane.

Here are my favorite looks from the Spring/Summer 2013 ready-to-wear collection: Hubert de Givenchy (1927 – today) is best known for his elegant, refined style and for his popularity with celebrities such as Audrey Hepburn (who became a symbol of the house of Givenchy, popularizing him in movies like "Sabrina", "Breakfast at Tiffany's" and "My Fair Lady"), Jackie Kennedy, Grace Kelly and many others. Givenchy introduced a new concept of mix-and-match separates (unthinkable in the 1950s). His signature garments were the little black dress and the "Bettina" blouse.

Creating elegance for 40 years straight, the Givenchy house continues to astonish the world today under its new leader, Riccardo Tisci. Here are my favorite looks from A/W 2012: Givenchy's idol was Cristóbal Balenciaga (1895 – 1972), a great Spanish couturier and colorist. He was strictly modern, very technical and a master of illusion. He invented the three-quarter-length sleeve and the stand-away collar. He taught fashion design classes, inspiring other designers such as Oscar de la Renta, André Courrèges, Emanuel Ungaro, Mila Schön and Hubert de Givenchy.

He was so innovative that he designed waistless dresses and tunics in the '50s, proving to be fashion-forward by almost a decade. However, in 1968 he decided to close his business. The Balenciaga house was bought by the Gucci group and today is run by Nicolas Ghesquière, one of the most talented designers of today (as praised by Vogue). Here are my favorite looks of the season: Mary Quant (1934 – today) is a British designer and fashion icon who has become synonymous with the "swinging sixties" in London. She is credited with the invention of the miniskirt, the skinny-rib sweater and false lashes.

She reinvented the use of PVC and created the popular "Wet Look". She popularized hot pants and eventually received an OBE and the British Fashion Council's Hall of Fame award for her outstanding contribution to the fashion industry. Through the '70s and '80s she concentrated on the cosmetics industry and interior design, and her clothing lines became of secondary importance. Today there are about 200 Mary Quant Colour shops in Japan, where her cosmetic products remain popular. Vivienne Westwood (1941 – today) is the mother of the '70s punk era.

Together with Malcolm McLaren she established a brand that specialized in clothing with bondage pants, kilts, chains, leather jackets and T-shirts with provocative imagery. Popularized by the band McLaren managed, the Sex Pistols, the look became a new wave of fashion. It was quickly accepted amongst teenagers and young adults and, I think, it captured the overall atmosphere of self-expression in the '70s. Vivienne did not stop there, though; she went on to receive the prestigious OBE and DBE honours and opened quite a few labels under her name: Gold Label, Anglomania, Red Label and Man.

Her house works successfully today, and here are my favourite looks from the A/W 2012 collection: Rei Kawakubo (1942 – today) is a Japanese avant-garde designer who managed to enter the international fashion scene with an uproar. In 1983 (together with another Japanese designer, Yohji Yamamoto) she presented a new concept in fashion – a deconstructed silhouette, colourless, distressed fabrics and garments full of holes. The look was immediately dubbed "Hiroshima chic", the "boro look", the "beggar look" and similar.

Her distinctive point of view shocked and amused the West, and that earned her a place in the Parisian Chambre Syndicale du Prêt-à-Porter. Today she is the head of her own company, Comme des Garçons, one of the most popular brands in the world. Here are my favourite looks from this season: Yohji Yamamoto (1943 – today) became popular at the same time as Rei Kawakubo. Presenting an unprecedented style concept to the Western fashion world with his 1983 cutwork collection, he was instantly acknowledged and recognized.

His asymmetrical designs always take the viewer by surprise, his commercially successful designs are sold worldwide, and together with Rei Kawakubo, Yohji Yamamoto is held responsible for putting Tokyo on the fashion map. The wonderful thing is that despite the similarities in Kawakubo's and Yamamoto's designs (and their life together in the '80s and '90s), they have different aesthetics and distinctive directions. Kawakubo strikes me as more conceptual, while Yamamoto is by far the more elegant designer. Here are my favourite looks:

John Galliano (1960 – today) is one of the most controversial designers of today but, nevertheless, a genius. In short: he graduated from Central Saint Martins College of Art and Design and was awarded "British Designer of the Year" in 1987, 1994 and 1995. Due to frequent financial troubles he accepted a job offer at Givenchy, and in two years' time he was transferred to Dior as creative director of the house. He also has his own house under his name. Achieving that amount of success in such a short period of time, he has proven himself a genius, and of course he has plenty of respectable awards to prove it.

His creations are magical, his style is very dramatic and his presentations are always theatrical. Despite his recent "crimes" (in 2011 he was dismissed from Dior after being found guilty of racial insults in public), the Galliano name still stands for unspeakable elegance and innovation, and his garments are highly collectible. It is unclear to me what happened to the genius after he was dismissed from Dior. The house of Galliano is working without its original captain, under the leadership of Bill Gaytten. However, his idea of beauty prevails, and I think he is the next Chanel. Here are my favourite looks from this season:

Alexander McQueen (1969 – 2010) was a magnificent designer who left a huge imprint in his short lifetime. He won a great number of awards for his distinctive dramatic point of view, including Commander of the Order of the British Empire and International Designer of the Year 2003 from the Council of Fashion Designers of America, among others of similar caliber. Ever since he entered the fashion industry he was considered a genius. Fashion editors were left in awe after each new collection, not to mention the infamous VOSS. He is well known for his collaborations with celebrities such as Lady Gaga, Björk, Kanye West and Katy Perry.

I would say his style was eccentric and avant-garde but extremely elegant at the same time. Alexander McQueen was original in every way and extremely technical as well. After the genius's unfortunate and untimely death in 2010, Sarah Burton took the helm of Alexander McQueen's house and added her own feminine touch to the name. She also designed the wedding dress for the royal wedding of Kate Middleton and Prince William. Alexander McQueen's house runs successfully today, and here are a few wonderful creations from this year's Autumn/Winter collection:


Design a Repeater for a Digital RF Signal

Abstract: Repeaters for digital TV broadcasting can use either analogue or digital techniques. The purpose of using a repeater is to boost signals into areas of weak coverage in any radio communication system. However, wave interference means that the repeater usually requires a frequency shift for analogue-modulated signals; for digitally modulated signals it may be possible to use the same frequency. This paper investigates and designs an RF repeater that mitigates inter-symbol interference by incorporating a delay between the received and transmitted signals.

This project also reviews the basics of current Digital Video Broadcasting – Terrestrial (DVB-T) techniques and selects DVB-T as a suitable choice for the lab experiment. The practical side of this project is to design and build a repeater incorporating a suitable electrical delay.

Contents
1.0 Introduction
1.1 Background
1.2 Aim of this project
1.3 Project objectives
1.4 Project deliverables
2.0 Problem analysis
2.1 Repeater
2.1.1 Analogue repeaters
2.1.2 Digital repeaters
2.2 Inter-symbol interference
2.3 Multipath propagation
2.3.1 Multipath fading
2.4 The TV channels
2.5 Transmission cable
2.6 Signal amplifiers
2.7 Transmission delay (coaxial cable)
3.0 Possible solution
3.1 RF amplifier
3.1.1 The transistor amplifier
3.1.2 Ultra High Frequency Transistor Array (HFA)
3.1.3 Surface-mount technology
3.1.4 Surface-mount monolithic amplifier
3.1.5 Loft box: 8-way home distribution unit
3.1.6 Maxview signal booster
3.1.7 Antenna
4.0 Design
4.1 Circuit design
4.2 PCB design
5.0 Implementation
5.1 Implementation with HFA3127
5.2 Implementation with MAV-11SM amplifier
6.0 Test results
6.1 Laboratory test results
6.2 Field test results
7.0 Result discussion
8.0 Conclusion
Future work
Works Cited

Figure List
Figure 1 System block diagram
Figure 2 Passive and active repeater block diagram
Figure 3 Analogue repeater
Figure 4 Digital repeater
Figure 5 Channel management for digital repeaters
Figure 6 Channel management for analogue repeaters
Figure 7 Broadcast in a valley with digital repeaters
Figure 8 101101 transmitted data
Figure 9 Received data
Figure 10 Transmitted data vs. received data
Figure 11 Multipath propagation
Figure 12 Cable loss in dB (Antenna basics, 2008)
Figure 13 Linear change of phase vs. frequency
Figure 14 The basic transistor amplifier
Figure 15 HFA3127 transistor array
Figure 16 MAV-11SM amplifier
Figure 17 Suggested PCB layout with MAV-11SM
Figure 18 Loft box home distributor
Figure 19 Maxview signal booster
Figure 20 Antenna used for this project
Figure 21 Interference between relay signal and main transmitted signal
Figure 22 ISIS schematic of circuit design
Figure 23 PCB design according to the datasheet in ARES
Figure 24 3D view of the PCB
Figure 25 Circuit with HFA3127 amplifier
Figure 26 MAV-11SM amplifier circuit board
Figure 27 HFA3127 gain with soldering error
Figure 28 HFA3127 amplifier gain
Figure 29 One MAV-11SM amplifier gain
Figure 30 Two MAV-11SM amplifier circuits give more gain
Figure 31 Three amplifiers together gave the maximum gain
Figure 32 Low-quality picture with a normal antenna
Figure 33 Picture with the repeater-connected antenna
Figure 34 Rebroadcasting connection

1.0 Introduction

1.1 Background

Digital Video Broadcasting (DVB) is being adopted as the standard for digital television in many countries. The DVB standard offers many advantages over the previous analogue standards and has enabled television to make a major step forward in terms of its technology.

Digital Video Broadcasting (DVB) is now one of the success stories of modern broadcasting. Take-up has been enormous, and it is currently deployed in over 80 countries worldwide, including most of Europe and also within the USA. It offers far greater efficiency in terms of spectrum usage and power utilisation, as well as considerably more facilities, the prospect of more channels and the ability to work alongside existing analogue services. (Pool, 2002) These days, when there are many ways in which television can be carried from the "transmitter" to the "receiver", no one standard can be optimised for all applications. As a result there are many different forms of the DVB standards, each designed for a given application. The main forms of DVB are summarised below:

DVB Standard | Meaning | Description
DVB-C | Cable | The standard for delivery of video services via cable networks.
DVB-H | Handheld | DVB services to handheld devices, e.g. mobile phones.
DVB-RSC | Return satellite channel | Satellite DVB services with a return channel for interactivity.
DVB-S | Satellite services | The DVB standard for delivery of television/video from a satellite.
DVB-SH | Satellite handheld | Delivery of DVB services from a satellite to handheld devices.
DVB-S2 | Satellite second generation | The second generation of DVB satellite broadcasting.
DVB-T | Terrestrial | The standard for digital terrestrial television broadcasting.

Digital Video Broadcasting – Terrestrial (DVB-T): The common perception of digital television these days is of broadcasts emanating from signal towers, bouncing off satellites, and being beamed to home receivers.

This is the magic of satellite transmission, and it is reliable as long as the view of those satellites is not obscured. However, this is not the only way in which television signals are transmitted. Another popular method of transmitting signals is digital video broadcasting – terrestrial (DVB-T). When broadcasters employ this method, the digital signals do not leave the earth. The signals transmitted using DVB-T do not travel via cable, though; rather, they go from antenna to aerial antenna, from transmitter to home receiver. Digital signals are routinely transmitted using terrestrial methods.

The transmission method has different names in different parts of the world. DVB-T is the name used in Europe and Australia. North American customers receive these signals using a set of standards approved by the Advanced Television Systems Committee (ATSC). In Japan, it is known as Integrated Services Digital Broadcasting – Terrestrial (ISDB-T). DVB-T broadcasters transmit data using a compressed digital audio-video stream, with the entire process based on the MPEG-2 standard. These transmissions can include all kinds of digital broadcasting, including HDTV and other high-bandwidth services.

This is a vast improvement over the old analogue signals, which required separate streams of transmission. Oddly enough, some DVB-T transmissions take place over analogue networks, with the antennas and receivers getting some helpful technological upgrades along the way. (Pool, 2002)

1.2 Aim of this project

The aim of this project is to investigate the design of a repeater for a DVB-T system, incorporating a delay between the received and transmitted signals to avoid inter-symbol interference (ISI). It is useful to use a repeater to boost the signal into areas of weak coverage in any radio-wave communication system.

However, wave interference means that the repeater usually requires a frequency shift for analogue-modulated signals. For digitally modulated signals it may be possible to use the same frequency. The project will review the basics of current digital systems such as DVB (broadcast TV) and WLAN, and identify a suitable choice for a lab experiment. The practical side will be to design and build a repeater incorporating a suitable transmission delay.

1.3 Project objectives

1. Investigate and learn the effect of inter-symbol interference on the received signal.
2. Investigate and learn the delay effect on the received signal and the cause of the delay.
3. Investigate and learn about multipath propagation and the Doppler shift of the frequencies.
4. Investigate and learn about Digital Video Broadcasting (DVB) techniques.
5. Investigate and learn about the transmission delay of coaxial cable.
6. Investigate and learn about different types of amplifier.
7. Design the repeater circuit.
8. Implement the circuit.
9. Test the circuit.

Figure 1 System block diagram

1.4 Project deliverables

* System design
* Circuit design
* Documentation

2.0 Problem analysis

2.1 Repeater

Repeaters provide an efficient solution to increase the coverage of broadcasting networks. Network operators usually first put high-power transmitters at strategic points to quickly ensure attractive coverage and then, in a second step, increase their coverage by placing low-power repeaters in dead spots or shadow areas, such as a tunnel, a valley or an indoor area. A repeater is simply a device that receives an analogue or digital signal and regenerates the signal along the next leg of the medium.

In DVB-T networks, there are two different kinds of repeaters: passive repeaters, also called gap-fillers, and active repeaters, also called regenerative repeaters. A passive repeater receives and retransmits a DVB-T signal without changing the signalling information bits; the signal is only boosted. An active repeater can demodulate the incoming signal, perform error recovery and then re-modulate the bit stream. The output of the error recovery can even be connected to a local re-multiplexer to enable the insertion of local programmes.

This means that the entire signal is regenerated. The building blocks of the passive and active repeater configurations are shown in Figure 2.

Figure 2 Passive and active repeater block diagram

In a first step, DVB-T broadcasters, like all broadcasters, launch their networks with high-power transmitters at strategic points in order to quickly ensure attractive coverage to TV operators; then, in a second step, they increase their coverage by placing low-power repeaters in shadow areas. To repeat a DVB-T signal, two solutions can be used:

* Analogue repetition: in this case, repeaters use well-known techniques such as down-conversion, filtering, up-conversion and amplification. The signal is only boosted.
* Digital repetition: this new type of repeater uses a professional DVB-T receiver to recover the programme stream (and correct all errors) carried in the RF channel, then performs a new modulation followed by up-conversion and amplification. This means that the entire signal is regenerated.

2.1.1 Analogue repeaters

In the case of analogue repetition, the output signal quality cannot exceed the quality of the received signal, because the signal is not regenerated.

Figure 3 Analogue repeater

Furthermore, the process degrades the signal: the phase noise of the local oscillator degrades the phase noise of the received signal and creates inter-modulation. The local oscillator phase noise adds to the phase noise of the received signal. Under these conditions, what is the performance of analogue repetition in terms of Modulation Error Ratio (MER) and Carrier-to-Noise ratio (C/N)? Of course, performance is linked to the technology, but analogue repetition cannot be sustained ad infinitum. And if one link in the analogue repetition chain is weak, the whole system is deficient. (Trolet, 2002)

2.1.2 Digital repeaters

In the case of digital repetition, the entire signal is regenerated; this means that repeaters, like transmitters, ensure the quality of the broadcast signal as long as they are able to demodulate it.

Figure 4 Digital repeater

The output signal quality is independent of the input signal quality:

* Phase noise is linked to the local oscillator only.
* A weak link in a digital repetition chain is erased by the following repeater.
* Several digital repeaters can be cascaded without any cumulative degradation.

Drawback of digital repeaters

The delay inside a digital repeater is longer than the guard interval, so the signal cannot be repeated on the frequency of the main transmitter: main transmitters and repeaters cannot operate in a Single Frequency Network (SFN), even with 8K carriers and a guard interval of 1/4. (Trolet, 2002)

Figure 5 Channel management for digital repeaters

The delay inside an analogue repeater is shorter than the guard interval, which allows main transmitters and repeaters to operate in SFN mode.

Figure 6 Channel management for analogue repeaters
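To make the guard-interval constraint concrete, here is a small sketch (not from the original paper; the numbers assume DVB-T in an 8 MHz channel, where the elementary period is 7/64 µs, giving a 896 µs useful symbol in 8K mode) that computes the guard duration and checks whether a repeater's internal delay still permits SFN operation:

```python
# Hypothetical illustration of the SFN guard-interval constraint.
# Assumes DVB-T in an 8 MHz channel: elementary period T = 7/64 us.
T_ELEM_US = 7 / 64            # elementary period in microseconds
CARRIERS_8K = 8192            # FFT size in 8K mode

def guard_interval_us(fft_size: int, guard_fraction: float) -> float:
    """Guard interval duration in microseconds."""
    useful_symbol_us = fft_size * T_ELEM_US   # 896 us for 8K mode
    return useful_symbol_us * guard_fraction

def sfn_possible(repeater_delay_us: float, fft_size: int = CARRIERS_8K,
                 guard_fraction: float = 1/4) -> bool:
    """A repeater can share the main transmitter's frequency only if
    its internal delay fits inside the guard interval."""
    return repeater_delay_us < guard_interval_us(fft_size, guard_fraction)

print(guard_interval_us(CARRIERS_8K, 1/4))  # 224.0 us
print(sfn_possible(5.0))                    # short analogue delay: True
print(sfn_possible(1000.0))                 # long digital-repeater delay: False
```

The example delays (5 µs and 1000 µs) are illustrative, but they show why a digital repeater's millisecond-scale demodulation delay rules out SFN operation while an analogue repeater's does not.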

But with this technique, the overlap between repeater cells and the transmitter cell cannot be optimised or adjusted. Analogue repeaters have no possibility of buffering the signal; they cannot add delay to move the overlap zone. To optimise a single-frequency network with this technique, there are two solutions:

* Move the repeater, which means finding a new broadcasting site.
* Reduce the output power of your repeaters and forbid overlap.

So, to build an efficient Single Frequency Network (SFN), broadcasters benefit from using transmitters:

* More freedom in defining the size of the cells
* More freedom in defining the repeater locations

Benefits of digital repeaters

* As long as the repeater is able to demodulate the RF channels, output signal quality is independent of input signal quality.
* Output MER > 33 dB (Trolet, 2002).
* In theory, thanks to forward error correction (FEC) and the output signal quality, digital repeaters can be cascaded ad infinitum. This is an efficient solution for broadcasting in valleys: TV viewers and distant repeaters share the broadcast signal.

Figure 7 Broadcast in a valley with digital repeaters

The demodulation process, down to the programme stream, allows broadcasters to insert a local multiplexer in order to customise the content for local broadcasting. More and more, local communities are asking for their own local programmes. Digital repeaters offer a flexible solution to the network:

* A shadow area can be covered by several repeaters.
* Repeaters operate together in SFN mode without any external references (10 MHz and 1 PPS) (Trolet, 2002).
* In their internal memory, digital repeaters can buffer the signal so as to optimise overlaps.

2.2 Inter-symbol interference

Inter-symbol interference (ISI) is an unavoidable consequence of both wired and wireless communication systems. Morse first noticed it on the transatlantic telegraph cables transmitting messages using dots and dashes, and it has not gone away since. He handled it by simply slowing down the transmission.

Figure 8 101101 transmitted data

Figure 8 shows a data sequence, 1, 0, 1, 1, 0, which we wish to send. This sequence is in the form of square pulses. Square pulses are nice as an abstraction, but in practice they are hard to create and also require far too much bandwidth.

Figure 9 Received data

Figure 9 shows each symbol as it is received. It also shows that the transmission medium creates a tail of energy that lasts much longer than intended. The energy from symbols 1 and 2 goes all the way into symbol 3; each symbol interferes with one or more of the subsequent symbols. The circled areas show areas of large interference.

Figure 10 Transmitted data vs. received data

Figure 10 shows the actual signal seen by the receiver. It is the sum of all these distorted symbols. Compared to the transmitted signal, the received signal looks quite indistinct.

The receiver does not actually see this whole signal; it sees only the little dots, the value of the amplitude at the timing instant. For symbol 3, this value is approximately half of the transmitted value, which makes this particular symbol more susceptible to noise and incorrect interpretation; this phenomenon is the result of symbol delay and smearing. This spreading and smearing of symbols, such that the energy from one symbol affects the following ones and the received signal has a higher probability of being interpreted incorrectly, is called inter-symbol interference, or ISI.
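The smearing described above can be sketched numerically (a hypothetical illustration, not part of the original experiment; the bit pattern and channel taps are invented): convolving the transmitted pulses with a channel whose impulse response has a decaying tail spreads each symbol's energy into its neighbours.

```python
# Minimal sketch of inter-symbol interference: each transmitted symbol
# leaks a decaying tail of energy into the following symbol periods.
bits = [1, 0, 1, 1, 0, 1]          # the transmitted data (illustrative)
channel = [1.0, 0.45, 0.2]          # assumed impulse response: main tap + tail

def received_samples(bits, channel):
    """Discrete convolution of the bit sequence with the channel taps."""
    out = [0.0] * (len(bits) + len(channel) - 1)
    for i, b in enumerate(bits):
        for j, h in enumerate(channel):
            out[i + j] += b * h
    return out

rx = received_samples(bits, channel)
# Sample 3 carries its own '1' (1.0) plus the tail of the previous '1'
# (0.45), so the decision value is distorted from 1.0 to 1.45.
print(rx)
```

If the tail (the delay spread) were shorter than one symbol period, each `rx` sample would depend on one bit only and no ISI would occur, which is exactly the slowing-down remedy described below.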

ISI can be caused by many different things: filtering effects in hardware, frequency-selective fading, non-linearity and charging effects. Very few systems are immune to it, and it is nearly always present in wireless communications. Communication system designs, both wired and wireless, nearly always need to incorporate some way of controlling it. The main problem is that energy which we wish to confine to one symbol leaks into others. So one of the simplest things that can be done to reduce ISI is simply to slow down the signal.

Transmit the next pulse of information only after the received signal has damped down. The time it takes for the signal to die down is called the delay spread, whereas the original duration of the pulse is called the symbol time. If the delay spread is less than or equal to the symbol time, then no ISI results; otherwise, it does. (Charan, 2002) Slowing down the bit rate was the main way ISI was controlled on those initial transmission lines. Then faster chips arrived, allowing signal processing to control ISI, and transmission speeds increased accordingly.

2.3 Multipath propagation

Multipath propagation is caused by multiple receptions of the same signal. In a city environment or indoors, the signal travels along different paths from the transmitter (Tx) to the receiver (Rx):

* Signal components are received at slightly different times (delays)
* These components are combined at the Rx
* The result is a signal that varies widely in amplitude, phase or polarization

2.3.1 Multipath fading

When the components add destructively due to phase differences, the amplitude of the received signal is very small.

At other times the components add constructively, and the amplitude of the received signal is large. These amplitude variations in the received signal, called signal fading, are due to the time-variant characteristics of the channel. Relative motion between Tx and Rx (or surrounding objects causing e.g. reflection) causes random frequency modulation.

Figure 11 Multipath propagation

Each multipath component has a different Doppler shift. The Doppler shift can be calculated using:

f_d = (v / λ) · cos θ

where v is the velocity of the terminal, θ is the spatial angle between the direction of motion and the incoming wave, and λ is the wavelength.
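As a quick sanity check of the Doppler formula above (a hypothetical example; the 600 MHz carrier and 30 m/s speed are chosen purely for illustration), a terminal moving straight towards a UHF transmitter sees a shift of a few tens of hertz:

```python
import math

C = 3e8  # speed of light in m/s (approximate)

def doppler_shift_hz(v_mps: float, carrier_hz: float, theta_rad: float) -> float:
    """f_d = (v / lambda) * cos(theta), with lambda = c / f_carrier."""
    wavelength = C / carrier_hz
    return (v_mps / wavelength) * math.cos(theta_rad)

# Terminal moving straight towards a 600 MHz transmitter at 30 m/s:
print(doppler_shift_hz(30.0, 600e6, 0.0))   # 60.0 Hz
# Motion perpendicular to the incoming wave gives essentially no shift:
print(doppler_shift_hz(30.0, 600e6, math.pi / 2))
```

Each multipath component arrives at its own angle θ, which is why the components carry different Doppler shifts and the combined signal exhibits random frequency modulation.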

The three most important effects of multipath fading and moving scatterers are:

* Rapid changes in signal strength over a small travelled distance or time interval
* Random frequency modulation due to varying Doppler shifts on different multipath signals
* Time dispersion (echoes) caused by multipath propagation

2.4 The TV channels

Hertz (Hz) means cycles per second. (Heinrich Hertz was the first to build a radio transmitter and receiver while understanding what he was doing.) kHz means 1,000 hertz, MHz means 1,000,000 hertz, and GHz means 1,000,000,000 hertz. The radio frequency spectrum is divided into major bands:

Band                                Frequency            Wavelength
VLF  very low frequency             3 kHz – 30 kHz       100 km – 10 km
LF   low frequency                  30 kHz – 300 kHz     10 km – 1 km
MF   medium frequency               300 kHz – 3 MHz      1 km – 100 m
HF   high frequency                 3 MHz – 30 MHz       100 m – 10 m
VHF  very high frequency            30 MHz – 300 MHz     10 m – 1 m
UHF  ultra high frequency           300 MHz – 3 GHz      1 m – 100 mm
SHF  super high frequency           3 GHz – 30 GHz       100 mm – 10 mm
EHF  extremely high frequency       30 GHz – 300 GHz     10 mm – 1 mm

(Antenna basics, 2008)

The UK uses UHF for terrestrial television transmissions, with both PAL-I analogue broadcasts and DVB-T digital broadcasts sharing the band. The following table is a handy channel/frequency conversion table showing the E channel number, the PAL-I vision and sound carrier frequencies, and the centre frequency for digital tuning. The frequency plan for the UK gives each channel an 8 MHz bandwidth – the space in the spectrum that each channel is allotted. The PAL-I standard specifies a video bandwidth of 5.0 MHz and an audio carrier at 6 MHz.
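The band edges in the table above all follow from the relation λ = c/f. A small sketch (illustrative only) reproduces the table's boundaries:

```python
C = 3e8  # speed of light in m/s (approximate)

def wavelength_m(freq_hz: float) -> float:
    """Wavelength in metres for a given frequency: lambda = c / f."""
    return C / freq_hz

print(wavelength_m(3e3))    # 100000.0 m = 100 km (lower edge of VLF)
print(wavelength_m(300e6))  # 1.0 m (boundary between VHF and UHF)
print(wavelength_m(300e9))  # 0.001 m = 1 mm (upper edge of EHF)
```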

The DVB-T transmissions must fall within this channel plan, resulting in each digital channel also having a bandwidth of 8 MHz. Unlike PAL-I, the digital channel (carrying a multiplexed signal) utilises the entire bandwidth available to it simultaneously, transmitting 2048 carriers (in "2k mode"). For tuning purposes, a centre frequency is used (a table is included in the appendices). (digital spy, 2009)

Decibels

Decibels (dB) are commonly used to describe gain or loss in circuits. The number of decibels is found from:

Gain in dB = 10 · log10(gain factor), or conversely, gain factor = 10^(dB/10)

(Antenna basics, 2008) In some situations this is more complicated than using gain or loss factors, but in many situations decibels are simpler.
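The conversion in both directions can be sketched as follows (the helper names are mine, not from the source):

```python
import math

def to_db(gain_factor: float) -> float:
    """Power gain factor -> decibels: 10 * log10(factor)."""
    return 10 * math.log10(gain_factor)

def to_factor(gain_db: float) -> float:
    """Decibels -> power gain factor: 10 ** (dB / 10)."""
    return 10 ** (gain_db / 10)

print(to_db(100))       # 20.0  (a 100x power gain is 20 dB)
print(to_db(0.5))       # about -3.01 (halving the power loses ~3 dB)
print(to_factor(-1.0))  # about 0.794 (a 1 dB loss keeps ~79% of the power)
```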

For example, suppose 10 feet of cable loses 1 dB of signal. To figure the loss in a longer cable, just add 1 dB for every 10 feet. In general, decibels let you add and subtract instead of multiplying and dividing.

Noise

Whether a signal is receivable is determined by the signal-to-noise ratio (S/N). For TVs there are two main sources of noise:

1. Atmospheric noise. There are many sources of this noise: a light switch creates a radio wave every time it opens or closes, and motors in some appliances produce nasty RF (radio frequency) noise.
2. Receiver noise. Most of this noise comes from the first transistor the antenna is attached to. Some receivers are quieter than others.

2.5 Transmission cable

Twin lead (ribbon cable) used to be common for TV antennas. It has its advantages, but due to its unpredictability when positioned near metal or dielectric objects, it has fallen out of favour. Coaxial cable is recommended: it is fully shielded and not affected by nearby objects. Transmission cable has a property called its characteristic impedance, which for TV coax should always be 75 ohms. Although rated in ohms, this has nothing to do with resistance. A resistor converts electric energy into heat; the "75 ohms" of a coaxial cable does not cause heat. Where it comes from is mathematically complicated and beyond our scope here.

But coax also has ordinary resistance (mostly in the centre conductor) and thus loses some of the signal, converting it into heat. The amount of this dissipation (loss) depends on the frequency as well as the cable length.

Type   | Centre conductor | Cable diameter
RG-59  | 20–23 gauge      | 0.242 inches
RG-6   | 18 gauge         | 0.265 inches
RG-11  | 14 gauge         | 0.405 inches

Figure 12 Cable loss in dB (Antenna basics, 2008)

The above chart is only approximate. There are many cable manufacturers for each type and there is no enforcement of standards. If the mast-mounted amplifier gain exceeds the cable loss, it shouldn't matter which cable you use.
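The "just add dB per length" rule from the decibels section makes this check easy to sketch (the 1 dB per 10 feet figure is the example value from the text, not a datasheet number):

```python
# Cable loss accumulates linearly in dB with length.
LOSS_DB_PER_10FT = 1.0   # example figure from the text, not a datasheet value

def cable_loss_db(length_ft: float,
                  loss_per_10ft: float = LOSS_DB_PER_10FT) -> float:
    """Total downlead loss in dB for a run of the given length."""
    return (length_ft / 10.0) * loss_per_10ft

def net_gain_db(amp_gain_db: float, length_ft: float) -> float:
    """Mast-mounted amplifier gain minus the downlead's cable loss."""
    return amp_gain_db - cable_loss_db(length_ft)

print(cable_loss_db(150))      # 15.0 dB over a 150 ft run
print(net_gain_db(20.0, 150))  # 5.0 dB net: the amplifier covers the loss
print(net_gain_db(20.0, 250))  # -5.0 dB net: the run is too long for this amp
```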

But there are two problems with this:

* Some cable has incomplete shielding. This is most common for RG-59, another reason to avoid it.
* When the cable run is longer than 200 feet, the low-numbered channels can become too strong relative to the high-numbered channels. In this case, RG-11 or an ultra-low-loss RG-6 is recommended (these alternatives are expensive). Alternatively, frequency-compensated amplifiers will work.

2.6 Signal Amplifiers

There are two types of signal amplifiers:

Preamplifiers (mast-mounted amplifiers) – These should be mounted as close to the antenna as possible. Usually the amplifier comes in two parts:

1. The amplifier.

This is an outdoor unit that is normally bolted to the antenna mast. It must have a very low noise figure, and enough gain to overcome the cable loss and the receiver's noise figure.

2. The power module (power injector). This is an indoor unit that commonly lies on the floor behind the TV. It is inserted into the antenna cable between the amplifier and the TV. This module injects power, usually DC, into the coaxial cable where the amplifier can use it. The power injector is the amplifier's power supply.

Distribution amplifiers – These are simple signal boosters. They are often necessary when an antenna drives multiple TVs or when the antenna cable is longer than 150 feet.

Distribution amplifiers don't need a low noise figure, but they must be able to handle large signals without overloading. Commonly, distribution amplifiers have multiple outputs (unused outputs usually do not need to be terminated). Never feed an amplifier output directly into another amplifier: there should always be a long cable between the preamplifier and the distribution amplifier, because placing the two amplifiers close together can cause overload and/or oscillation. A mast-mounted amplifier's most important characteristic is its noise level, usually specified by the noise figure. But many manufacturers don't take this number seriously; if it is given at all, it is often wrong. Since makers don't report this figure consistently, comparison-shopping is not really possible.
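The claim that the first (mast-mounted) amplifier dominates the system's noise can be checked with the standard Friis cascade formula. The formula is not given in the text, but it is the usual way noise figures of cascaded stages combine; the stage values below are illustrative:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10.0)

def lin_to_db(lin):
    return 10 * math.log10(lin)

def cascade_nf_db(stages):
    """Friis formula: F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ...
    stages = [(nf_db, gain_db), ...] in signal order."""
    total_f = 0.0
    gain_product = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        if i == 0:
            total_f = f
        else:
            total_f += (f - 1.0) / gain_product
        gain_product *= db_to_lin(gain_db)
    return lin_to_db(total_f)

# A 2 dB preamp with 20 dB gain ahead of a noisier 6 dB second stage:
# the cascade NF stays close to the preamp's own 2 dB.
print(round(cascade_nf_db([(2.0, 20.0), (6.0, 10.0)]), 2))  # ~2.08 dB
```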

The author is inclined to rate amplifiers for their noise figures as follows:

0.5 dB – superb (anything better runs into thermal atmospheric noise)
2.0 dB – excellent
4.0 dB – fair
6.0 dB – poor
10 dB – awful

2.7 Transmission delay (coaxial cable)

Transmission lines are described by their two most important characteristics: the characteristic impedance Zo and the delay. For instance, a "short" (say 0.01 wavelength) piece of coaxial cable such as RG-58U was taken and its capacitance measured with the other end open: a one-foot length yields roughly 31.2 pF. The inductance was also measured with the other end shorted, yielding 76.8 nH. The impedance may now be computed as:

Zo = √(L/C) = √(76.8×10⁻⁹ / 31.2×10⁻¹²) ≈ 49.6 ohms

Here L and C are measured for the same length. The delay may also be computed:

Delay = √(L×C) = √(76.8×10⁻⁹ × 31.2×10⁻¹²) ≈ 1.55 ns

For an ideal line, the delay increases linearly with its length, while its impedance remains constant. From this the velocity in feet per second can be computed:

V = length/delay = 1/(1.55×10⁻⁹) ≈ 6.46×10⁸ feet per second, or about 1.97×10⁸ metres per second.

This is less than the speed of light. The ratio of the above speed to the speed of light gives the velocity factor Vf:

Vf = 1.97×10⁸ / 2.998×10⁸ ≈ 0.66, or about 66% of the speed of light.

As mentioned earlier, the delay increases linearly with the line length. For a given length, the phase difference between the input and output will increase with the frequency:

φ = 2π × f × delay

Here the phase φ is in radians and the frequency f is in hertz. Converting the phase from radians to degrees requires multiplying by 360/2π. If the frequency is 900 MHz, the phase delay per foot is:

φ(deg) = f × 360 × delay = 900×10⁶ × 360 × 1.55×10⁻⁹ ≈ 502.2 degrees

The length that gives 90 degrees of phase shift is also known as a quarter wavelength.

Figure 13: An ideal transmission line gives a linear change of phase versus frequency.
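The whole chain of calculations above can be reproduced with a short script, using the text's measured per-foot values for RG-58U:

```python
import math

# Measured per-foot values from the text for RG-58U coax:
L = 76.8e-9   # inductance, henries per foot (far end shorted)
C = 31.2e-12  # capacitance, farads per foot (far end open)

Zo = math.sqrt(L / C)            # characteristic impedance, ~49.6 ohms
delay = math.sqrt(L * C)         # delay per foot, ~1.55 ns
v = 1.0 / delay                  # propagation speed, feet per second
vf = (v * 0.3048) / 2.998e8      # velocity factor vs the speed of light

f = 900e6                        # 900 MHz example from the text
phase_deg = f * 360.0 * delay    # phase shift per foot of cable

print(round(Zo, 1), delay, round(vf, 3), round(phase_deg, 1))
```

With the exact (unrounded) delay the velocity factor comes out at about 0.66 and the 900 MHz phase shift at about 502 degrees per foot, matching the text.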

The distributed inductance and capacitance are the basic transmission line parameters. From these, one can calculate the line impedance, the delay in terms of time and phase, the speed of propagation, and the velocity factor. The inductance has an additional component at the lower frequencies which slows the signal somewhat; this occurs around 100 kHz for small coax and lower for larger cables. For frequencies above 1 MHz, the dielectric constant of the cable is probably responsible for the decrease in the delay. Measuring the delay of cables can reveal some "hidden" properties that could make them unsuitable for some applications, such as carrying wideband data. (Audet, 2001)

3.0 Possible solution

The main component of a repeater is the amplifier, and many types of amplifier can be used for this job. RF amplifiers are electronic devices that accept a varying input signal and produce an output signal that varies in the same way as the input, but with larger amplitude. RF amplifiers generate a completely new output signal based on the input, which may be voltage, current, or another type of signal. Usually, the input and output signals are of the same type; however, separate circuits are used. The input circuit applies a varying resistance to an output circuit driven by the power supply, which smooths the current to generate an even, uninterrupted signal.

Depending on the load of the output circuit, one or more RF pre-amplifiers may boost the signal and send the stronger output to an RF power amplifier (PA). Other types of RF amplifiers include low noise, pulse, bi-directional, multi-carrier, buffer, and limiting amplifiers. Detector log video amplifiers (DLVAs) are used to amplify or measure signals with a wide dynamic range and wide bandwidth. Successive detection log video amplifiers (SDLVAs) are log amplifiers that can operate over a wider dynamic range than DLVAs, while extended range detector log video amplifiers (ERDLVAs) are DLVAs that can operate over a wider frequency range. (Global Spec, 2008)

Applications:
* Military / Defence
* Mobile / Wireless Systems
* Plasma / Electron Laser
* RF Induction Heating
* Radar Systems

Amplifier types:
* Low Noise Amplifier
* Power Amplifier
* Bi-directional Amplifier
* Multi-carrier Amplifier
* Multiplier
(RF amplifier, 2008)

3.1 RF amplifier

Selecting RF amplifiers requires an analysis of several performance specifications. Operating frequency is the frequency range for which RF amplifiers meet all guaranteed specifications. Design gain, the ratio of the output to the input power, is normally expressed in decibels (dB):

Gdb = 10 × log10(Po/Pi)

Output power is the signal power at the output of the amplifier under specified conditions such as temperature, load, voltage standing wave ratio (VSWR), and supply voltage.
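The design-gain formula can be checked with a one-line computation:

```python
import math

def gain_db(p_out, p_in):
    """Design gain in dB: Gdb = 10 * log10(Po / Pi)."""
    return 10.0 * math.log10(p_out / p_in)

# 100 mW out for 1 mW in is a power ratio of 100, i.e. 20 dB of gain.
print(gain_db(100e-3, 1e-3))  # 20.0
```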

Gain flatness indicates the degree of gain variation over the operating frequency range. Secondary performance specifications to consider include noise figure (NF), input VSWR, output VSWR, and monolithic microwave integrated circuit (MMIC) technology. The noise figure, a measure of the amount of noise added to the signal during normal operation, is the ratio of the signal-to-noise ratio at the input of the component to the signal-to-noise ratio measured at the output. The NF value sets the lower limit of the dynamic range of the amplifier. Input VSWR and output VSWR are unit-less ratios ranging from 1 to infinity that express the amount of reflected energy. (Global Spec, 2008)

There are several physical and electrical specifications to consider when selecting RF amplifiers. Physical specifications include package type and connector type. Package types include surface mount technology (SMT), flat pack, and through-hole technology (THT). RF amplifiers may also be connectorized or use waveguide assemblies. Connector types include BNC, MCX, Mini UHF, MMCX, SMA, SMB, SMP, TNC, Type F, Type N, UHF, 1.6/5.6, and 7/16. Important electrical characteristics include nominal operating voltage and nominal impedance. Operating temperature is an important environmental parameter to consider. (Global Spec, 2008)

3.1.1 The Transistor Amplifier

The preceding section explained the internal workings of the transistor and introduced terms such as emitter, base, and collector. Here the overall operation of the transistor amplifier is discussed. To understand it, one only has to consider the current in and out of the transistor and through the various components in the circuit. Therefore, from this point on, only the schematic symbol for the transistor will be used in the illustrations, and rather than thinking about majority and minority carriers, we will think only of emitter, base, and collector currents. Before going into the basic transistor amplifier, there are two terms to be familiar with: AMPLIFICATION and AMPLIFIER.

Amplification is the process of increasing the strength of a SIGNAL. A signal is just a general term used to refer to any particular current, voltage, or power in a circuit. An amplifier is the device that provides amplification (the increase in current, voltage, or power of a signal) without appreciably altering the original signal. Transistors are frequently used as amplifiers. Some transistor circuits are CURRENT amplifiers, with a small load resistance; other circuits are designed for VOLTAGE amplification and have a high load resistance; others amplify POWER. By inserting one or more resistors in a circuit, different methods of biasing may be achieved and the emitter-base battery eliminated.

In addition to eliminating the battery, some of these biasing methods compensate for slight variations in transistor characteristics and changes in transistor conduction resulting from temperature irregularities. Notice in Figure 14 that the emitter-base battery has been eliminated and the bias resistor RB has been inserted between the collector and the base. Resistor RB provides the necessary forward bias for the emitter-base junction. Current flows in the emitter-base bias circuit from ground to the emitter, out the base lead, and through RB to VCC. Since the current in the base circuit is very small (a few hundred microamperes) and the forward resistance of the transistor is low, only a few tenths of a volt of positive bias will be felt on the base of the transistor.

However, this is enough voltage on the base, along with ground on the emitter and the large positive voltage on the collector, to properly bias the transistor. (Integrated Publishing, 2002)

Figure 14: The basic transistor amplifier

With Q1 properly biased, direct current flows continuously, with or without an input signal, throughout the entire circuit. The direct current flowing through the circuit develops more than just base bias; it also develops the collector voltage (VC) as it flows through Q1 and RL. Notice the collector voltage on the output graph. Since it is present in the circuit without an input signal, the output signal starts at the VC level and either increases or decreases.

These dc voltages and currents that exist in the circuit before the application of a signal are known as quiescent voltages and currents (the quiescent state of the circuit). Resistor RL, the collector load resistor, is placed in the circuit to keep the full effect of the collector supply voltage off the collector. This permits the collector voltage (VC) to change with an input signal, which in turn allows the transistor to amplify voltage. Without RL in the circuit, the voltage on the collector would always be equal to VCC. The coupling capacitor (CC) is another new addition to the transistor circuit. It is used to pass the ac input signal and block the dc voltage from the preceding circuit. This prevents dc in the circuitry on the left of the coupling capacitor from affecting the bias on Q1.

The coupling capacitor also blocks the bias of Q1 from reaching the input signal source. The input to the amplifier is a sine wave that varies a few millivolts above and below zero. It is introduced into the circuit by the coupling capacitor and is applied between the base and emitter. As the input signal goes positive, the voltage across the emitter-base junction becomes more positive. This in effect increases forward bias, which causes base current to increase at the same rate as that of the input sine wave. Emitter and collector currents also increase, but much more than the base current. With an increase in collector current, more voltage is developed across RL.

Since the voltage across RL and the voltage across Q1 (collector to emitter) must add up to VCC, an increase in voltage across RL results in an equal decrease in voltage across Q1. Therefore, the output voltage from the amplifier, taken at the collector of Q1 with respect to the emitter, is a negative alternation of voltage that is larger than the input, but has the same sine wave characteristics. During the negative alternation of the input, the input signal opposes the forward bias. This action decreases base current, which results in a decrease in both emitter and collector currents. The decrease in current through RL decreases its voltage drop and causes the voltage across the transistor to rise along with the output voltage.
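The constraint that the drops across RL and Q1 must always sum to VCC can be sketched numerically. The component values below are illustrative, not taken from the text's figure:

```python
# Illustrative values -- not from the text's circuit.
VCC = 9.0      # collector supply, volts
RL = 1000.0    # collector load resistor, ohms

def collector_voltage(ic):
    """VC = VCC - Ic*RL: since V(RL) + V(Q1) = VCC, more collector
    current means less voltage left across the transistor."""
    return VCC - ic * RL

# A rising input raises Ic, which lowers VC -- the signal inversion
# described above:
for ic_ma in (2.0, 3.0, 4.0):
    print(ic_ma, "mA ->", collector_voltage(ic_ma * 1e-3), "V")
```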

Therefore, the output for the negative alternation of the input is a positive alternation of voltage that is larger than the input but has the same sine wave characteristics. By examining both input and output signals for one complete alternation of the input, we can see that the output of the amplifier is an exact reproduction of the input except for the reversal in polarity and the increased amplitude (a few millivolts as compared to a few volts). The PNP version of this amplifier is shown in the upper part of the figure. The primary difference between the NPN and PNP amplifier is the polarity of the source voltage. With a negative VCC, the PNP base voltage is slightly negative with respect to ground, which provides the necessary forward bias condition between the emitter and base.

When the PNP input signal goes positive, it opposes the forward bias of the transistor. This action cancels some of the negative voltage across the emitter-base junction, which reduces the current through the transistor. Therefore, the voltage across the load resistor decreases, and the voltage across the transistor increases. Since VCC is negative, the voltage on the collector (VC) goes in a negative direction (as shown on the output graph) toward -VCC (for example, from -5 volts to -7 volts). Thus, the output is a negative alternation of voltage that varies at the same rate as the sine wave input, but it is opposite in polarity and has a much larger amplitude.

During the negative alternation of the input signal, the transistor current increases because the input voltage aids the forward bias. Therefore, the voltage across RL increases, and consequently, the voltage across the transistor decreases or goes in a positive direction (for example, from -5 volts to -3 volts). This action results in a positive output voltage, which has the same characteristics as the input except that it has been amplified and the polarity is reversed. (Integrated Publishing, 2002)

3.1.2 Ultra High Frequency Transistor Array (HFA)

The HFA3046, HFA3096, HFA3127 and HFA3128 are ultra high frequency transistor arrays fabricated with Intersil Corporation's complementary bipolar UHF-1 process.

Each array consists of five dielectrically isolated transistors on a common monolithic substrate. The NPN transistors exhibit an fT of 8 GHz while the PNP transistors provide an fT of 5.5 GHz. Both types exhibit low noise (3.5 dB), making them ideal for high frequency amplifier and mixer applications. (HFA3127, 2003) The HFA3046 and HFA3127 are all-NPN arrays, while the HFA3128 has all PNP transistors. The HFA3096 is an NPN-PNP combination. Access is provided to each terminal of the individual transistors for maximum application flexibility. Monolithic construction of these transistor arrays provides close electrical and thermal matching of the five transistors.

Features:
* NPN transistor fT: 8 GHz
* NPN current gain (hFE): 130
* NPN Early voltage (VA): 50 V
* PNP transistor fT: 5.5 GHz
* PNP current gain (hFE): 60
* PNP Early voltage (VA): 20 V
* Noise figure (50 Ω) at 1.0 GHz: 3.5 dB
* Collector to collector leakage: <1 pA
* Complete isolation between transistors; pin compatible with industry standard 3XXX series arrays
* Pb-free plus anneal available (RoHS compliant)

Applications:
* VHF/UHF amplifiers
* VHF/UHF mixers
* IF converters
* Synchronous detectors

Specifications:
* Collector-emitter voltage V(br)ceo: 8 V
* Continuous collector current Ic (max): 11.3 mA
* DC collector current: 37 mA
* DC current gain: 130
* Gain bandwidth fT (typ): 8 GHz
* Module configuration: five
* Mounting type: SMD
* Number of pins: 16
* Number of transistors: 5
* Package/case: SOIC
* Power dissipation Pd: 150 mW
* SVHC: No SVHC (15-Dec-2010)
* Supply voltage (min): 12 V
* Transistor case style: SOIC
* Transistor polarity: NPN
* RoHS: Yes
(Datasheet, 2005)

Figure 15: HFA3127 transistor array

As this project is to design and build a repeater incorporating transmission delay, either or both of these amplifiers can be used to convert a weak high frequency signal into a strong one.

3.1.3 Surface mount technology

Surface mount technology (SMT) adds components to a printed circuit board (PCB) by soldering component leads or terminals to the top surface of the board. SMT components have a flat surface that is soldered to a flat pad on the face of the PCB. Typically, the PCB pad is coated with a paste-like formulation of solder and flux. With careful placement, SMT components on solder paste remain in position until elevated temperatures, usually from an infrared oven, melt the paste and solder the component leads to the PCB pads.

Industry-standard pick-and-place equipment can mount SMT components quickly, accurately, and cost-effectively. SMT is a widely used alternative to mounting processes that insert pins or terminals through holes and solder the leads into place on the opposite side of the board.

3.1.4 Surface Mount Monolithic Amplifier

Figure 16: MAV-11SM amplifier

Features:
* Wideband, 0.05 to 1 GHz
* High output power, up to +17.5 dBm typ.
* Low noise, 3.6 dB typ.
* Aqueous washable

Applications:
* UHF TV
* Cellular
* Defence communication
* UHF/VHF receivers/transmitters
(Monolithic Amplifier, 2002)

General description: The MAV-11SM+ is a wideband amplifier offering a high dynamic range, with repeatable performance from lot to lot.

It is enclosed in a plastic molded package. The MAV-11SM+ uses a Darlington configuration and is fabricated using silicon technology. Expected MTBF is 500 years at 85°C case temperature.

Function | Pin number | Description
RF in | 1 | RF input pin. This pin requires the use of an external DC blocking capacitor chosen for the frequency of operation.
RF out and DC in | 3 | RF output and bias pin. DC voltage is present on this pin; therefore a DC blocking capacitor is necessary for proper operation. An RF choke is needed to feed DC bias without loss of RF signal due to the bias connection, as shown in the "Recommended Application Circuit".
GND | 2, 4 | Connections to ground.

Use via holes as shown in the "Suggested Layout for PCB Design" to reduce ground path inductance for best performance. (Monolithic Amplifier, 2002)

Figure 17: Suggested PCB layout with MAV-11SM

3.1.5 Loft box: 8-way home distribution unit

* Fully compatible with the Sky Digital tvLINK system.
* Combines satellite, TV, FM, DAB, and CCTV onto one down cable to the living room.
* Typically 8 dB gain to each output.
* TV, FM, digital channels, VCR, DAB, and CCTV available at each output.
* Built-in switch mode power supply with LED power-on indicator.
* The Global LoftBox is an integrated home distribution system.

Figure 18: Loft box home distributor

Normally located in the loft, it combines TV, FM, DAB, CCTV and satellite onto one down cable, feeding a Global triplexing wall plate or MSWP in the living room. The Loft Box takes a return feed from the living room, typically from the UHF2 output of the Sky digibox or from a "Y" splitter. FM and DAB are diplexed onto the return feed and then distributed to additional points within the house via Global TV/FM diplex wall plates. Each outlet point is able to receive normal terrestrial TV, FM, DAB, CCTV and the selected satellite channel. The LoftBox fully supports the infrared control signals from the tvLINK remote eye back to the Sky set-top box. However, because of a connection problem, it could not be used in this project.

3.1.6 Maxview signal booster

This booster amplifies digital and analogue TV and FM/DAB radio signals in weaker signal areas. It was bought to compare its signal strength against the amplifier built in this project. The Maxview signal booster is a high gain TV signal booster. Key features are:

* Forward gain, typically per outlet: 18 dB
* Switched gain: 6 dB
* Noise figure, typical: 4.5 dB
* Forward frequency coverage: 40-860 MHz
* Reverse frequency coverage: 5-65 MHz

Figure 19: Maxview signal booster

3.1.7 Antenna

A normal TV aerial can be used to receive and transmit the signal. For this project a Truvision indoor UHF TV aerial was selected for receiving and transmitting the TV signal.

Figure 20: Antenna used for this project

The Truvision UHF TV aerial has a striking contemporary free-standing design which simply flips up into position and is ready to use straight out of the box. Easy fingertip adjustment allows horizontal or vertical alignment for optimum signal reception.

4.0 Design

The receiving antenna receives the DVB-T signal and feeds it to the repeater. The repeater amplifies the received signal and retransmits it through a transmission line (coaxial cable). Coaxial cable has been used to incorporate a transmission delay to minimize inter-symbol interference.

Figure 21: Interference between relay signal and main transmitted signal

Although strong signal reception by the TV antenna has been discussed, there will be some interference with the signal transmitted from the main transmitter.

For shortage of time, this project could not address that problem. So the project is now to design the repeater, build the circuit, and test it in a laboratory environment and an outdoor environment.

4.1 Circuit design

The ISIS schematic drawing software is an extremely versatile application for circuit design; however, it naturally takes some time to learn all of its capabilities. For this project the HFA3127 transistor array was selected because of its low noise, high gain capability. However, there was no transistor family named HFA3127 in the ISIS software, so a new transistor family was created in order to draw the circuit.

Figure 22: ISIS schematic of circuit design

4.2 PCB design

The circuit was designed in ISIS, but its PCB layout was not acceptable. It was suggested to design the PCB layout in ARES according to the datasheet, so the PCB layout was then made in ARES.

Figure 23: PCB design according to the datasheet in ARES

The PCB design contained some contact errors that could not be removed; whenever an attempt was made to remove them, the software reported "not connected". Because the gap between the legs of the ICs and other components was so small, these errors kept appearing. The back side of the PCB was a ground plane, because this circuit handles RF signals. In the PCB design, microstrip lines have been used for the ultra high frequency and very high frequency paths.

Pins 14 and 15 are connected to the RF input socket, and pins 1 and 2 are connected to the RF output socket through microstrip. The lower microstrip line is for the input voltage.

Figure 24: 3D view of the PCB

5.0 Implementation

Required components:
* PCB board
* Resistors
* Capacitors
* Amplifier
* Two antennas
* TV card
* Transmission line (coaxial cable)
* SMA connectors

5.1 Implementation with HFA3127

As the circuit used surface mount components, it was really difficult to solder by hand; the components were 0603 package, which is very small. As mentioned earlier, there was no package for the HFA3127 in ISIS, so a package had to be made for this device. The dimensions of the IC's legs were wrong, so the IC did not fit the PCB board.

One side of the IC's legs was fitted, and the other side's legs were connected with small wires.

Figure 25: Circuit with HFA3127 amplifier

5.2 Implementation with MAV-11SM amplifier

This amplifier circuit has been designed around the MAV-11SM. The picture shows two amplifiers used in this circuit; in fact, two amplifier circuits have been joined together to get more gain. This is also a surface mount circuit board. Each amplifier's gain is 10 dB.

Figure 26: MAV-11SM amplifier circuit board

As an amplifier with more than 30 dB of gain was expected, the HFA3127 was also connected with these two.

6.0 Test results

6.1 Laboratory test results

Amplifier circuit with HFA3127:

Figure 27: HFA3127 gain with soldering error
Figure 28: HFA3127 amplifier gain

Amplifier circuit with MAV-11SM:

Figure 29: One MAV-11SM amplifier gain
Figure 30: Two MAV-11SM amplifier circuits give more gain
Figure 31: Three amplifiers together gave the maximum gain

6.2 Field test results

Figure 32: Low quality picture with normal antenna
Figure 33: Picture with repeater-connected antenna
Figure 34: Rebroadcasting connection

7.0 Result Discussion

Laboratory test:

Results with transistor array: Figure 27 shows -6.2573 dB at -20 dBm input power, instead of the expected 15-20 dB of gain. From this result it was understood that there was a problem with the soldering.

After examining the circuit soldering, it was found that at pin 4 the voltage was 0 V instead of 0.7-0.8 V. It was soldered again and re-checked; this time it was 0.8 V, but the result was still not right. Then, on the supervisor's suggestion, a different transistor was used to amplify the signal. This time the gain was high, 25 dB, but it rolled off. Figure 28 shows the initial 25 dB gain rolling off; the average gain was nearly 10 dB, whereas according to the datasheet this gain should be 15-20 dB. With very good soldering, it would probably be possible to get a good result with this amplifier. If it were possible to use all five transistors at a time, a gain of up to 120 dB could in principle be obtained, which would be remarkable.

Results with MAV-11SM amplifier: Figure 29 shows the MAV-11SM amplifier circuit working as expected. With one amplifier the gain should be around 10 dB; it measured 8.94 dB. Figure 30 shows that when two MAV-11SMs were connected together, the gain increased to 15.93 dB at -20 dBm input power. Then three amplifiers were connected together and the gain jumped: Figure 31 shows a gain of 31.276 dB, which is what the project was waiting for.

Field test: For the outdoor test, a TV card/TV, three antennas, and coaxial cable were needed along with the repeater. First, the TV card was installed in the computer and connected to an antenna without the amplifier.

After scanning the channels, only 6 free channels were found, and most of their picture quality was very low (Figure 32). Then the repeater was connected between the antenna and the TV card; this time the scan found 67 free channels and the picture quality was very high (Figure 33). When the power supply connected to the repeater was held at 0 V, the picture became worse, and as the voltage was raised, the picture quality improved. Up to this point it can be said that the repeater was working perfectly; but the purpose for which the repeater was built is to rebroadcast the signal, so the final stage of testing followed.

For rebroadcasting, the input side of the repeater was connected to a receiving antenna and the output side was connected to a transmitting antenna via a long transmission line (Figure 34), which would incorporate the delay needed to minimize inter-symbol interference. The setup was left connected for 24 hours with a TV running on a normal antenna, but no improvement from rebroadcasting was observed. At the end of this experiment it was concluded that the transmit power was probably too low to retransmit the signal, since the power supply was only 5 V / 1 A, or that the antenna could not radiate the signal around the room, or radiated so weak a signal that it was not good enough to give good quality video. Two possible reasons for failure:

* Too low power for retransmission
* The transmit antenna

Time was a big factor in testing. According to the Gantt chart, only 20 days were available for testing because of other assignments, and delays carried over from previous tasks as well; design and implementation took a long time. When testing started, there was no extra time left to resolve the problems identified.

8.0 Conclusion

The aim of this project was to design a digital repeater incorporating a transmission delay (coaxial cable) to minimize inter-symbol interference. The main part of the project was to design an amplifier circuit, build the circuit and test it. If everything went right, it could then be tested by rebroadcasting a DVB-T signal and checking it on a TV.

Though this project has not reached its final target, it still presents a complete treatment of signal amplification. At the very beginning, the aim was to design a digital repeater that would minimize inter-symbol interference by incorporating a transmission delay. At first, the HFA transistor array was selected for the amplifier design, and the circuit was built with surface mount components. Because of a soldering problem, the gain was -5 dB. After that, on the supervisor's suggestion, only one of the five transistors of the HFA3127 was used, in order to get good performance. Though that design initially gave 25 dB of gain, it rolled off; the average gain of the HFA3127 was still 10 dB.

Since high frequency amplification and transmission need a very high gain amplifier (>30 dB), and this transistor amplifier's gain was not enough for rebroadcasting the signal, another amplifier, the MAV-11SM, was selected on the supervisor's suggestion. One MAV-11SM amplifier gives around 10 dB of gain, as shown in the testing section. Finally, two MAV-11SM amplifiers and one HFA3127 were used together to get more than 30 dB of gain, verified on a scalar network analyzer. For the field test, a TV card and three TV aerials were used. The amplifier circuit was connected to one aerial and worked very well when directly connected to the TV card, so it can be said that the repeater was amplifying the signal.

But when another aerial on a long transmission line was connected to the amplifier and an attempt was made to rebroadcast the signal with a 5 V / 1 A power supply, the TV picture quality did not improve as expected. Digital repetition is an innovative concept which helps to increase DVB-T coverage while maintaining the highest quality and providing greater flexibility. In spite of the failure, this project was a good platform to learn about signals and signalling.

Future work: As this project was unsuccessful at that final point, future work will try to solve the rebroadcasting problem. The transistor array would be a great option for amplifying the signal if all five transistors were used: from the HFA3127, it should be possible to get up to 120 dB of gain if it is soldered perfectly.

Works Cited

Antenna basics. (2008, October 12). Retrieved May 5, 2011, from http://www.hdtvprimer.com/ANTENNAS/basics.html
Audet, J. (2001). Coaxial Cable Delay.
Charan, L. (2002). Inter Symbol Interference (ISI) and Raised Cosine Filters. Retrieved December 5, 2010, from http://www.complextoreal.com/chapters/isi.pdf
Datasheet. (2005, December 21). Retrieved February 20, 2011, from http://www.intersil.com/data/fn/fn3076.pdf
Digital Spy. (2009). Retrieved April 10, 2011, from http://www.digitalspy.co.uk/digitaltv/information/a12613/uhf-channel-and-frequency-guide.html
Global Spec. (2008). Retrieved April 10, 2011, from http://www.globalspec.com/learnmore/telecommunications_networking/rf_microwave_wireless_components/rf_amplifiers
HFA3127. (2003). Retrieved January 18, 2011, from http://www.intersil.com/products/deviceinfo.asp?pn=HFA3127
Integrated Publishing. (n.d.). Retrieved April 4, 2011, from http://www.tpub.com/neets/book7/25c.htm
Monolithic Amplifier. (2002). Retrieved January 14, 2011, from http://www.minicircuits.com/pdfs/MAV-11SM+.pdf
Pool, I. (2002). Digital Video Broadcasting. Retrieved April 13, 2011, from http://www.radio-electronics.com/info/broadcast/digital-video-broadcasting/what-is-dvb-tutorial.php
Power Amplifier Design. (1998). RF transmitting transistor and power amplifier fundamentals.
RF amplifier. (2008).

Retrieved April 10, 2011, from http://www. globalspec. com/learnmore/telecommunications_networking/rf_microwave_wireless_components/rf_amplifiers. sub-TV. (2006, October 13). Retrieved April 20, 2011, from http://www. sub-tv. co. uk/antennatheory. asp. Trolet, C. (2002). SPOT: filling gaps in DVB-T networks with digital repeaters. Presented by Gerard Faria, Scientific Director, Harris Broadcast Europe at BroadcastAsia2002 International Conference, Available at: http://www. broadcast. harris. com. Gantt chart APPENDICES Frequency Allocation for DVB-T in UK Band IV Channel| PAL-I Vision (MHz)| PAL-I Sound (MHz)| Centre (MHz)| 21| 471. 25| 477. 25| 474| 22| 479. 25| 485. 25| 482| 3| 487. 25| 493. 25| 490| 24| 495. 25| 501. 25| 498| 25| 503. 25| 509. 25| 506| 26| 511. 25| 517. 25| 514| 27| 519. 25| 525. 25| 522| 28| 527. 25| 533. 25| 530| 29| 535. 25| 541. 25| 538| 30| 543. 25| 549. 25| 546| 31| 551. 25| 557. 25| 554| 32| 559. 25| 565. 25| 562| 33| 567. 25| 573. 25| 570| 34| 575. 25| 581. 25| 578| 35| 583. 25| 589. 25| 586| 36| 591. 25| 597. 25| 594| 37| 599. 25| 605. 25| 602| 38| 607. 25| 613. 25| 610| Band V Channel| PAL-I Vision (MHz)| PAL-I Sound (MHz)| Centre (MHz)| 39| 615. 25| 621. 25| 618| 40| 623. 25| 629. 25| 626| 41| 631. 25| 637. 25| 634| 42| 639. 25| 645. 25| 642| 43| 647. 25| 653. 25| 650| 44| 655. 25| 661. 5| 658| 45| 663. 25| 669. 25| 666| 46| 671. 25| 677. 25| 674| 47| 679. 25| 685. 25| 682| 48| 687. 25| 693. 25| 690| 49| 695. 25| 701. 25| 698| 50| 703. 25| 709. 25| 706| 51| 711. 25| 717. 25| 714| 52| 719. 25| 725. 25| 722| 53| 727. 25| 733. 25| 730| 54| 735. 25| 741. 25| 738| 55| 743. 25| 749. 25| 746| 56| 751. 25| 757. 25| 754| 57| 759. 25| 765. 25| 762| 58| 767. 25| 773. 25| 770| 59| 775. 25| 781. 25| 778| 60| 783. 25| 789. 25| 786| 61| 791. 25| 797. 25| 794| 62| 799. 25| 805. 25| 802| 63| 807. 25| 813. 25| 810| 64| 815. 25| 821. 25| 818| 65| 823. 25| 829. 25| 826| 66| 831. 25| 837. 25| 834| 67| 839. 25| 845. 25| 842| 68| 847. 25| 853. 25| 850|


Huffman Trucking: Database Design and Development

Huffman Trucking started out as a single owner with a single truck and trailer, operating in the Cleveland, Ohio area back in 1936, doing local contract hauls. Today, Huffman Trucking is a national carrier with 1,400 employees, 800 tractors, 2,100 trailers, and 260 roll-on/roll-off units, operating from three logistical hubs located in Los Angeles, California; St. Louis, Missouri; and Bayonne, New Jersey, with its central maintenance facility located in Cleveland, Ohio (Apollo Group Inc., 2005).

Through its growth over the years, Huffman Trucking has maintained its competitiveness by being an industry leader in leveraging technology to provide customer service and business efficiencies (Apollo Group Inc., 2005). To maintain this competitiveness, Huffman Trucking hired Smith Systems Consulting to develop a report of the entities and attributes that will be needed for a fleet truck maintenance database. Upon receipt of Smith's report detailing the entities and attributes needed, our IT manager submitted Service Request SR-ht-003 to design a Fleet Truck Maintenance Database.

In the following paragraphs, LTA will briefly discuss the database architecture and primary keys, which play a vital role in a relational database. The different types of mistakes made in the design phase that lead to a poor database design are also discussed: lack of careful planning, improper normalization of data, poor naming conventions, insufficient documentation, and inadequate testing. The ERD for the database will be presented, along with the choice of the program to manage the database and allow for versatility across various platforms, applications, and features.

Huffman Trucking's fleet truck maintenance records are fairly straightforward; therefore, a basic database architecture is recommended as a starting point for entering information and importing current database records into the new system. By starting simple, the database can be upgraded over time as the company and its fleet grow. The important items to consider when designing a new database include ease of use for the users and the production of query reports, as well as financial records, parts orders, maintenance records, and purchase orders. "A good model and a proper database design form the foundation of an information system. Building the data layer is often the first critical step towards implementing a new system, and getting it right requires attention to detail and a whole lot of careful planning. A database, like any computer system, is a model of a small piece of the real world. And, like any model, it's a narrow representation that disregards much of the complexity of the real thing" (Malone, 2007). A primary key is an attribute (or set of attributes) that uniquely identifies each record in a table.
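To make the query-reporting consideration concrete, the sketch below builds a minimal maintenance-records table and runs a per-vehicle cost report. It uses Python's built-in sqlite3 module, and every table, column, and data value is illustrative only, not taken from the Smith Systems report:

```python
import sqlite3

# Illustrative sketch: a minimal maintenance-records table plus a query report.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE maintenance_record (
        record_id   INTEGER PRIMARY KEY,   -- unique identifier for each record
        vehicle_id  INTEGER NOT NULL,
        work_date   TEXT    NOT NULL,
        description TEXT,
        cost        REAL
    )
""")
cur.executemany(
    "INSERT INTO maintenance_record (vehicle_id, work_date, description, cost) "
    "VALUES (?, ?, ?, ?)",
    [(101, "2008-09-01", "Oil change", 85.00),
     (101, "2008-09-20", "Brake pads", 240.00),
     (102, "2008-09-05", "Air filter", 35.00)],
)
conn.commit()

# A simple query report: total maintenance cost per vehicle.
for vehicle_id, total in cur.execute(
        "SELECT vehicle_id, SUM(cost) FROM maintenance_record "
        "GROUP BY vehicle_id ORDER BY vehicle_id"):
    print(vehicle_id, total)
```

The same GROUP BY pattern extends naturally to reports over parts orders or purchase orders as the schema grows.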

Primary keys make mapping relational data simple by uniquely identifying each entry in the database. The concept of some sort of unique value is common in database design: account numbers are used to identify part numbers, vendor numbers, and maintenance work orders. These are also known as natural keys, common attributes of an entity that happen to identify it uniquely. Generally, even if the data being modeled has a decent natural key or identifier, that information should not be used as the primary key.

Natural keys should not be used as primary keys, because the sole purpose of the primary key is to uniquely identify each row in a table. A primary key has several important characteristics. It must identify each row in the table. It should not describe the characteristics of the entity: a part number ID of "2566" is usually preferred over "Air Filter." Its value should never change, since changing a primary key value means changing the identity of an entity, which is not advised. Non-intelligent keys are preferred because they are less likely to change.

For example, part number 2566 might be an air filter for one model of truck, while part number 2560 is an air filter for another model. Having just a description of "Air Filter" would be too ambiguous and could result in lost time trying to locate the correct air filter for a specific model of truck. Such part numbers would most likely never change over time and are therefore well suited as primary keys in a parts database. Primary keys should also have the smallest number of attributes possible.
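The part-number example above can be sketched with Python's built-in sqlite3 module. The numeric part IDs carry no meaning of their own (non-intelligent keys), and the database itself enforces that a key identifies exactly one row; the part numbers and truck models are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("""
    CREATE TABLE part (
        part_id     INTEGER PRIMARY KEY,  -- non-intelligent numeric key
        description TEXT NOT NULL,
        truck_model TEXT NOT NULL
    )
""")
cur.execute("INSERT INTO part VALUES (2566, 'Air Filter', 'Model A')")
cur.execute("INSERT INTO part VALUES (2560, 'Air Filter', 'Model B')")

# The description alone is ambiguous (two air filters); the key is not.
row = cur.execute("SELECT truck_model FROM part WHERE part_id = 2566").fetchone()
print(row[0])

# A duplicate key is rejected, so an entity's identity stays unambiguous.
try:
    cur.execute("INSERT INTO part VALUES (2566, 'Fuel Filter', 'Model C')")
except sqlite3.IntegrityError:
    print("duplicate part_id rejected")
```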

It is easier to manage unique keys that are numeric.

Items to Consider During the Design Phase

Several things are easy to overlook during the database design process, including design and planning of the database, normalization of data, naming conventions, documentation, and testing. A brief run-down of these common errors is given below; it can serve as an effective guideline to follow when designing the database for Huffman Trucking's fleet maintenance.

Design and Planning of the Database

Good databases are designed with careful thought, and with proper care and attention given to the needs of the data that will be part of them. Since a carefully constructed database is at the heart of every business project, insufficient planning and detailing of the project's needs could cause the whole project to lose its direction and purpose. Additionally, by not taking the time at the beginning, any changes to the database structures that may need to be made in the future could have devastating consequences for the whole project and greatly increase the likelihood of the project timeline slipping.

If the planning phase is rushed, problems will inevitably arise, and because of the lack of proper planning and design, there is usually no time to go back and fix any issues properly. "That is when the 'hacking' starts, with the veiled promise to return and fix things later, something that happens very rarely indeed" (Davidson, 2007).

Normalization of Data

Normalization defines a set of standards for breaking down tables into their basic parts until each table represents only one thing, and its columns fully describe the one thing that the table represents.

Normalizing Huffman Trucking's data is important to ensure proper performance and to ease future development projects.

Insufficient Naming Conventions

Naming conventions are the most important line of documentation for any application. What matters most is consistency: names should be kept simple while still identifying the purpose of the data being entered.

Documentation

Not only will a well-designed database conform to certain quality standards, it will also contain definitions and examples for its tables, so that it is clear to everyone how the tables, columns, and relationships are intended to be used. The goal of proper documentation should be to provide enough information for a support programmer to find any bugs and fix them easily.

Testing

As many information technology professionals know, the database is the first thing to be blamed when a business system starts running slowly, on the notion that it has become bogged down with fragmented information, or with too much information.
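The normalization standard described above can be illustrated with a small sketch in Python's built-in sqlite3 module. Rather than one flat table that repeats vendor details on every parts order, vendor facts live in their own table and are referenced by key, so a vendor change touches one row instead of many. The vendor and order data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Each table represents only one thing: vendors, and parts orders.
cur.executescript("""
    CREATE TABLE vendor (
        vendor_id   INTEGER PRIMARY KEY,
        vendor_name TEXT NOT NULL,
        city        TEXT NOT NULL
    );
    CREATE TABLE parts_order (
        order_id  INTEGER PRIMARY KEY,
        vendor_id INTEGER NOT NULL REFERENCES vendor(vendor_id),
        part_id   INTEGER NOT NULL,
        quantity  INTEGER NOT NULL
    );
""")
cur.execute("INSERT INTO vendor VALUES (1, 'Acme Truck Parts', 'Cleveland')")
cur.executemany("INSERT INTO parts_order VALUES (?, ?, ?, ?)",
                [(10, 1, 2566, 4), (11, 1, 2560, 2)])

# Updating the vendor's city touches one row; every order sees the change.
cur.execute("UPDATE vendor SET city = 'Columbus' WHERE vendor_id = 1")
rows = cur.execute("""
    SELECT o.order_id, v.vendor_name, v.city
    FROM parts_order o JOIN vendor v ON v.vendor_id = o.vendor_id
    ORDER BY o.order_id
""").fetchall()
print(rows)
```

In a denormalized design, the same update would have to be repeated on every order row, which is exactly the kind of update anomaly normalization prevents.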

Deep knowledge of the system is the best way to dispel this notion. Unfortunately, testing is usually one of the first things to go in a project when time starts to run out. What is important in this whole process is that deep system testing is done to make sure that the design is implemented correctly. The first real test for any database comes when it goes into production and users attempt to do real work. If the system is not fast enough, or contains bugs when it goes live, then more work will have to be done on a live system, which could cause a loss of revenue for the company.

By insisting on strict testing as an important aspect of database development, perhaps the day will come when the database is not the first thing to be blamed when the system slows down. To establish a functional database that Huffman can use now and in the future to effectively manage its data, it is recommended that Huffman Trucking use MySQL. There are many great things about MySQL, including the fact that it is very popular among web applications and runs on a multitude of platforms.

Some of these platforms include FreeBSD, BSDi, AIX, HP-UX, Linux, Novell NetWare, OS/2 Warp, Solaris, i5/OS, SunOS, Windows 95, Windows 98, Windows ME, Windows 2000, Windows XP, and Windows Vista. MySQL is also popular among open source and bug-tracking tools such as Bugzilla. MySQL itself is written in C and C++. Libraries for accessing MySQL databases are available in many of today's programming languages through language-specific APIs. There is also an Open Database Connectivity (ODBC) interface that allows additional programming languages, including ColdFusion and ASP, to communicate with MySQL.
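The language-specific APIs mentioned above generally follow a common connect/cursor/execute pattern; in Python that pattern is the DB-API. The sketch below demonstrates it with the stdlib sqlite3 module so it runs without a server; a MySQL driver would differ mainly in the connect() arguments and its parameter placeholder style, and the table and data here are illustrative:

```python
import sqlite3

# The DB-API shape shared by Python database drivers:
# connect, obtain a cursor, execute parameterized SQL, fetch results.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE truck (truck_id INTEGER PRIMARY KEY, hub TEXT)")
cur.execute("INSERT INTO truck VALUES (?, ?)", (1, "Bayonne"))
conn.commit()

# Parameter placeholders keep values out of the SQL string itself.
hub = cur.execute("SELECT hub FROM truck WHERE truck_id = ?", (1,)).fetchone()[0]
print(hub)
```

Because the interface is the same shape across drivers, application code written this way ports between database backends with minimal change.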

MySQL offers options that are not found in many other RDBMSs. One such feature is multiple storage engines, which allows a user to select the most effective storage engine for each table in the application. Another great feature is native storage engines: engines developed by MySQL and optimized for specific application storage domains, offering data warehousing, data archiving, high-availability clustering, and more. MySQL recently developed a new advanced transactional storage engine called Falcon.

Falcon was designed for modern corporations and web applications, which makes it a good fit for Huffman Trucking. One feature not to be overlooked is the availability of partner-developed storage engines. These engines are developed by outside companies but are thoroughly tested by MySQL to ensure workability and compatibility. Independent open-source programmers also develop storage engines for MySQL; these are used as well, but only after they pass MySQL's rigorous inspection and testing.

Customers are even developing and designing community storage engines. Commit grouping is a MySQL feature that gathers multiple transactions from a multitude of connections in order to increase the number of commits per second.

Conclusion

In conclusion, the Fleet Truck Maintenance Database will be easy to use and will provide effective tracking of finances, maintenance, and queries. The primary key(s) used in the database will have the following characteristics: be a single attribute, uniquely identify an entity, be non-intelligent, not change over time, and be numeric.

This will ease normalizing the database during the design phase and prevent update anomalies when the database is implemented. LTA discussed several mistakes that occur during the design phase so that the same mistakes can be avoided: poor design and planning, ignoring normalization, poor naming standards, and lack of documentation and testing. The DBMS of choice for Huffman Trucking is MySQL, which will effectively manage our data while allowing many different platforms to interact with the database.

MySQL is written in C and C++; however, it offers much versatility by using language-specific APIs or ODBC to support additional programming languages such as ASP or ColdFusion. MySQL has many options that other RDBMSs do not possess, such as multiple storage engines, open-source storage engine development, commit grouping, and more. The bottom line is that MySQL offers the versatility our database needs to allow for continued growth, updates, and changes in our company's needs.

References

Apollo Group Inc. (2005). Huffman Trucking. Retrieved October 1, 2008, from Huffman Trucking intranet: https://ecampus.phoenix.edu/secure/aapd/CIST/VOP/Business/Huffman/HuffmanHome002.htm
Davidson, L. (2007). Ten Common Database Design Mistakes. Simple-Talk.com. Retrieved September 29, 2008, from http://www.simple-talk.com/sql/database-administration/ten-common-database-design-mistakes/
Malone, M. (2007). Database Design: Choosing a Primary Key. I'm Mike. Retrieved October 1, 2008, from http://immike.net/blog/2007/08/14/database-design-choosing-a-primary-key/