Monday, June 29, 2009

16) SOFTWARE ENGINEERING

Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.

The term software engineering first appeared at the 1968 NATO Software Engineering Conference and was meant to provoke thought about the "software crisis" of the time. Since then, it has continued as a profession and field of study dedicated to creating software that is of higher quality, more affordable, maintainable, and quicker to build. Since the field is still relatively young compared to its sister fields of engineering, there is still much debate about what software engineering actually is, and whether it conforms to the classical definition of engineering. It has grown organically out of the limitations of viewing software as just programming. Software development is a term sometimes preferred by practitioners in the industry who view software engineering as too heavy-handed and constrictive for the malleable process of creating software. Although software engineering is a young profession, the field's prospects look bright: Money Magazine and Salary.com rated software engineering the best job in America in 2006, and software engineers constitute the largest engineering discipline in the United States by number of practitioners.

History
When the modern digital computer first appeared in 1941, the instructions to make it operate were wired into the machine. Practitioners quickly realized that this design was not flexible and came up with the "stored program architecture", or von Neumann architecture. Thus the first division between "hardware" and "software" began, with abstraction being used to deal with the complexity of computing.

Programming languages started to appear in the 1950s, and this was another major step in abstraction. Major languages such as Fortran, Algol, and Cobol were released in the late 1950s to deal with scientific, algorithmic, and business problems respectively. E. W. Dijkstra wrote his seminal paper, "Go To Statement Considered Harmful", in 1968, and David Parnas introduced the key concepts of modularity and information hiding in 1972 to help programmers deal with the ever-increasing complexity of software systems. Software systems for managing the hardware, called operating systems, were also introduced, most notably Unix in 1969. In 1967, the Simula language introduced the object-oriented programming paradigm.

These advances in software were met with more advances in computer hardware. In the mid-1970s, the microcomputer was introduced, making it economical for hobbyists to obtain a computer and write software for it. This in turn led to the now-famous personal computer, or PC, and Microsoft Windows. The Software Development Life Cycle, or SDLC, was also starting to emerge as a consensus process for the centralized construction of software in the mid-1980s. The late 1970s and early 1980s saw the introduction of several new Simula-inspired object-oriented programming languages, including C++, Smalltalk, and Objective-C.

Open-source software started to appear in the early 1990s in the form of Linux and other software, introducing the "bazaar" or decentralized style of constructing software. Then the Internet and World Wide Web arrived in the mid-1990s, changing the engineering of software once again. Distributed systems gained sway as a way to design systems, and the Java programming language was introduced as another step in abstraction, with its own virtual machine. In 2001, programmers collaborated and wrote the Agile Manifesto, which favored more lightweight processes to create cheaper and more timely software.

Profession
While some jurisdictions, such as Ontario, Canada, license software engineers, most places in the world have no laws regarding the profession of software engineer. Yet there are some guides from the IEEE Computer Society and the ACM, the two main professional organizations of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge (2004 version), or SWEBOK, defines the field and describes the knowledge that practicing software engineers should have. There is also an IEEE "Software Engineering Code of Ethics". In addition, there is a Software and Systems Engineering Vocabulary (SEVOCAB), published online by the IEEE Computer Society.

In the UK, the British Computer Society licenses software engineers, and members of the society can also become Chartered Engineers (CEng). There is, however, no legal requirement to hold these qualifications.

Employment
In 2004, the U.S. Bureau of Labor Statistics counted 760,840 software engineers holding jobs in the U.S.; in the same period, some 1.4 million practitioners were employed in the U.S. across all other engineering disciplines combined. Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and as a result most software engineers hold computer science degrees.

Most software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process; others require software engineers to do many or all of them. In large projects, people may specialize in only one role; in small projects, people may fill several or all roles at the same time. Specializations include industry roles (analysts, architects, developers, testers, technical support, managers) and academic roles (educators, researchers).

There is considerable debate over the future employment prospects for software engineers and other IT professionals. For example, an online futures market called the "ITJOBS Future of IT Jobs in America" attempts to answer whether there will be more IT jobs, including software engineers, in 2012 than there were in 2002.

Certification
Professional certification of software engineers is a contentious issue. Some see it as a tool to improve professional practice, arguing that "the only purpose of licensing software engineers is to protect the public".

The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM examined the possibility of professional certification of software engineers in the late 1990s, but eventually decided that such certification was inappropriate for the professional industrial practice of software engineering. As of 2006, the IEEE had certified over 575 software professionals. In the U.K., the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified Members (MBCS). In Canada, the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). The Software Engineering Institute offers certification on specific topics such as security, process improvement, and software architecture.

Most certification programs in the IT industry are oriented toward specific technologies, and are managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.

Education
Knowledge of programming is the main prerequisite to becoming a software engineer, but it is not sufficient. Many software engineers have degrees in computer science due to the lack of software engineering programs in higher education. However, this has started to change with the introduction of new software engineering degrees, especially in post-graduate education. A standard international curriculum for undergraduate software engineering degrees was defined by the CCSE.

Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. In 2004 the IEEE Computer Society produced the SWEBOK, which has become an ISO standard describing the body of knowledge covered by a software engineer.

The European Commission, within the Erasmus Mundus Programme, offers a European master's degree called the European Master on Software Engineering for students from Europe and elsewhere. It is a joint program (double degree) involving four universities in Europe.

CONTENTS OF SOFTWARE ENGINEERING

1- COMPUTER-AIDED ENGINEERING
Computer-aided engineering (often referred to as CAE) is the use of information technology to support engineers in tasks such as analysis, simulation, design, manufacture, planning, diagnosis, and repair.

Software tools that have been developed to support these activities are considered CAE tools. CAE tools are being used, for example, to analyze the robustness and performance of components and assemblies. The term encompasses simulation, validation, and optimization of products and manufacturing tools. In the future, CAE systems will be major providers of information to help support design teams in decision making.

In terms of information networks, each CAE system is considered a single node on a total information network, and each node may interact with other nodes on the network.

CAE systems can provide support to businesses. This is achieved through the use of reference architectures and their ability to place information views on the business process. A reference architecture is the basis from which information models, especially product and manufacturing models, are derived.

The term CAE has also been used in the past to describe the use of computer technology within engineering in a broader sense than just engineering analysis. It was in this context that the term was coined by Dr. Jason Lemon, founder of SDRC, in the late 1970s. This broader definition is, however, better known today by the terms CAx and PLM.

2- CRYPTOGRAPHIC ENGINEERING
Cryptographic engineering is the discipline of using cryptography to solve human problems. Cryptography is typically applied when trying to ensure data confidentiality, to authenticate people or devices, or to verify data integrity in risky environments.

In modern practice, cryptographic engineering is deployed in crypto systems. Like most engineering designs, these are wholly human creations. Most crypto systems are computer software, either embedded in firmware or running as ordinary executable files under an operating system. In some system designs the cryptography runs under manual direction; in others it runs automatically, often in the background. Like other software design, and unlike most other engineering, there are few external constraints.

In other engineering design, a successful design, or a successful implementation of one, is one which "works". Thus, an aircraft which actually flies without crashing due to some aerodynamic design blunder is a successful design. How successful is important, of course, and depends on how well it meets the intended performance criteria. Continuing with the aircraft example, several WWI fighter aircraft designs only barely flew, while others flew well (at least one design flew well, but its wings broke off with some regularity), though with insufficient agility (turning, climbing, ..., rates) or insufficient stability (too-frequent inescapable spins and so on) to be useful or survivable. To a considerable extent, good agility in aircraft is inversely related to stability, so fighter aircraft designs are, in this respect, inevitably compromises. The same considerations have continued in more recent times, as for instance in the necessity for computer "fly-by-wire" control in some fighters with great agility.

Cryptographic designs also have performance goals (e.g., unbreakability of the encryption), but must perform in a more complex, and more complexly hostile, environment than merely flying high (but not too low) in the Earth's atmosphere under war conditions.

Some aspects of the conditions under which crypto designs must work (to be successful and so worth bothering with) have long been recognized. Sensible cipher designers (of whom there were fewer than their users would have wanted) attempted to find ways to prevent frequency analysis from succeeding, starting, it must be assumed, almost immediately after that cryptanalytic technique was first used. The most effective way to defeat frequency analysis attacks was the polyalphabetic substitution cipher, invented by Alberti about 1465. For the next several hundred years, other designers also tried to evade frequency analysis, usually poorly, demonstrating that few had a clear understanding of the problem. Probably the best known (and likely the most widely used) of those attempts is the (misnamed) Vigenère cipher, which is a partial implementation of Alberti's idea. Edgar Allan Poe famously, and rashly, boasted that no cipher could defeat his cryptanalytic talents (essentially frequency analysis); that he was almost entirely correct about the ciphertexts submitted to him suggests a low level of cryptographic awareness some 400 (!) years after Alberti. As this history suggests, an important part of crypto engineering is understanding the techniques the Opposition may have available.
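To make the polyalphabetic idea concrete, here is a minimal sketch in Python (the function name and the "ATTACK AT DAWN"/"LEMON" example are illustrative, not taken from the text above): each plaintext letter is shifted by a different amount depending on its position in the message, so repeated letters encrypt differently, flattening the single-letter statistics that frequency analysis relies on.

```python
import string

ALPHABET = string.ascii_uppercase

def vigenere_encrypt(plaintext: str, key: str) -> str:
    """Classic Vigenere cipher: shift each letter by the next key letter, cycling."""
    shifts = [ALPHABET.index(k) for k in key.upper() if k in ALPHABET]
    out, i = [], 0
    for ch in plaintext.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shifts[i % len(shifts)]) % 26])
            i += 1  # advance the key only on letters
        else:
            out.append(ch)  # pass spaces and punctuation through unchanged
    return "".join(out)

print(vigenere_encrypt("ATTACK AT DAWN", "LEMON"))  # LXFOPV EF RNHR
```

Note that the four As in the plaintext encrypt to four different ciphertext letters (L, O, E, N), so a simple frequency count of the ciphertext no longer mirrors English letter frequencies. The residual weakness is that the key itself repeats, which is ultimately why the cipher was broken anyway.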

3- TELETRAFFIC ENGINEERING
Teletraffic engineering is the application of traffic engineering theory to telecommunications. Teletraffic engineers use their knowledge of statistics, including queueing theory, the nature of traffic, practical models, measurements, and simulations to make predictions and to plan telecommunication networks at minimum total cost. These tools and this knowledge help provide reliable service at lower cost. Because the approach differs substantially between kinds of network, the networks are handled separately here: the PSTN, broadband networks, mobile networks, and networks where heavy traffic occurs more often than anticipated.

Traffic engineering uses statistical techniques such as queuing theory to predict and engineer the behaviour of telecommunications networks such as telephone networks or the Internet.

The field was created by the work of A. K. Erlang, in whose honour the unit of telecommunications traffic intensity, the erlang, is named. The derived unit of traffic volume also incorporates his name. His Erlang distributions are still in common use in telephone traffic engineering.

The crucial observation in traffic engineering is that in large systems the law of large numbers can be used to make the aggregate properties of a system over a long period of time much more predictable than the behaviour of individual parts of the system.
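A minimal simulation sketch of this observation (the 10% activity probability and the population sizes are illustrative assumptions, not figures from the text): it estimates the relative spread (standard deviation divided by mean) of the number of simultaneously active subscribers, which shrinks roughly as one over the square root of the population size.

```python
import random

def relative_spread(n_subscribers: int, p_active: float = 0.1, trials: int = 5000) -> float:
    """Std/mean of the number of simultaneously active subscribers,
    estimated by Monte Carlo over independent on/off sources."""
    samples = [sum(random.random() < p_active for _ in range(n_subscribers))
               for _ in range(trials)]
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return var ** 0.5 / mean

for n in (10, 100, 1000):
    print(n, round(relative_spread(n), 3))
# The relative spread falls roughly as 1/sqrt(n): large aggregates are predictable.
```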

The queueing theory originally developed for circuit-switched networks is applicable to packet-switched networks.

The most notable difference between these sub-fields is that packet-switched data traffic is self-similar. This is a consequence of the calls being between computers and not people.

Teletraffic theory was first developed by Agner Erlang for circuit-switched architectures such as the PSTN. As such, the basics of teletraffic theory are best introduced by examining teletraffic concepts as they relate to PSTNs.

The measurement of traffic in PSTNs allows network operators to determine and maintain the Quality of Service (QoS) and in particular the Grade of service (GoS) that they offer their subscribers. The QoS of a network must be maintained or else operators will lose subscribers. The performance of a network depends on whether all origin-destination pairs are receiving a satisfactory service.

Networks are handled as:

loss systems, where calls that cannot be handled are given an equipment busy tone, or
queueing systems, where calls that cannot be handled immediately are queued.
Congestion is defined as the situation in which exchanges or circuit groups are inundated with calls and are unable to serve all subscribers. Special attention must be given to ensure that such high-loss situations do not arise. To help determine the probability of congestion occurring, operators can use the Erlang equations or the Engset calculation.
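As a concrete sketch of the first of these, the Erlang B formula gives the probability that a call offered to a group of circuits is blocked, given the offered load in erlangs; the traffic and circuit figures in the example are made up. The standard recurrence below avoids the large factorials of the closed-form expression.

```python
def erlang_b(offered_erlangs: float, n_circuits: int) -> float:
    """Erlang B: probability that an offered call finds all circuits busy."""
    b = 1.0  # blocking probability with zero circuits
    for n in range(1, n_circuits + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

# e.g. 90 erlangs of offered traffic on a 100-circuit route:
print(round(erlang_b(90.0, 100), 4))
```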

Exchanges in the PSTN make use of trunking concepts to help minimize the cost of equipment to the operator. Modern switches generally have full availability and do not make use of grading concepts.

Overflow systems make use of alternative routing circuit groups or paths to transfer excess traffic and thereby reduce the possibility of congestion.

Queueing systems used in telephone networks have been studied as a science in their own right; see queueing theory. Subscribers are queued until they can be served; if they are made to wait too long, they may lose patience and abandon the queue, in which case no service is provided.

A very important component of PSTNs is the SS7 network used to route signalling traffic. As a supporting network, it carries all the signalling messages necessary to set up, tear down, or provide extra services for calls. The signalling enables the PSTN to control the manner in which traffic is routed from one location to another.

Transmission and switching of calls is performed using the principle of Time-Division Multiplexing (TDM). TDM allows multiple calls to be transmitted along the same physical path, reducing the cost of infrastructure.
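The slot-interleaving idea behind TDM can be sketched in a few lines (a toy illustration only, not a model of any real transmission format): each call gets one time slot per frame on the shared path.

```python
def tdm_multiplex(calls):
    """Round-robin interleave samples from several calls onto one shared path:
    one time slot per call in each frame."""
    line = []
    for frame in zip(*calls):  # one frame = one slot from each call
        line.extend(frame)
    return line

calls = [["A0", "A1"], ["B0", "B1"], ["C0", "C1"]]  # samples from 3 calls
print(tdm_multiplex(calls))  # ['A0', 'B0', 'C0', 'A1', 'B1', 'C1']
```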

A good example of the use of teletraffic theory in practice is in the design and management of a call center. Call centers use teletraffic theory to increase the efficiency of their services and overall profitability by calculating how many operators are really needed at each time of day.
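One common way to do such a calculation (sketched here under the usual simplifying assumption of Poisson call arrivals; the 20-erlang load and 20% waiting target are illustrative) uses the Erlang C formula, which gives the probability that a caller must wait for an operator:

```python
def erlang_c(offered_erlangs: float, n_agents: int) -> float:
    """Erlang C: probability that an arriving call must wait for an operator."""
    b = 1.0  # start from the Erlang B recurrence
    for n in range(1, n_agents + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    if n_agents <= offered_erlangs:
        return 1.0  # overloaded: the queue grows without bound
    return n_agents * b / (n_agents - offered_erlangs * (1 - b))

def operators_needed(offered_erlangs: float, max_wait_prob: float) -> int:
    """Smallest staffing level keeping the probability of waiting below target."""
    n = int(offered_erlangs) + 1  # staffing must exceed the load for stability
    while erlang_c(offered_erlangs, n) > max_wait_prob:
        n += 1
    return n

# e.g. 20 erlangs of call traffic, at most 20% of callers queued:
print(operators_needed(20.0, 0.20))
```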

Teletraffic engineering in broadband networks
Teletraffic engineering is a well-understood discipline in the traditional voice network, where traffic patterns are established, growth rates can be predicted, and vast amounts of detailed historical data are available for analysis. In modern broadband networks, however, the teletraffic engineering methodologies used for voice networks are often inappropriate, and different methods are required.

4- WEB ENGINEERING
The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behavior and place some unique demands on their usability, performance, security and ability to grow and evolve.

However, the vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality, and reliability. While Web development can benefit from established practices in other related disciplines, it has certain distinguishing characteristics that demand special consideration.

In recent years, there have been some developments towards addressing these problems and requirements. As an emerging discipline, Web engineering actively promotes systematic, disciplined, and quantifiable approaches to the successful development of high-quality, ubiquitously usable Web-based systems and applications.

In particular, Web engineering focuses on the methodologies, techniques and tools that are the foundation of Web application development and which support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development.

Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interface, information engineering, information indexing and retrieval, testing, modelling and simulation, project management, and graphic design and presentation.

Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web engineering uses software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.


Web engineering as a discipline
Proponents of Web engineering supported its establishment as a discipline at an early stage of the Web. The First Workshop on Web Engineering was held in conjunction with the World Wide Web Conference in Brisbane, Australia, in 1998. San Murugesan, Yogesh Deshpande, Steve Hansen and Athula Ginige, of the University of Western Sydney, Australia, formally promoted Web engineering as a new discipline in the first ICSE workshop on Web Engineering in 1999 [3]. Since then they have published a series of papers in journals, conferences, and magazines to promote their view, and it has gained wide support. The major arguments for Web engineering as a new discipline are:

WIS (Web Information Systems) and the WIS development process are different and unique.[6]
Web engineering is multi-disciplinary; no single discipline (such as software engineering) can provide a complete theoretical basis, body of knowledge, and practices to guide WIS development.
Web applications raise distinct issues of evolution and lifecycle management when compared to more 'traditional' applications.
Web-based information systems and applications are pervasive and non-trivial. The Web as a platform will continue to grow, and it is worth treating its engineering specifically.
However, it has been controversial, especially for people in other traditional disciplines such as software engineering, whether to recognize Web engineering as a new field. The issue is how different and independent Web engineering is compared with other disciplines.

Main topics of Web engineering include, but are not limited to, the following areas:


Web Process & Project Management Disciplines
Development Process and Process Improvement of Web Applications
Web Project Management and Risk Management
Collaborative Web Development

Web Requirements Modeling Disciplines
Business Processes for Applications on the Web
Process Modelling of Web applications
Requirements Engineering for Web applications

Web System Design Disciplines, Tools & Methods
UML and the Web
Conceptual Modeling of Web Applications (aka. Web modeling)
Prototyping Methods and Tools
Web design methods
CASE Tools for Web Applications
Web Interface Design
Data Models for Web Information Systems

Web System Implementation Disciplines
Integrated Web Application Development Environments
Code Generation for Web Applications
Software Factories for/on the Web
Web 2.0, AJAX, E4X, ASP.NET 2.0, ASP.NET 3.0, and Other New Developments
Web Services Development and Deployment
Empirical Web Engineering

Web System Testing Disciplines
Testing and Evaluation of Web systems and Applications
Testing Automation, Methods and Tools

Web Applications Categories Disciplines
Semantic Web applications
Ubiquitous and Mobile Web Applications
Mobile Web Application Development
Device Independent Web Delivery
Localization and Internationalization Of Web Applications

Web Quality Attributes Disciplines
Web Metrics, Cost Estimation, and Measurement
Personalisation and Adaptation of Web applications
Web Quality
Usability of Web Applications
Web accessibility
Performance of Web-based applications

Content-related Disciplines
Web Content Management
Multimedia Authoring Tools and Software
Authoring of adaptive hypermedia

BOOKS ON SOFTWARE ENGINEERING
