Thursday, April 24, 2008

Winweb - how to reset system data

During the learning process, it is common to feel unhappy with the way you have entered your data and to wish you could undo everything and restart from scratch.

For situations like this, Winweb provides a way to reset the system.

First, go to the MySetting menu (refer to image 1). Then click the menu item labelled "Data Backup".

You'll see the following form (refer to image 2). There is a button labelled "Reset". Pressing this button instructs the system to empty itself, as if you were starting a new application. While this is helpful for clearing up a mess, be warned that all previous data will be gone forever. It is advisable to do a backup first before resetting the system.

Winweb's MyAccounting Interface


http://www.box.net/shared/04gbdkfwgo

Wednesday, April 9, 2008

Programming Tool

A programming tool or software tool is a program or application that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object.

http://en.wikipedia.org/wiki/Programming_tool
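
To make the idea of combining simple tools concrete, here is a small Python sketch (my own illustration, not from the article) that chains two classic command-line tools, grep and wc, into a pipeline to count the lines of a log file that mention "ERROR". The file name app.log is a placeholder.

# Sketch: combining two small tools (grep and wc) into a pipeline.
# "app.log" is a hypothetical file name used for illustration.
import subprocess

grep = subprocess.Popen(["grep", "ERROR", "app.log"],
                        stdout=subprocess.PIPE)
wc = subprocess.Popen(["wc", "-l"],
                      stdin=grep.stdout, stdout=subprocess.PIPE)
grep.stdout.close()  # let grep receive SIGPIPE if wc exits first
count = wc.communicate()[0].decode().strip()
print("lines mentioning ERROR:", count)

Each program does one small job; the pipeline composes them into a larger task, much like the hand-tools analogy above.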

Prototyping

Prototyping is the process of quickly putting together a working model (a prototype) in order to test various aspects of a design, illustrate ideas or features, and gather early user feedback. Prototyping is often treated as an integral part of the system design process, where it is believed to reduce project risk and cost. Often one or more prototypes are made in a process of iterative and incremental development, where each prototype is influenced by the performance of previous designs; in this way, problems or deficiencies in design can be corrected. When the prototype is sufficiently refined and meets the functionality, robustness, manufacturability, and other design goals, the product is ready for production.

http://en.wikipedia.org/wiki/Prototyping

Software Validation and Verification

Verification ensures that the final product satisfies or matches the original design (low-level checking) — i.e., you built the product right. This is done through static testing.

Validation checks that the product design satisfies or fits the intended usage (high-level checking) — i.e., you built the right product. This is done through dynamic testing and other forms of review.

According to the Capability Maturity Model (CMMI-SW v1.1), "Validation - The process of evaluating software during or at the end of the development process to determine whether it satisfies specified requirements. [IEEE-STD-610] Verification - The process of evaluating software to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase. [IEEE-STD-610]"

In other words, verification is ensuring that the product has been built according to the requirements and design specifications, while validation ensures that the product actually meets the user's needs, and that the specifications were correct in the first place. Verification ensures that ‘you built it right’. Validation confirms that the product, as provided, will fulfill its intended use. Validation ensures that ‘you built the right thing’.

http://en.wikipedia.org/wiki/Verification_and_Validation_%28software%29
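
To make the distinction concrete, here is a minimal Python sketch of my own (the function, its spec, and the numbers are all hypothetical). Verification checks the code against its written specification; validation asks whether that specification itself matches what users actually need, which no assert can answer by itself.

# Hypothetical spec: discount() shall return the price reduced by 10%
# for orders of 100.00 or more, and the unchanged price otherwise.

def discount(price):
    return price * 0.9 if price >= 100.00 else price

# Verification ("did we build the product right?"): check the code
# against the written spec.
assert discount(100.00) == 90.00
assert discount(50.00) == 50.00
print("verification checks passed")

# Validation ("did we build the right product?") asks whether a flat
# 10% discount at the 100.00 threshold is what the customer actually
# wanted; that takes reviews, acceptance testing, and user feedback
# rather than unit tests against the spec.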



Software verification

Software verification is a broad and complex discipline of software engineering whose goal is to assure that software fully satisfies all the expected requirements.

http://en.wikipedia.org/wiki/Software_verification

Royce's Waterfall Model

The unmodified "waterfall model". Progress flows from the top to the bottom, like a waterfall.


In Royce's original waterfall model, the following phases are followed in order:

  1. Requirements specification
  2. Design
  3. Construction (AKA implementation or coding)
  4. Integration
  5. Testing and debugging (AKA validation)
  6. Installation
  7. Maintenance

To follow the waterfall model, one proceeds from one phase to the next in a purely sequential manner. For example, one first completes the requirements specification, which is then set in stone. When the requirements are fully completed, one proceeds to design. The software in question is designed, and a blueprint is drawn up for implementers (coders) to follow; this design should be a plan for implementing the requirements given. When the design is fully completed, an implementation of that design is made by coders. Towards the later stages of this implementation phase, disparate software components produced by different teams are integrated. After the implementation and integration phases are complete, the software product is tested and debugged; any faults introduced in earlier phases are removed here. Then the software product is installed, and later maintained to introduce new functionality and to remove bugs.

Thus the waterfall model maintains that one should move to a phase only when its preceding phase is completed and perfected. Phases of development in the waterfall model are discrete, and there is no jumping back and forth or overlap between them.

However, there are various modified waterfall models (including Royce's final model) that may include slight or major variations upon this process.

http://en.wikipedia.org/wiki/Waterfall_model

Typical CASE tools

Computer-aided software engineering (CASE) is the use of software tools to assist in the development and maintenance of software. Tools used to assist in this way are known as CASE tools.

All aspects of the software development lifecycle can be supported by software tools, and so the use of tools from across the spectrum can, arguably, be described as CASE; from project management software through tools for business and functional analysis, system design, code storage, compilers, translation tools, test software, and so on.

However, it is the tools that are concerned with analysis and design, and with using design information to create parts (or all) of the software product, that are most frequently thought of as CASE tools. CASE applied, for instance, to a database software product, might normally involve:

  • Modelling business / real world processes and data flow
  • Development of data models in the form of entity-relationship diagrams
  • Development of process and function descriptions
  • Production of database creation SQL and stored procedures (a sketch follows below)
http://en.wikipedia.org/wiki/Computer_aided_software_engineering
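
As a toy illustration of the last bullet above (a sketch of the idea, not any real CASE product), the Python below takes a small hand-written data model and generates CREATE TABLE statements from it; all table and column names are invented for the example.

# Sketch of CASE-style code generation: derive SQL DDL from a simple
# data model. A real tool would read the model from ER diagrams;
# here it is written by hand, and all names are hypothetical.
model = {
    "customer": [("id", "INTEGER PRIMARY KEY"),
                 ("name", "VARCHAR(100)")],
    "invoice": [("id", "INTEGER PRIMARY KEY"),
                ("customer_id", "INTEGER REFERENCES customer(id)"),
                ("total", "DECIMAL(10,2)")],
}

def generate_ddl(model):
    for table, columns in model.items():
        cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in columns)
        yield f"CREATE TABLE {table} (\n  {cols}\n);"

for statement in generate_ddl(model):
    print(statement)

The design information (the model) is the single source of truth, and part of the software (the database schema) is produced from it mechanically, which is the essence of this kind of CASE support.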

Joint Requirements Development (JRD) Sessions (a.k.a. Requirement Workshops)

Requirements often have cross-functional implications that are unknown to individual stakeholders and often missed or incompletely defined during stakeholder interviews. These cross-functional implications can be elicited by conducting JRD sessions in a controlled environment, facilitated by a trained facilitator, wherein stakeholders participate in discussions to elicit requirements, analyze their details and uncover cross-functional implications. A dedicated scribe and Business Analyst should be present to document the discussion. Utilizing the skills of a trained facilitator to guide the discussion frees the Business Analyst to focus on the requirements definition process.
http://en.wikipedia.org/wiki/Requirements_engineering

Stakeholders inhibit requirements gathering

Stakeholder issues

Steve McConnell, in his book Rapid Development, details a number of ways users can inhibit requirements gathering:

  • Users don't understand what they want or don't have a clear idea of their requirements
  • Users won't commit to a set of written requirements
  • Users insist on new requirements after the cost and schedule have been fixed
  • Communication with users is slow
  • Users often do not participate in reviews or are incapable of doing so
  • Users are technically unsophisticated
  • Users don't understand the development process
  • Users don't know about present technology

This may lead to a situation where user requirements keep changing even after system or product development has started.

http://en.wikipedia.org/wiki/Requirements_engineering

Agile software development

Agile software development is a conceptual framework for software engineering that promotes development iterations throughout the life-cycle of the project.

There are many agile development methods; most minimize risk by developing software in short periods of time. Software developed during one unit of time is referred to as an iteration, which may last from one to four weeks. Each iteration is an entire software project, including planning, requirements analysis, design, coding, testing, and documentation. An iteration may not add enough functionality to warrant releasing the product to market, but the goal is to have an available release (without bugs) at the end of each iteration. At the end of each iteration, the team re-evaluates project priorities.

Agile methods emphasize face-to-face communication over written documents. Most agile teams are located in a single open office sometimes referred to as a bullpen. At a minimum, this includes programmers and their "customers" (customers define the product; they may be product managers, business analysts, or the clients). The office may include testers, interaction designers, technical writers, and managers.

Agile methods also emphasize working software as the primary measure of progress. Combined with the preference for face-to-face communication, agile methods produce very little written documentation relative to other methods. This has resulted in criticism of agile methods as being undisciplined.

Rapid application development (RAD)

Rapid application development (RAD) is a software development process initially developed by James Martin in 1991. The methodology involves iterative development and the construction of prototypes. Traditionally, the rapid application development approach involves compromises in usability, features, and/or execution speed. It is described as a process through which the development cycle of an application is expedited. RAD thus enables quality products to be developed faster, saving valuable resources.

http://en.wikipedia.org/wiki/Rapid_application_development

Software crisis

The software crisis was a term used in the early days of software engineering, before it was a well-established subject. The term was used to describe the impact of rapid increases in computer power and the complexity of the problems which could be tackled. In essence, it refers to the difficulty of writing correct, understandable, and verifiable computer programs. The roots of the software crisis are complexity, expectations, and change.

Conflicting requirements have always hindered the software development process. For example, while users demand a large number of features, customers generally want to minimise the amount they must pay for the software and the time required for its development.

The term software crisis was coined by F. L. Bauer at the first NATO Software Engineering Conference in 1968 at Garmisch, Germany. An early use of the term is in Edsger Dijkstra's 1972 ACM Turing Award Lecture, "The Humble Programmer" (EWD340), published in the Communications of the ACM. Dijkstra states:

[The major cause of the software crisis is] that the machines have become several orders of magnitude more powerful! To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.

Edsger Dijkstra, The Humble Programmer

The causes of the software crisis were linked to the overall complexity of the software process and the relative immaturity of software engineering as a profession. The crisis manifested itself in several ways:

  • Projects running over-budget.
  • Projects running over-time.
  • Software was of low quality.
  • Software often did not meet requirements.
  • Projects were unmanageable and code difficult to maintain.

Various processes and methodologies have been developed over the last few decades to "tame" the software crisis, with varying degrees of success. However, it is widely agreed that there is no "silver bullet" ― that is, no single approach which will prevent project overruns and failures in all cases. In general, software projects which are large, complicated, poorly-specified, and involve unfamiliar aspects, are still particularly vulnerable to large, unanticipated problems.


http://en.wikipedia.org/wiki/Software_crisis


Legacy system

A legacy system is an old computer system or application program that continues to be used because the user (typically an organization) does not want to replace or redesign it.


Overview

Legacy systems are considered to be potentially problematic by many software engineers (for example, see Bisbal et al., 1999) for several reasons. Legacy systems often run on obsolete (and usually slow) hardware, and spare parts for such computers can become increasingly difficult to obtain. These systems are often hard to maintain, improve, and expand because there is a general lack of understanding of the system; the designers may have left the organization, leaving no one to explain how it works. Such a lack of understanding can be exacerbated by inadequate documentation, or by manuals getting lost over the years. Integration with newer systems may also be difficult because new software may use completely different technologies.

Despite these problems, organizations can have compelling reasons for keeping a legacy system, such as:

  • The costs of redesigning the system are prohibitive because it is large, monolithic, and/or complex.
  • The system requires close to 100% availability, so it cannot be taken out of service, and the cost of designing a new system with a similar availability level is high.
  • The way the system works is not well understood. Such a situation can occur when the designers of the system have left the organization, and the system has either not been fully documented or such documentation has been lost.
  • The user expects that the system can easily be replaced when this becomes necessary.
  • The system works satisfactorily, and the owner sees no reason for changing it; or in other words, re-learning a new system would have a prohibitive attendant cost in lost time and money.
http://en.wikipedia.org/wiki/Legacy_system

Smart Computing


Truth table


A truth table is a mathematical table used in logic (specifically in connection with Boolean algebra, Boolean functions, and propositional calculus) to compute the functional values of logical expressions on each of their functional arguments, that is, on each combination of values taken by their logical variables. In particular, truth tables can be used to tell whether a propositional expression is true for all legitimate input values, that is, logically valid.

"The pattern of reasoning that the truth table tabulates was Frege's, Peirce's, and Schröder's by 1880. The tables have been prominent in literature since 1920 (Lukasiewicz, Post, Wittgenstein)" (Quine, 39). Lewis Carroll had formulated truth tables as early as 1894 to solve certain problems, but his manuscripts containing his work on the subject were not discovered until 1977 [1]. Wittgenstein's Tractatus Logico-Philosophicus uses them to place truth functions in a series. The wide influence of this work led to the spread of the use of truth tables.

Truth tables are used to compute the values of propositional expressions in an effective manner that is sometimes referred to as a decision procedure. A propositional expression is either an atomic formula (a propositional constant, propositional variable, or propositional function term, for example, Px or P(x)) or built up from atomic formulas by means of logical operators, for example, AND (∧), OR (∨), NOT (¬). For instance, Fx ∧ Gx is a propositional expression.

The column headings on a truth table show (i) the propositional functions and/or variables, and (ii) the truth-functional expression built up from those propositional functions or variables and operators. The rows show each possible valuation of T or F assignments to (i) and (ii). In other words, each row is a distinct interpretation of (i) and (ii).

Truth tables for classical logic are limited to Boolean logical systems in which only two logical values are possible, false and true, usually written F and T, or sometimes 0 and 1, respectively.
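
As a small illustration (my own sketch, not part of the article), the Python below enumerates every valuation of two variables and tabulates the expression A ∧ ¬B:

# Sketch: print the truth table of A AND (NOT B) by enumerating
# all T/F valuations of the two variables.
from itertools import product

def fmt(v):
    return "T" if v else "F"

print("A B | A AND (NOT B)")
for a, b in product([True, False], repeat=2):
    print(fmt(a), fmt(b), "|", fmt(a and not b))

Each printed row is one interpretation of the variables, exactly as described above; the expression is logically valid only if its column is T in every row (this one is not).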


Applications of truth tables in digital electronics

In digital electronics (and computer science, fields of engineering derived from applied logic and math), truth tables can be used to reduce basic boolean operations to simple correlations of inputs to outputs, without the use of logic gates or code. For example, a binary addition can be represented with the truth table:

A B | C R
1 1 | 1 0
1 0 | 0 1
0 1 | 0 1
0 0 | 0 0

where

A = First Operand
B = Second Operand
C = Carry
R = Result

This truth table is read left to right:

  • Value pair (A,B) equals value pair (C,R).
  • Or, for this example, A plus B equals result R, with the carry C.

Note that this table does not describe the logic operations necessary to implement this operation; rather, it simply specifies the mapping of input values to output values.

In this case it can only be used for very simple inputs and outputs, such as 1s and 0s. However, if the number of types of values one can have on the inputs increases, the size of the truth table will increase.
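
One possible gate-level realisation of this table is the classic half adder, where the carry is A AND B and the result is A XOR B. The Python sketch below (my own illustration; the table itself deliberately prescribes no implementation) reproduces the table from those two operations:

# Sketch: reproduce the binary-addition truth table from one possible
# implementation, a half adder: C = A AND B, R = A XOR B.
print("A B | C R")
for a in (1, 0):
    for b in (1, 0):
        c = a & b  # carry
        r = a ^ b  # result (sum bit)
        print(a, b, "|", c, r)

Running this prints exactly the four rows shown above, confirming that the half adder is one implementation consistent with the specified function.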

http://en.wikipedia.org/wiki/Truth_table

Offshoring

Offshoring describes the relocation of business processes from one country to another. This includes any business process such as production, manufacturing, or services.

Offshoring can be seen in the context of either production offshoring or services offshoring. After its accession to the WTO in 2001, China emerged as a prominent destination for production offshoring. After technical progress in telecommunications improved the possibilities of trade in services, India became a leading country in this domain, though many parts of the world are now emerging as offshore destinations.

The economic logic is to reduce costs. If some people can use some of their skills more cheaply than others, those people have the comparative advantage. The idea is that countries should freely trade the items that cost the least for them to produce.

http://en.wikipedia.org/wiki/Offshoring

Outsourcing

Outsourcing involves the transfer of the management and/or day-to-day execution of an entire business function to an external service provider.[2] The client organization and the supplier enter into a contractual agreement that defines the transferred services. Under the agreement the supplier acquires the means of production in the form of a transfer of people, assets and other resources from the client. The client agrees to procure the services from the supplier for the term of the contract. Business segments typically outsourced include information technology, human resources, facilities and real estate management, and accounting. Many companies also outsource customer support and call center functions like telemarketing, customer service, market research, manufacturing, designing, web development, content writing, ghostwriting and engineering.

http://en.wikipedia.org/wiki/Outsourcing

Distributed Development

A Distributed Development project is a research & development project that is done across many business worksites or locations. It is a form of R&D where the project members may not see each other face to face, but they are all working collaboratively toward the outcome of the project. Often this is done through email, the Internet and other forms of quick long-distance communication.

It is different from outsourcing because all of the organizations are working together on an equal level, instead of one organization subcontracting the work to another.

It also is similar to, but different from, a virtual team because there is a research element.

http://en.wikipedia.org/wiki/Distributed_Development

Tuesday, April 1, 2008

IT & People 0804

TOPICS:
1. Introduction To IT
2. Internet and WWW
3. Software Applications
4. Computer Hardware
5. Microsoft Word Lab
6. Microsoft Excel Lab
7. Microsoft Power Point Lab
8. System Software
9. Communication
10. Database
11. Security and Ethics
12. IS Development
13. Programming
14. Presentation & Revision

Introduction To Computing 0804

TOPICS:
1. Introduction
2. Internet
3. World Wide Web
4. Computer Hardware
5. Machine Architecture
6. Machine Code
7. Machine Code
8. Memory Organization
9. Input
10. Output
11. System Software
12. Secondary Storage
13. Database
14. Revision

ACTIVITY
1.
2.
3. Summarizing articles on Wikipedia. (details)

Information Technology 0804b


TOPICS

1. Introduction To IT (note)

2. Internet and WWW (note)

3. Software Applications (note)

4. Computer Hardware (note)

5. Input (note)

6. Output (note)

7. Storage (note)

8. System Software (note)

9. Communication (note)

10. Database (note)

11. Security and Ethics (note)

12. IS Development (note)

13. Programming (note)

14. Presentation & Revision (note)

Download all notes.

Information Technology 0804a


TOPICS

1. Introduction To IT (note)

2. Internet and WWW (note)

3. Software Applications (note)

4. Computer Hardware (note)

5. Input (note)

6. Output (note)

7. Storage (note)

8. System Software (note)

9. Communication (note)

10. Database (note)

11. Security and Ethics (note)

12. IS Development (note)

13. Programming (note)

14. Presentation & Revision (note)

Download all notes.