
Tuesday, October 23, 2012

Portable USB Web Server Bundles


http://www.usbwebserver.net/en/

http://www.server2go-web.de/

http://www.uwamp.com/

http://microapache.kerys.co.uk/

SharePoint Web Application vs Site Collection vs Site vs Sub site



Lately I've been playing a lot with WSS and MOSS. It wasn't really clear to me what the difference is between a web application, a site collection, a site and a sub site. This is what I found out.

On top of the hierarchy is the web application. In technical terms, this is a web site in IIS (with an application pool associated with it). A web application needs at least one site collection. The site collection is the root site of the web site. Below the site collection there can be one or more sites, and a site can contain sub sites.
An overview:
1. Web Application
1.1 Site Collection (SPSite)
1.1.1 Site (SPWeb)
1.1.2 Site (SPWeb)
1.2 Site Collection (SPSite)
1.2.1 Site (SPWeb)
1.2.1.1 Sub Site (SPWeb)
1.2.1.2 Sub Site (SPWeb)
The names in parentheses are the corresponding objects in the SharePoint API.
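The containment rules above can be sketched as a small object model. This is illustrative Python only: the class names mirror the SharePoint API names (SPSite, SPWeb), but it is not the real object model.

```python
# Illustrative model of the SharePoint containment hierarchy.
# NOT the real SharePoint API; the names just mirror it.

class SPWeb:
    """A site or sub site; can contain further sub sites."""
    def __init__(self, title):
        self.title = title
        self.webs = []          # nested SPWeb objects (sub sites)

    def add_web(self, title):
        web = SPWeb(title)
        self.webs.append(web)
        return web

class SPSite:
    """A site collection: the root container for SPWeb objects."""
    def __init__(self, url):
        self.url = url
        self.root_web = SPWeb("root")

class WebApplication:
    """An IIS web site with an application pool; holds >= 1 SPSite."""
    def __init__(self, name):
        self.name = name
        self.sites = []

    def add_site(self, url):
        site = SPSite(url)
        self.sites.append(site)
        return site

# Rebuild the outline from the text:
app = WebApplication("1. Web Application")
sc1 = app.add_site("1.1 Site Collection")
sc1.root_web.add_web("1.1.1 Site")
sc1.root_web.add_web("1.1.2 Site")
sc2 = app.add_site("1.2 Site Collection")
site = sc2.root_web.add_web("1.2.1 Site")
site.add_web("1.2.1.1 Sub Site")
site.add_web("1.2.1.2 Sub Site")
```

The point of the sketch is the nesting: a web application never contains sites directly, only site collections, and everything below a site collection (sites and sub sites) is the same kind of object (SPWeb).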
Backup
You cannot back up a web application from within SharePoint, but you can save its configuration in IIS.
With stsadm.exe -o backup -url <site URL> -filename <filename> you can create a backup of a site and all its sub sites, and you can restore it with stsadm.exe -o restore -url <site URL> -filename <filename>.
As far as I know, there is no built-in option to back up or restore an individual sub site of SharePoint.
Web Applications
In my case I created three web applications: one for my private SharePoint, one for my blog (new blog soon), and one that allows anonymous access but is IP-filtered. Because each SharePoint web application is a web application in IIS, you can configure all of IIS's security features for it, such as IP-based restrictions.

SharePoint detailed step-by-step installation



1. Active Directory

add users:
- SPSQL
- SPADMIN
- SPFARM

add them to an OU:
- sharepoint

add SPADMIN to the Domain Admins group

2. install the SharePoint prerequisites

3. install MS SQL Server 2008 64-bit
- add user mydomain\SPSQL for the SQL Agent & SQL Engine services

4. open SQL Server Management Studio
- Security \ Logins \ add user mydomain\spadmin
- server roles: dbcreator, public, securityadmin

5. install SharePoint Server
- log in as spadmin
- key : VK7BD-VBKWR-6FHD9-Q3HM9-6PKMX
- select 'create server farm'
- DB server : DCPC\SQLexpress     <-- (server name \ SQL instance)
- DB name   : SharePoint_Config
- username  : mydomain\spfarm
- password  : p@ssw0rd
- pass phrase : p@ssw0rd
- choose NTLM
* NTLM is a suite of Microsoft security protocols that provides authentication, integrity, and confidentiality to users.
* Kerberos is a computer network authentication protocol which works on the basis of "tickets".

#While Kerberos has replaced NTLM as the default authentication protocol in an Active Directory
based single sign-on scheme, NTLM is still widely used in situations where a domain controller
is not available or is unreachable. For example, NTLM would be used if a client is not Kerberos capable,
the server is not joined to a domain, or the user is remotely authenticating over the web

-Finish
- IE will open SharePoint. Enter username : mydomain\administrator
  password : p@ssw0rd

- click no

- cancel

- Central Administration will open.

6. IIS Manager review
- open IIS Manager
- expand DOMAIN
- Application Pools --> SharePoint Central Admin : (user: spfarm)
- expand Sites - this is where the Central Administration site is located.

7. SharePoint create site wizard
- Configuration Wizard --> Farm Configuration --> start the wizard
- use existing managed account
- untick Business Data Connectivity Service --> Next
- insert title & description --> OK --> Finish


8. review Application Management
- Manage web applications --> (SharePoint - 80)
- in a new IE tab, browse to http://dcpc/

9. http://dcpc/ Grant Permission
-Grant Permission --> users/groups: domain users --> click 'tick icon'
-ok

10. Application Management - create new site
- New
- IIS web site name : SharePoint - IT
  port : 80
  host header : it.mydomain.local
- database name : WSS_Content_IT
- OK

11. DNS
- Start -> Administrative Tools -> DNS Manager
- domain -> Forward Lookup Zones : right-click, New Host (A or AAAA)
- Name : it
- IP address : 192.168.1.11 (server IP)
- OK -> Done
- review the newly created 'it' record
- Close

12. Central Administration
- Create Site Collection
- web application --> change web application --> SharePoint - IT
- Title : Information Technology Site
- Description : This is a site for IT people
- primary site collection admin user name : mydomain\administrator
- secondary site collection admin user name : mydomain\spfarm
- OK
- right-click it.mydomain.local, open in new tab
- OK

13. it.mydomain.local (cannot access - the loopback check)
- Start - Run - regedit
- HKEY_LOCAL_MACHINE --> SYSTEM --> CurrentControlSet --> Control --> Lsa
- MSV1_0 (right-click) --> New --> Multi-String Value
- name the value BackConnectionHostNames
- right-click BackConnectionHostNames --> Modify
- value data :
it
it.mydomain.local

- log in as administrator --> Run --> cmd --> iisreset
- log in as SPADMIN
- try http://it.mydomain.local/ again

- click the Site Actions drop-down button
  (if it is not available, add the site to the Trusted Sites zone)

14. it.mydomain.local Grant Permission
- this is where the permission level can be selected: Viewer / Full Control / Read / Contribute


15. SharePoint - IT site review in IIS
- open IIS Manager
- expand DOMAIN
- expand Sites - look for the SharePoint - IT site.

16. New site with a specific path (sub site)
- Central Admin --> web application management
- SharePoint - IT --> Managed Paths

- Type : Explicit inclusion
- Path : home
- check URL
- Add Path

- Type : Wildcard inclusion
- Path : electronics
- check URL
- Add Path
- OK

- Central Administration --> Create Site Collection
- web application : http://it.mydomain.local/
- Title : Electronics Site
- URL : select /electronics/ & insert etc
- select template : Publishing --> Enterprise Wiki
- primary site collection admin user name : mydomain\administrator
- secondary site collection admin user name : mydomain\spfarm
- OK

SharePoint installation steps


0. set up Windows Server 2008 64-bit

1. install AD
2. install the SharePoint prerequisites
3. install SQL Express 2008 64-bit
4. install SharePoint 2010

cannot connect to database master at SQL Server



Configure a Windows Firewall for Database Engine Access
http://technet.microsoft.com/en-us/library/ms175043.aspx
This topic describes how to configure a Windows firewall for Database Engine access in SQL Server 2012 by using SQL Server Configuration Manager. Firewall systems help prevent unauthorized access to computer resources. To access an instance of the SQL Server Database Engine through a firewall, you must configure the firewall on the computer running SQL Server to allow access. For more information about the default Windows firewall settings, and a description of the TCP ports that affect the Database Engine, Analysis Services, Reporting Services, and Integration Services, see Configure the Windows Firewall to Allow SQL Server Access. There are many firewall systems available. For information specific to your system, see the firewall documentation. The principal steps to allow access are:
1.Configure the Database Engine to use a specific TCP/IP port. The default instance of the Database Engine uses port 1433, but that can be changed. The port used by the Database Engine is listed in the SQL Server error log. Instances of SQL Server Express, SQL Server Compact, and named instances of the Database Engine use dynamic ports. To configure these instances to use a specific port, see Configure a Server to Listen on a Specific TCP Port (SQL Server Configuration Manager).
2.Configure the firewall to allow access to that port for authorized users or computers.
Note The SQL Server Browser service lets users connect to instances of the Database Engine that are not listening on port 1433, without knowing the port number. To use SQL Server Browser, you must open UDP port 1434. To promote the most secure environment, leave the SQL Server Browser service stopped, and configure clients to connect using the port number.
Note By default, Microsoft Windows enables the Windows Firewall, which closes port 1433 to prevent Internet computers from connecting to a default instance of SQL Server on your computer. Connections to the default instance using TCP/IP are not possible unless you reopen port 1433. The basic steps to configure the Windows firewall are provided in the following procedures. For more information, see the Windows documentation.
As an alternative to configuring SQL Server to listen on a fixed port and opening the port, you can list the SQL Server executable (Sqlservr.exe) as an exception to the blocked programs. Use this method when you want to continue to use dynamic ports. Only one instance of SQL Server can be accessed in this way.
In This Topic
- Before You Begin: Security
- To configure a Windows Firewall for Database Engine access, using SQL Server Configuration Manager

Before You Begin: Security
Opening ports in your firewall can leave your server exposed to malicious attacks. Make sure that you understand firewall systems before you open ports. For more information, see Security Considerations for a SQL Server Installation.

Using SQL Server Configuration Manager
Applies to Windows Vista, Windows 7, and Windows Server 2008. The following procedures configure the Windows Firewall by using the Windows Firewall with Advanced Security Microsoft Management Console (MMC) snap-in. The Windows Firewall with Advanced Security only configures the current profile. For more information about the Windows Firewall with Advanced Security, see Configure the Windows Firewall to Allow SQL Server Access.
To open a port in the Windows firewall for TCP access
1.On the Start menu, click Run, type WF.msc, and then click OK.
2.In the Windows Firewall with Advanced Security, in the left pane, right-click Inbound Rules, and then click New Rule in the action pane.
3.In the Rule Type dialog box, select Port, and then click Next.
4.In the Protocol and Ports dialog box, select TCP. Select Specific local ports, and then type the port number of the instance of the Database Engine, such as 1433 for the default instance. Click Next.
5.In the Action dialog box, select Allow the connection, and then click Next.
6.In the Profile dialog box, select any profiles that describe the computer connection environment when you want to connect to the Database Engine, and then click Next.
7.In the Name dialog box, type a name and description for this rule, and then click Finish.
To open access to SQL Server when using dynamic ports
1.On the Start menu, click Run, type WF.msc, and then click OK.
2.In the Windows Firewall with Advanced Security, in the left pane, right-click Inbound Rules, and then click New Rule in the action pane.
3.In the Rule Type dialog box, select Program, and then click Next.
4.In the Program dialog box, select This program path. Click Browse, and navigate to the instance of SQL Server that you want to access through the firewall, and then click Open. By default, SQL Server is at C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Binn\Sqlservr.exe. Click Next.
5.In the Action dialog box, select Allow the connection, and then click Next.
6.In the Profile dialog box, select any profiles that describe the computer connection environment when you want to connect to the Database Engine, and then click Next.
7.In the Name dialog box, type a name and description for this rule, and then click Finish.
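Whichever procedure you use, a plain TCP connect test confirms afterwards whether the Database Engine port is actually reachable through the firewall. A minimal Python sketch; the host name "dcpc" and port 1433 (the default instance port from the article) are examples to adjust:

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds
    within the timeout, False otherwise."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the default SQL Server instance from a client machine.
# is_port_open("dcpc", 1433)
```

Note that for a named instance or SQL Server Express on a dynamic port you would first look up the actual port (e.g. in the SQL Server error log) or probe UDP 1434 via the SQL Server Browser service, which this simple TCP check does not cover.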

Friday, October 12, 2012

securecoding.cert.org


https://www.securecoding.cert.org/confluence/dashboard.action


Secure programmer: Developing secure programs - The right mentality is half the battle

http://www.ibm.com/developerworks/linux/library/l-sp1/index.html


David Wheeler (dwheelerNOSPAM@dwheeler.com), Research staff member, Institute for Defense Analyses
Summary:  This column explains how to write secure applications; it focuses on the Linux operating system, but many of the principles apply to any system. In today's networked world, software developers must know how to write secure programs, yet this information isn't widely known or taught. This first installment of the Secure programmer column introduces the basic ideas of how to write secure applications and discusses how to identify the security requirements for your specific application. Future installments will focus on different common vulnerabilities and how to prevent them.
Date:  21 Aug 2003
Level:  Intermediate
Also available in:   Korean

It smelled terrible. For over two months, hundreds of thousands of gallons of sewage had been leaking into Australian parks, rivers, and the grounds of a hotel, and no one knew why. Marine plants and animals were dying, and the water in one creek had turned black. On April 23, 2000, police solved the mystery when they arrested a man who had been using a computer and radio to gain complete control over machines governing sewage and drinking water. His motive? Trial evidence suggests he was trying to get a lucrative consulting contract to solve the problems he was causing. It could have been much worse.
A thief identified only as "Maxus" stole 350,000 credit card numbers from the online music company CD Universe and demanded a $100,000 ransom. When CD Universe refused to pay, Maxus publicly posted the numbers -- harming CD Universe's customers and giving them a good reason to shop elsewhere.
The CIA recently learned that Osama bin Laden's al Qaeda terrorist organization has "far more interest" in cyber-terrorism than previously believed. Computers linked to al Qaeda had acquired various computer "cracking" tools, with the intent of inflicting catastrophic harm.
Computer attacks have become a very serious problem. In 1997, the CERT/CC reported 2,134 computer security incidents and 311 distinct vulnerabilities; by 2002 that had risen to 82,094 incidents and 4,129 vulnerabilities. The Computer Security Institute (CSI) and the San Francisco Federal Bureau of Investigation's (FBI) Computer Intrusion Squad surveyed 503 large corporations and government agencies in 2003 and found that 92 percent of the respondents reported attacks. Respondents identified both their Internet connection (78 percent) and their internal systems (36 percent) as frequent points of attack. 75 percent of the respondents acknowledged financial losses, and although only 47 percent could quantify their losses, those who could reported a combined total of over $200 million.
There are many reasons why attacks are on the rise. Computers are increasingly networked, making it easier for attackers to attack anyone in the world with very little risk. Computers have become ubiquitous; they now control many more things of value (making them worth attacking). In the past, customers have been quite willing to buy insecure software, so there had been no financial incentive to create secure software.
The electronic world is now a far more dangerous place. Today, nearly all applications need to be secure applications. Practically every Web application needs to be a secure application, for example, because untrusted users can send data to them. Even applications that display or edit local files (such as word processors) have to be secured, because sometimes users will display or edit data e-mailed to them.
If you develop software, you're in a battleground and you need to learn how to defend yourself. Unfortunately, most software developers have never been told how to write secure applications.
This column will help you learn how to write secure applications. This sort of information is rarely taught in schools, or anywhere else for that matter. If you follow this column, you'll be able to protect your programs against the most common attacks being used today. Although the focus is on the Linux operating system (also called GNU/Linux), nearly all of the material applies to any UNIX-like system, and much of it also applies to other operating systems like Microsoft Windows.
For this first article, I'll start with some basics: security terminology, changing your mindset, the impact of Free-Libre/open source software (FLOSS), and identifying security requirements.
Every field has its own terminology, and the computer security field is littered with acronyms and confusing words. These few definitions should help:
  • An attacker (also called a cracker) is someone trying to make a program or computer do what it's specifically not supposed to do, such as breaking into a computer they don't own to obtain or change private data.
  • A hacker is a computer expert or enthusiast. Not all attackers are hackers -- some attackers don't know anything about computers. Also, not all hackers are attackers -- many hackers write the programs that defend you! The media concentrate only on the hackers who attack computer systems rather than on the defenders, so some people use the term "hacker" to mean only attacking hackers. However, if you think all hackers are attackers, you'll have a lot of trouble understanding many security articles, so I'll use the definition shown here.
  • A flaw is a mistake in a program or in the way the program has been installed. Not all flaws relate to security.
  • A vulnerability is a flaw that makes it possible for a program to fail to meet its security requirements.
  • An exploit is a program that demonstrates or exploits the vulnerability.
The biggest challenge in learning how to write secure software is changing how you think about developing software. Here are a few points that should help:
  • Paranoia is a virtue. Trust nothing until it has earned your trust. Don't assume your input obeys rules you're depending on; check it. Don't ignore error reports from libraries; often, you need to stop processing on an unexpected error. Don't assume that your program is bug free; limit what your program can do, so that bugs are less likely to become security flaws.
  • Normal testing usually won't find security flaws. Most test approaches presume that users are trying to use the program to help them get some work done. Thus, tests will examine how programs work in "average" cases or some maximum values, presuming that users will work in some "random" or "useful" way. In contrast, security flaws often only show up with extremely bizarre values that traditional testing simply wouldn't check. Some developers write very poor code and hope to test it into being correct. That approach simply won't produce secure code, because you can't create enough tests to represent all the odd things an attacker can do.
  • Gadgets (like firewalls) and technologies (like encryption) aren't enough.
  • Identify and learn from past failures. It turns out that nearly all software vulnerabilities are caused by a relatively small set of common mistakes. If you learn what those mistakes are -- and how to avoid them -- your software will be far more secure. In fact, this column will concentrate on how to avoid common past mistakes so that you won't make the same ones.
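The "trust nothing" advice above has a concrete form: validate input against an allowlist describing exactly what is legal, instead of trying to enumerate bad values. A small Python sketch; the username policy here (3-16 word characters) is an invented example, not a rule from the article:

```python
import re

# Allowlist validation: define exactly what a legal username looks
# like and reject everything else. The 3-16 character policy below
# is an example assumption, not a universal rule.
_USERNAME_RE = re.compile(r"[A-Za-z0-9_]{3,16}")

def validate_username(value):
    """Return the value unchanged if it is a legal username,
    otherwise raise ValueError."""
    if not _USERNAME_RE.fullmatch(value):
        raise ValueError("invalid username")
    return value
```

The design choice matters: a blocklist ("reject strings containing ../") silently passes every bad value you didn't think of, while an allowlist fails closed on anything unexpected.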
Free-Libre/open source software programs are those with licenses that give users the freedom to run the program for any purpose, to study and modify the program, and to redistribute copies of either the original or modified program (without having to pay royalties to previous developers). Synonyms include open source software (OSS), "Free Software" (FS) when capitalized, and OSS/FS. "Free Software" and "open source software" can be used interchangeably for the purpose of this article, but "FLOSS" is preferred since it embraces both terms. Typical FLOSS programs are developed by communities of developers, working together and reviewing each other's work. The Linux kernel is FLOSS, as are the Apache Web server and many other programs; FLOSS is becoming increasingly popular in many market niches.
A FLOSS program's source code is available for public review, and there's been a raging controversy about how that affects security. Will FLOSS be more secure because of all the public scrutiny that's possible? Or, will FLOSS be less secure because attackers have more information -- making it easier to create attacks against the program?
The answers are starting to come in, and they're more nuanced and complicated than simple claims like "FLOSS is always more secure." There's certainly evidence that FLOSS can be more secure than proprietary software. For example, the FLOSS OpenBSD operating system has had far fewer vulnerabilities reported than Microsoft Windows. But there's a reasonable counter-claim: since there are more Windows users, perhaps Windows is attacked more often, meaning that Windows vulnerabilities are more likely to be found. It's very doubtful that that's the whole story, but it shows how hard it is to make equal comparisons.
A better example is the Apache Web server: it's far more popular than Microsoft's proprietary IIS Web server, yet Apache has had fewer serious vulnerabilities than IIS. See my paper "Why OSS/FS? Look at the Numbers" (in Resources") for more statistics about FLOSS, including security statistics.
It's also clear that attackers don't really need source code. Just look at all the Microsoft Windows exploits available! More importantly, if attackers needed source code, they could use decompilers, which recreate source code that's good enough for attacking purposes.
But the answer also isn't simply "FLOSS is always more secure." After all, you could change a proprietary program's license to FLOSS without changing its code, and it wouldn't suddenly become more secure. Instead, there are several factors that appear necessary for FLOSS programs to have good security:
  • Multiple people have to actually review the code. All sorts of factors can reduce the likelihood of review, such as being a niche or rarely-used product (where there are few potential reviewers), having few developers, using a rarely-used computer language, or not really being FLOSS (such as a "shared source" license). If every code change is examined by multiple developers, this will usually aid security.
  • At least some of the people developing and reviewing the code must know how to write secure programs. One person can help train others, but you have to start somewhere.
  • Once a vulnerability is found, the repair needs to be developed and distributed quickly.
In short, the most important factor in whether or not a program is secure -- whether it's FLOSS or proprietary -- is whether or not its developers know how to write secure programs.
It's perfectly reasonable to use a FLOSS program if you need a secure program -- but you need to evaluate it in some way to determine if it's secure enough for your purposes.
Before you can determine if a program is secure, you need to determine exactly what its security requirements are. In fact, one of the real problems with security is that security requirements vary from program to program and from circumstance to circumstance. A document viewer or editor (such as a word processor) probably needs to ensure that viewing the data won't make the program run arbitrary commands. A shopping cart needs to make sure that customers can't pick their own prices, that customers can't see information about other customers, and so on.
There's actually an international standard that you can use to formally identify security requirements and determine if they are met. Its formal identifier is ISO/IEC 15408:1999, but everyone refers to it as the "Common Criteria" (CC).
Some contracts specifically require that you use the CC in all its detail, in which case you'll need to know a lot more than I can cover in this article. But for many situations, a very informal and simplified approach is all you need to help you identify your security requirements. So, I'll describe a simplified approach for identifying security requirements, based on the CC:
  • Identify your security environment.
  • Identify your security objectives.
  • Identify your security requirements.
Even if you're doing this informally, write your results down -- they can help you and your users later.
Programs never really work in a vacuum -- a program that is secure in one environment may be insecure in another. Thus, you have to determine what environment (or environments) your program is supposed to work in. In particular, think about:
  • Threats. What are your threats?
    • Who will attack? Potential attackers may include naive users, hobbyists, criminals, disenchanted employees, other insiders, unscrupulous competitors, terrorist organizations, or even foreign governments. Everyone's a target to someone, though some attackers are more dangerous than others. Try to identify who you trust; by definition you shouldn't trust anyone else. It's a good idea to identify who you don't trust, since it will help you figure out what your real problem is. Commercial organizations cannot ignore electronic attacks by terrorists or foreign governments -- national militaries simply can't spend their resources trying to defend you electronically. In the electronic world, all of us are on our own, and each of us has to defend ourselves.
    • How will they attack? Are there any particular kinds of attacks you're worried about, such as attackers impersonating legitimate users? Are there vulnerabilities that have existed in similar programs?
    • What asset are you trying to protect? All information isn't the same -- what are the different kinds of information you're trying to protect, and how (from being read? from being changed?)? Are you worried about theft, destruction, or subtle modification? Think in terms of the assets you're trying to protect and what an attacker might want to do to them.
  • Assumptions. What assumptions do you need to make? For example, is your system protected from physical threats? What is your supporting environment (platforms, network) -- are they benign?
  • Organizational security policies. Are there rules or laws that the program would be expected to obey or implement? For example, a medical system in the U.S. or Europe must (by law) keep certain medical data private.
Once you know what your environment is, you can identify your security objectives, which are basically high-level requirements. Typical security objectives cover areas such as:
  • Confidentiality: the system will prevent unauthorized disclosure of information ("can't read").
  • Integrity: the system will prevent unauthorized changing of information ("can't change").
  • Availability: the system will keep working even when being attacked ("works continuously"). No system can keep up under all possible attacks, but systems can resist many attacks or rapidly return to usefulness after an attack.
  • Authentication: the system will ensure that users are who they say they are.
  • Audit: the system will record important events, to allow later tracking of what happened (for example, to catch or file suit against an attacker).
Usually, as you identify your security objectives, you'll find that there are some things your program just can't do on its own. For example, maybe the operating system you're running on needs hardening, or maybe you depend on some external authentication system. In that case, you need to identify these environmental requirements and make sure you tell your users how to make those requirements a reality. Then you can concentrate on the security requirements of your program.
Once you know your program's security objectives, you can identify the security requirements by filling in more detail. The CC identifies two major kinds of security requirements: assurance requirements and functional requirements. In fact, most of the CC is a list of possible assurance and functional requirements that you can pick and choose from for a given program.
Assurance requirements are processes that you use to make sure that the program does what it's supposed to do -- and nothing else. This might include reviewing program documentation to see that it's self-consistent, testing the security mechanisms to make sure they work as planned, or creating and running penetration tests (tests specifically designed to try to break into a program). The CC has several pre-created sets of assurance requirements, but feel free to use other assurance measures if they help you meet your needs. For example, you might use tools to search for likely security problems in your source code (these are called "source code scanning tools") -- even though that's not a specific assurance requirement in the CC.
Functional requirements are functions that the program performs to implement the security objectives. Perhaps the program checks passwords to authenticate users, or encrypts data to keep it hidden, and so on. Often, only "authorized" users can do certain things -- so think about how the program should determine who's authorized.
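For the password-checking functional requirement mentioned above, the usual pattern is to store a salted hash rather than the password itself and to compare digests in constant time. A minimal Python sketch using only the standard library; the iteration count is an arbitrary example work factor:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # example work factor; tune for your hardware

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-SHA256 digest; returns (salt, digest).
    A fresh random salt is generated when none is supplied."""
    if salt is None:
        salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, ITERATIONS)
    return salt, digest

def check_password(password, salt, digest):
    """Re-derive the digest from the candidate password and compare
    in constant time to avoid timing side channels."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)
```

`hmac.compare_digest` matters here: an early-exit byte comparison leaks, through timing, how much of the digest matched.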
So, once you know what your program has to do, is that enough? No. A cursory reading of well-known security vulnerability lists (such as Bugtraq, CERT, or CVE) reveals that today, most security vulnerabilities are caused by a relatively small set of common implementation mistakes. There isn't a single standard terminology for these mistakes, but there are common phrases such as "failing to validate input," "buffer overflow," "race condition," and so on. Unfortunately, most developers have no idea what these common mistakes are, and they repeat the mistakes others have made before them.
Future installments of this column will delve into what these common mistakes are, and more importantly, how to avoid making them. In many cases, the way to avoid the mistake is both subtle and simple -- but if you don't know how to avoid the mistake, you'll probably repeat it.
In this column, I'm usually not going to try to separate different kinds of applications (such as "Web applications," "infrastructure components," "local applications," or "setuid applications"). The reason? Today's applications are increasingly interconnected, made out of many different parts. As a result, it's quite possible that your "single" application may have different parts, each of a different kind! Instead, I think it's better to learn how to develop secure applications in any situation, and then note when specific guidelines apply.
The next installment will start by covering how to validate input. This is trickier than it sounds. For example, we'll see why looking for incorrect input is a mistake, and how attackers can often sneak in illegal negative numbers without using the "-" character. Later topics will include avoiding buffer overflows, minimizing privileges, and avoiding race conditions. It will take a number of articles to cover the common mistakes, but by following this column, you'll be able to avoid the mistakes responsible for nearly all of today's software vulnerabilities.
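The "negative number without a minus sign" teaser can be previewed with fixed-width integers: a large digits-only value, stored into a signed 32-bit field the way a C int would be, comes out negative, so scanning the input string for "-" proves nothing. A Python demonstration of the reinterpretation:

```python
import struct

def parse_as_int32(text):
    """Parse a digits-only string, then store the result in a
    signed 32-bit field, mimicking a C program that uses int."""
    value = int(text)                      # no '-' in the input...
    packed = struct.pack("<I", value & 0xFFFFFFFF)
    return struct.unpack("<i", packed)[0]  # ...yet the result can be negative

# parse_as_int32("4294967295") yields -1 even though the input
# string contains no minus sign.
```

This is why range-checking the parsed value (after conversion, in its final representation) beats pattern-matching the input text.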

David Wheeler
David A. Wheeler is an expert in computer security and has long worked on improving development techniques for large and high-risk software systems. He is the author of the book "Secure Programming for Linux and Unix HOWTO" and is a validator for the Common Criteria. David also wrote the article "Why Open Source Software/Free Software? Look at the Numbers!" and the Springer-Verlag book Ada95: The Lovelace Tutorial, and is the co-author and lead editor of the IEEE book Software Inspection: An Industry Best Practice. This developerWorks article presents the opinions of the author and does not necessarily represent the position of the Institute for Defense Analyses. You can contact David at dwheelerNOSPAM@dwheeler.com.