Increasing Your Sales Through Help Desk Software

Help desk software lets you organize, prioritize, and deliver tasks efficiently. Company websites use this software to provide better service to customers who want to ask questions, complain, or give feedback. Any customer-related concern can be addressed properly through the software. For the company, a help desk improves efficiency, especially because it assigns tasks automatically. Instead of leaving you to sort through all the information gathered by hand, it sorts the data automatically and lets you take specific action on the problems that arise. Help desk software is, in short, designed to help you work smarter.

Competition in the market is fierce, and a company that is not efficient risks being overtaken by its competitors. A help desk makes working smarter feasible, and the results are more apparent: meeting the wants and needs of customers no longer takes excessive effort. Beyond that, company owners can spend their time on tasks other than solving minor issues and focus instead on delivering the best product in the market. Help desk software can be purchased and downloaded over the internet, and the benefits are quickly felt.

Improving Workflow Through Help Desk Software

Help desk software is widely used by businesses because of its numerous advantages. The most commonly cited benefit is the improvement of workflow for business owners and employees. Developers provide applications in which the workflow is automated so that tasks run smoothly. This means issues are sent to the specific group of people responsible for them. For instance, if an issue is customer-related, it is routed to the people who focus on customer service; if, on the other hand, the problem is IT-related, that department is alerted to take first action.
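As a rough sketch, the routing rule described above amounts to matching a ticket against per-team criteria. The team names and keywords below are invented for illustration and are not taken from any particular help desk product:

```python
# Minimal sketch of automated ticket routing.
# Team names and keyword lists are illustrative assumptions.

TEAM_QUEUES = {
    "customer": [],   # customer-facing support team
    "it": [],         # internal IT department
    "billing": [],    # billing/accounts team
}

KEYWORDS = {
    "customer": ["complaint", "feedback", "order"],
    "it": ["server", "login", "network"],
    "billing": ["invoice", "refund", "payment"],
}

def route_ticket(subject: str) -> str:
    """Assign a ticket to the first team whose keywords match; default to 'customer'."""
    text = subject.lower()
    for team, words in KEYWORDS.items():
        if any(w in text for w in words):
            TEAM_QUEUES[team].append(subject)
            return team
    TEAM_QUEUES["customer"].append(subject)  # fallback queue
    return "customer"
```

Real products layer priorities, escalation timers, and round-robin assignment on top of this, but the core idea is the same: a rule engine deciding which queue each issue lands in.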

This workflow automation reduces manual errors and increases the efficiency and reliability of the company. Customers are the core of any business, and whatever needs they have should be addressed properly. With a help desk, customer concerns and other work-related tasks can be handled easily. It provides more detailed reporting and makes problems easier to resolve. Businesses can take advantage of this feature, considering how much it helps the organization. Help desk software can be purchased and installed from several sites on the internet.

Managing The Company Through Help Desk Software

If you are establishing an online business, one of the best ways to make it effective is help desk software. The internet offers wide opportunities, but at the same time the competition is tough, which is why adding a help desk can increase your chances of gaining more loyal customers. This is not software that draws an audience; rather, it manages the tasks that customers generate. If a customer has a complaint or feedback, it can be addressed right away.

There is no need to complete tasks manually, because the software gathers and sorts them accordingly. On the business side, it promotes loyalty and trust among customers, since their feedback is given special attention. A help desk is a good investment and can be beneficial in the long run. If you install it on a page of your website, customers will feel valued and can contact you without hesitation. You simply have to choose the help desk software that best suits your needs so that you can take full advantage of it.

The Advantages Of Professional RAID 10 Recovery

RAID 10 recovery has a number of advantages in comparison to other RAID levels. First, RAID 10 recovery has both a manual and a software procedure, and you are free to choose the one that is most convenient for you. The software procedure has four major steps, is used by professionals, and does not take long to get your lost data back. Second, the recovery process is mostly done by professionals. These experts have the knowledge and resources needed for a proper RAID 10 data recovery, and they work tirelessly to ensure that all your critical lost data is recovered.

In cases where your disks are too damaged for the experts to retrieve the data, you are typically given a refund. Server recovery experts usually offer a free evaluation of your disks, providing a full diagnostic report detailing what they can recover so you can make your decision before signing a contract with them. After you sign, the engineers carry out the job on site, which offers a very secure environment. The whole process is fast, typically taking no more than five business days.

Why RAID 10 Is Effective

RAID 10 is a popular configuration widely used in enterprise servers and database systems. It provides a good level of performance and data protection, but it is still prone to failures such as disk failure, operator error, and controller failure. Recovery can be done either manually or with software.

For manual RAID 10 recovery, identifying the cause of the array failure is the first step. Then, member disks containing the same data (mirror copies) should be excluded so that you can reconstruct the parameters of the RAID 10: start offset, block size, disk order, and first disk. Disk editor software that lets you compare the contents of disks can be used to eliminate the duplicate disks. After the removal of the extra disks, the remaining disks form a RAID 0 array, so you can follow the RAID 0 recovery procedure from there. Recovery using software requires downloading the tool and connecting the RAID 10 member disks to a PC that runs it, then following the instructions provided. Depending on your technical knowledge and which approach is easier for you, you can use either the manual method or software for RAID 10 recovery.
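The final de-striping step can be sketched in a few lines. This is an illustrative simplification, assuming intact disk images, a known block size, and a known disk order (the parameters listed above); real recovery tools handle far messier cases:

```python
# Minimal sketch of RAID 0 reassembly, the last step of the manual
# RAID 10 procedure: once duplicate mirror members are removed, the
# surviving disks are de-striped back into one logical image.
# Parameter names follow the article: start offset, block size, disk order.

def destripe_raid0(disks: list[bytes], block_size: int, start_offset: int = 0) -> bytes:
    """Interleave equal-sized blocks from each member disk, in disk order."""
    members = [d[start_offset:] for d in disks]
    out = bytearray()
    n_blocks = min(len(m) for m in members) // block_size
    for i in range(n_blocks):
        for m in members:  # round-robin across the array members
            out += m[i * block_size:(i + 1) * block_size]
    return bytes(out)
```

In practice, finding the correct block size and disk order is the hard part; the de-striping itself is mechanical once those parameters are known.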

RAID technology allows users to have high storage capacity with a high level of redundancy, which is beneficial compared to cheaper, less reliable single disk drives. RAID 10 is a combination of RAID 1 and RAID 0, usually implemented in hardware rather than by the operating system: disk mirrors are joined together to create a RAID 0 array. The scheme has proven to work efficiently under heavy input and output loads.

Today, RAID is associated with data storage schemes for computers. It can divide or replicate data among many hard disk drives. The main objectives are to increase input and output performance and to enhance the reliability of data. RAID 10 uses mirrors to provide redundant space for data storage.

The setup involves a number of considerations. The first is the ability to retrieve lost data: an irreversible media error can cause data loss if a failed drive is not replaced. RAID 10 also provides faster write speeds, making it a first choice for high-load data setups. In short, RAID 10 is effective because it does the work it is meant to do.

Consider This Before Starting Hard Drive Repair

While deciding whether to opt for hard drive repair or complete drive replacement, there are certain factors to consider, such as the lifetime of the hard drive. The lifetime of a hard drive varies according to factors such as how often you use it: running your system for 10 to 12 hours a day or longer will shorten the lifespan considerably, meaning hard drive repairs will be needed sooner than usual. If you purchase a low-end computer, your hard drive will become outdated and need repairs and updates sooner as well; cheaper machines have cheaper hard drives, which typically do not last as long as pricier ones. The average lifespan of a hard drive is about five years. On a desktop computer, though, hard drive repairs generally cost half of what a new computer would, so you can extend the lifespan well past those five years.

A laptop’s lifespan is usually 3 to 5 years, depending on how powerful the hard drive is when you purchase it. Laptop hard drives are more difficult to repair than desktop hard drives and, in turn, more expensive to repair, so opting for an updated model is often the better bet.

Of course, if you look after your hard drive, keep the vents clean and do not handle it roughly, then you could keep the hard drive repair professional at bay for up to a decade.

Hard drive malfunction is a common occurrence on any computer. Even if your computer is relatively new, there are instances in which your hard drive can malfunction; a manufacturing defect, for example, will readily be replaced by the manufacturer while the drive is still under warranty. What is critical, though, is the information contained on the hard drive. Although it is now easier to perform a hard drive repair, the possibility of losing your data is still a stressful ordeal. One thing is critical to the successful recovery of your data: stop working on a degraded hard drive. Once you hear clicking sounds from the drive, immediately shut off its power to avoid corrupting your data.

Several options are available to help you deal with a hard drive failure. The safest route to a successful hard drive repair is the professional help of a data recovery expert who has dealt with numerous cases of hard disk failure and has the necessary equipment and technology for the task. On the other hand, if you are knowledgeable about computer hardware, you can perform the hard drive repair yourself and download software to assist you.

The Importance of RAID Recovery

RAID is short for redundant array of inexpensive (or independent) disks. In storage technology, it is a way of storing data on two or more disk drives while presenting them as one logical storage unit. It increases performance and fault tolerance through data redundancy. Nowadays many motherboards are equipped with built-in RAID controllers, though RAID is not necessary for most personal computers. RAID can duplicate data among hard disk drives, which matters most in corporate environments.

There are several types of RAID, commonly known as RAID levels. Levels 0 through 6 are distinguished from each other by their data placement patterns and degrees of redundancy. RAID can be implemented either in hardware or in software. Generally, people who use RAID encounter few problems, but when problems do arise, RAID recovery can become a major headache.
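The levels differ mainly in how they trade capacity for redundancy. As a rough illustration (these are the standard textbook formulas, not tied to any particular product), usable capacity for n equal-sized disks works out as follows:

```python
# Rough sketch: usable capacity for common RAID levels, given n disks
# of equal size. Illustrative only; real arrays also reserve metadata space.

def usable_capacity(level: int, n_disks: int, disk_size_gb: float) -> float:
    if level == 0:       # striping, no redundancy: all space usable
        return n_disks * disk_size_gb
    if level == 1:       # mirroring: one disk's worth of usable space
        return disk_size_gb
    if level == 5:       # single parity: lose one disk's worth
        return (n_disks - 1) * disk_size_gb
    if level == 6:       # double parity: lose two disks' worth
        return (n_disks - 2) * disk_size_gb
    if level == 10:      # striped mirrors: half the space usable
        return (n_disks // 2) * disk_size_gb
    raise ValueError(f"level {level} not modeled in this sketch")
```

The same trade-off drives recovery difficulty: a RAID 0 array with no redundancy is the hardest to recover from disk failure, while mirrored and parity levels can survive one or two failed members.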

When a RAID fails, RAID recovery is essential. First, jot down every detail you know about the original array configuration. For a hardware configuration, label the disks with a marker; the controller ports and cables must also be labeled properly. You can use a numbering scheme to distinguish the disks. In general, it is important to label the cables, ports, disks, and other equipment clearly. RAID recovery requires not only thorough knowledge of the procedure but also adequate skill to ensure a complete recovery.

The Detection and Repair Function of RAID Servers

Businesses that have a lot of data to store and need a reliable backup system increasingly rely on RAID arrays. This type of data storage can be found in advanced computer systems and server environments, and it has proven reliable not only in production setups but in homes as well. Both IT professionals and average home users should have an efficient RAID server and recovery process in place in case the data store is ever threatened. One common problem with RAID architecture is failure of the controller; when this part of the setup fails, the data becomes inaccessible.

To regain access to the data, a compatible controller must be installed or the original one repaired. Sometimes, though, files must be accessed immediately and a replacement controller is not available. In this scenario, a RAID recovery tool can help access the files so they can be stored on a separate hard disk. In addition, RAID recovery software can repair arrays by detecting their working parameters, classification, and manufacturer. The latest recovery tools are capable of working with many different RAID arrays. You can also check out RAID drive recovery experts like Hard Drive Recovery Associates.

Data in a RAID array can be lost or corrupted at any time for several reasons, such as failure of the main controller program or problems with a hard drive. Array parameters can be lost, or the RAID data itself can be damaged, either of which can stop the RAID from functioning properly. The corrupted array must be identified and fixed to make it function properly again. Arrays broken for many common reasons can be recovered with RAID recovery software within a short time, which is the method most widely used today. Most RAID recovery software can detect the type of RAID array and fix the problem to recover the damaged or lost data.

If the system's hardware has crashed and the RAID arrays are corrupted, be aware that running a RAID recovery on faulty hardware can cause serious additional problems. Make sure the hardware problem is corrected before running the recovery. If the problem seems complicated or unpredictable, also check the system's main memory.

Before you engage a RAID recovery expert for RAID 0, it is critical to understand the problems that could have caused the RAID drive failure. With RAID 0 failure, there are two possibilities: failure of one or more of the component (member) disks, or a failure unrelated to the component disks. If the failure can be traced to a component disk, the treasured data is, for practical purposes, already lost, since RAID 0 arrays are not redundant; some smaller files may still be recoverable. So here is the key realization to keep in mind: if a component disk fails, it is usually time to say goodbye to the data.

The other kind of failure is a non-component failure, meaning the problem is linked to the controller or to operator error. If this is the case, RAID recovery is still possible and data can be recovered from RAID 0. This is the main reason you need to understand the problem before embarking on your RAID recovery efforts.

A standard hard disk drive stores a considerable amount of data, depending on the unit's price and application. There is also a storage technology that combines a number of disk drives; the schemes for distributing data across the drives are known as RAID levels. The acronym stands for Redundant Array of Independent Disks, a kind of storage virtualization. Like a standard disk drive, a RAID array can become corrupted, in which case RAID recovery must be employed. The recovery system can be software or a remote data recovery service. These technologies help ensure that the data stored in the array can be recovered even when the server has received a substantial shock.

One great advantage of a RAID recovery company is that it lets users retrieve and back up the data before the affected array is fixed. During the recovery process, files (photos, videos, or documents) can be stored on removable storage, another partition, or an FTP server. Using advanced search algorithms, RAID recovery can produce results even when the file system is already missing, and it can even function when the array is missing one of its disks.

About 99.999% Uptime – Oracle Style

Hewlett-Packard will partner with Cisco Systems and Oracle Corp. to develop products for higher-performance Unix servers, products that promise to keep an enterprise network up and running 99.999 percent of the time, from the hardware to the operating system to the database to the application. In parallel, HP is implementing a similar solution across its NetServer line running Windows NT and cluster server software from Microsoft.

In the collaboration with Cisco and Oracle, the foundation for the new program, called 5nines:5minutes, will be new servers based on the HP-UX operating system. With production expected to begin during the first half of this year, the new systems will be upgradeable via a hardware module to the upcoming Merced microprocessor, the first implementation of the IA-64 architecture, when it becomes available sometime in late 1999.

True 99.999 percent uptime, or no more than five minutes of downtime per year, won't happen until the year 2000, said Bill Russell, VP and GM of the enterprise systems group at HP. “We will have 4nines availability in 1999, with the release of a major new system.” He said that by then, the new systems will have even more hot-swappable capabilities, from the hardware to the operating system components.

On the hardware side, Cisco will focus on three levels of network redundancy: the physical, the logical link, and the backbone protocol. Resulting products could include hot-swappable line cards and hot-standby routing protocols, said Jayshree Ullal, VP of marketing for the enterprise line business at Cisco. In the area of software, Cisco will work to deliver Web-based management tools and improve upon its current software tools. Meanwhile, Oracle will continue to improve the fail-over protection of its Parallel Server technology, which is part of the Oracle8 network computing database.

The promise of a network that is up and running virtually all the time is a difficult one to keep. To meet this challenge, the companies will address certain categories of potential failures. The first is avoidance: preventing downtime in the first place. HP will use its consulting services to help customers build highly available, integrated infrastructures by reviewing current information technology environments for points of potential failure and implementing appropriate architectures.

The second category, regeneration, addresses network recovery: how quickly a network can restore itself without disturbing applications. This requires quick, transparent repair of any component in the IT infrastructure via a hot-swappable model. The third category, inclusion, covers extending uptime across both hardware and operating systems. The fourth category, simplicity, aims at eliminating human error and reducing installation complexity. This will be done through software that can manage, monitor, and administer an entire information technology environment from a single console; in this case, through HP's OpenView product. The goal will be to monitor an enterprise from one Web browser user interface with drag-and-drop functions within the database management tools, for example to watch for traffic slowdowns on a network.

HP says this new program will change the way vendors compete. Said Mr. Russell: “Today, suppliers differentiate themselves through products that offer system availability. But as businesses are forced to compete globally, and as the Internet becomes a more common way for companies to do business, availability will be measured in terms of the customers’ perception of the availability.”
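The program's name comes from simple availability arithmetic: "five nines" (99.999 percent) allows roughly five minutes of downtime per year, while the "4nines" milestone Russell mentions allows a little under an hour. A quick sketch of the calculation:

```python
# Downtime budget implied by an availability of N nines.
# 5 nines = 99.999% uptime, i.e. about 5.3 minutes of downtime per year.

MINUTES_PER_YEAR = 365.25 * 24 * 60  # ~525,960 minutes

def annual_downtime_minutes(nines: int) -> float:
    """Allowed downtime per year at 99.9...% availability with `nines` nines."""
    availability = 1 - 10 ** (-nines)
    return MINUTES_PER_YEAR * (1 - availability)
```

At three nines the budget is nearly nine hours a year, which shows why each additional nine is a large engineering step, not an incremental one.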

Acer Server Targets Lower Cost Small Business Segment

Following its successful thrust into the consumer arena, Acer is aiming at the small business market. And though the market promises growth, unless Acer can take on the Goliaths already there, it may hit the wall a few times.

The company first tackled that segment of the server market last October by unveiling the Acer Altos 930 Server. This week, the company is releasing a network-ready version of the server with Internet, Web and communications software that sells for under $5,000.

The small business market for LANs and servers will grow from $2.5 billion in 1997 to $4.1 billion in 2001, according to International Data Corp. (IDC). Servers represent the lion's share of that spending, and most of it will go to PC servers. And as the number of small businesses with networks grows, so will the number of connected PCs.

Last year alone saw a 21 percent increase in unit shipments of PC servers going into small businesses, says IDC. PC server revenues for 1997 jumped 35 percent to exceed $10.5 billion on shipments of 1.75 million units. IDC forecasts PC server shipments to double over the next four years.

“Servers are at the heart of a growing number of small business networks,” said Susan Frankle, director of server research at IDC. “More than half of network users with under 20 employees now have servers. This is a real sea change from the past–a direct result of manufacturer efforts to build a small business franchise.”

Many small businesses are looking to add networks. What is interesting, says Raymond Boggs, director of small business research at IDC, is the number interested in server-based networks, due to issues of performance, Internet access or groupware.

“With the help of computer dealers and resellers, small businesses are assembling networks that are both powerful and affordable,” said Mr. Boggs. “The major LAN and server companies–3Com, Bay Networks, Compaq, Hewlett-Packard, IBM, Intel and Microsoft–are all paying attention to the market and it’s starting to pay off.”

While the top four PC server vendors–Compaq Computer, HP, IBM and Dell Computer–continued to dominate last year, Apple Computer, NCR, Olivetti and Acer lost market share there. That doesn’t bode well for Acer.

The principal difference between the 930 Server and the newly released ‘plug-and-serve’ 930S is the integration of Microsoft’s BackOffice Small Business Server 4.0 suite. The suite, says Acer, is menu-driven and covers file, print and application services, Internet connection capabilities and communications systems.

The AcerAltos 930 Server, which uses Intel’s 558 chip, features two Pentium II 233-300MHz processors with 512K ECC cache, synchronous DRAM slots for 512MB ECC memory and an AGP port. It also incorporates a CD-ROM drive, onboard ultrawide SCSI, video functions on a separate port, an integrated 10/100 network interface card and 64-bit controllers.

The server also includes a tape back-up system for quick restoration of data and configurations in case of disaster, and CPR, which brings the server back to the same state in which it was shipped.

The AcerAltos 930S Server ships at the end of the month through computer resellers. By Acer’s accounts, the server should sell well, since company executives said demand for Altos 930 Servers came in well over forecast. They “can’t build enough,” one Acer executive said.

Acer thinks its strengths in just-in-time manufacturing and the significant success its PCs have seen in the consumer market set it apart from other server manufacturers in the small business market. Its user-friendly, easy-upgrade features in the self-maintaining, pared-down Aspire line have been popular among home users.

Maybe, says Mr. Boggs of IDC, but not enough to eat into the space of the market leaders. Acer’s vertical set-up and the bundled nature of this offering may help it dent that fast-growing but increasingly competitive market, he said. But its mere top 10 status won’t help it make inroads, given the entrenched players it will be butting heads with in that space, namely IBM, Compaq and HP, not to mention Dell, which many small businesses may turn to based on similar positive experiences with PCs.

“Acer’s never been exceptional (in the small business market), but they are hoping to rebuild and redefine their position (there),” said Mr. Boggs. “They have a shot, but it’s an environment that’s been inviting a lot of attention, and one that’s going to get increasingly challenging.”

Mr. Boggs called the product “kind of attractive,” but said he has been seeing similar products from other manufacturers priced even lower. Furthermore, resellers usually install software tailored to the customer.

Ultimately, what may hurt Acer is service and support. Unlike some of the bigger players, who have dedicated teams for customer queries, Acer plans on resellers being the first point of contact for repairs and problems. Some small business owners prefer the one-stop, fix-it-all-in-one-place route, but many want to turn to a support structure within the corporation.

“Issues of reliability come up very high on the small business priority list,” Mr. Boggs added. “And networks are challenging because there are so many different components that have to work in harmony.”

What Should I Know About Mac Hard Drive Recovery?

If you want to protect all your valuable data, there is a fairly simple solution. Professional Mac hard drive recovery has a lot of advantages, though also some trade-offs. The biggest advantage is that you do not have to worry that a damaged hard disk means your data is unrecoverable, because you can simply call a company that understands Mac disk failures well. If your hard drive has broken down, all you have to do is bring it to a specialist who knows exactly how to do data recovery. In most cases you will recover all your data very quickly, and for smaller amounts of data, everything that was on the broken hard drive can be copied onto a CD or DVD.

Nowadays, recovering data is a simpler job because many types of hard disks come with options like express recovery. These kinds of disks are used especially by companies that can't afford to lose a lot of information to a hard drive failure. If you want to find out more about this topic, there are plenty of resources on the web.

Nowadays, access to information is so critical that people guard their data almost as closely as their lives. Nonetheless, there are instances in which precious data gets lost for one reason or another, especially with large volumes of data on a hard disk drive, where three outcomes are possible: complete, partial, or zero recovery. With a Mac hard drive recovery procedure, the first step can be as simple as connecting the hardware to a backup computer. If the hard drive is still functional, complete data recovery is possible. On the other hand, partial or zero recovery may be the result if the hard drive has sustained a high level of physical failure.

For users to undertake an initial troubleshooting procedure, all that is needed is simple observational skill. Average PC users can perform this on their own, without the assistance of a technical expert. However, these techniques are not the answer to every hard drive problem: they can fix only minor issues, and the most advanced ones are left to software and hardware experts. Ultimately, PC owners and operators must learn the proper way of protecting their equipment to prevent some of the problems associated with hard disk drives. Prevention is always better than dealing with unknown PC trouble.

It has been established that the technology available today makes it possible to repair hard drives far more effectively than in the past. In certain cases it can be done yourself with DIY instructions from the internet. But how cost effective is the process? Would it not be cheaper simply to replace the hard drive rather than have it repaired? Often, if you repair the hard drive yourself, the answer is that repair is cheaper: you can order the parts from the internet.

Unfortunately, you would face a time delay due to shipping. From the time the order is placed until you complete the repairs to the hard drive, considerable time can pass. Taking the hard drive to a professional will cost more, since they charge for the components used as well as labor.

Whether the information on the hard drive is worth it, or whether you should buy a new hard drive, is up to you. There have been cases with a mere $100 difference between repairing a hard drive and the price of a new one.
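The repair-or-replace decision above boils down to simple arithmetic. A hedged sketch, with all prices and the per-day cost of waiting as illustrative assumptions:

```python
# Back-of-the-envelope repair-vs-replace comparison.
# All figures passed in are illustrative assumptions, not market prices.

def cheaper_option(parts_cost: float, labor_cost: float, days_waiting: int,
                   downtime_cost_per_day: float, new_drive_cost: float) -> str:
    """Return 'repair' if repairing (including the cost of waiting) beats replacing."""
    repair_total = parts_cost + labor_cost + days_waiting * downtime_cost_per_day
    return "repair" if repair_total < new_drive_cost else "replace"
```

Note that this ignores the value of the data itself; if irreplaceable data is on the drive, recovery may be worth far more than either number.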

A crashed hard drive is a very stressful experience. A lot of your work may be put on hold while the drive undergoes restoration, and it is especially difficult when valuable data is stored on the drive and recovering it proves a challenge. Losing significant data is a taxing ordeal because it costs money through interruptions to your business workflow, plus wasted time and effort if the data cannot be recovered. All the hard work you put into building up your database can vanish in one sweep because the hard drive broke down.

Nowadays, though, hard drive repair is not difficult, thanks to a number of experienced companies adept in the intricacies of repairing a malfunctioning hard disk drive. These people are highly trained in hard drive repair and stay updated on new developments in hard drive technology and recovery methods. As such, it is now much easier to recover lost data and restore hard drives to their original functional state. The fear of losing important data can be put to rest.


Unix Vs. NT Battle Transformed The Landscape

In the Unix market, the top players in descending order are Sun Microsystems, Hewlett-Packard (HP), Silicon Graphics (SGI), IBM, and Digital Equipment (DEC, which was recently acquired by Compaq), according to IDC.

Sun ranks as the largest supplier of workstations in the Unix market with a 45% market share in 1997. As the single vendor with a Unix-only strategy, Sun also has the most complete line of Unix workstations, from $2,500 at the entry level to about $50,000 on the high end.

Sun’s product line is the Ultra series: the Ultra 5, 10, 30, 2, 60, and 450, which scale accordingly in power. The Ultra 2 is the only S-Bus workstation in the series; Sun moved to the PCI bus when it introduced the Ultra 30 last July. Then in January of this year, it announced the Ultra 5, 10, and 60 workstations as well as its new high-end 3D graphics option, Elite3D. The Ultra 5 and 10 were Sun’s new entry-level products, priced to compete directly with NT workstations. These two workstations use the UltraSparc-IIi processor, the next iteration of technology developed from Sun’s more powerful UltraSparc-II. With the IIi, Sun integrated supporting technologies into one chip, which resulted in a cheaper manufacturing process and allowed Sun to create Unix systems that compete in price with NT.

The other workstations in the Ultra line use the UltraSparc-II chip. In May, Sun announced the availability of a processor upgrade for the Ultra 60, from 300- to 360MHz. It also announced price cuts of as much as 27% on the Ultra 5, 10, and 60. In July, it announced a 333MHz processor for the Ultra 10. It also introduced an entire new system aimed at high-level graphics, the Ultra 450, which can be configured with one to four UltraSparc-II processors and Elite 3D graphics.

The number two Unix workstation vendor, HP, is particularly strong in engineering environments. Its workstation offerings break down into three basic systems to which customers can add a range of graphics subsystems: Visualize B-Class, which is the entry level, Visualize C-Class, which is midrange, and Visualize J-Class, which is high end. Prices for HP’s systems range from $5000 to $40,000.

HP first introduced the lines in June of 1996 with a C-Class workstation, followed by a B-Class system in September of the same year. Currently, the B-Class systems use HP’s PA-RISC 32-bit 7300LC processor at either 132MHz or 180MHz. Since its introduction, the B-Class has been refreshed once in frequency (from 160MHz to 180MHz in September, 1997) and has dropped in price by as much as 50%. There are two offerings in the C-Class line as well, based on 200- and 240MHz versions of the PA-RISC 64-bit 8200 processor. According to HP, the C-Class systems offer about twice the overall system performance of the B-Class. The J-Class came to market in September of 1997 and has just one workstation in its line, the J2240, which uses two of the 240MHz processors.

Recent performance enhancements to the entire line came in June when HP created a free plug-in for its 10.20 HP-UX operating system aimed at improving graphics performance, especially in the C- and J-Class systems. Additionally, it introduced a new version of its operating system, HP-UX V11, which again should increase system performance across the line. On April 1st, HP also dropped prices by a minimum of 25% across the line. Next year, HP plans to incorporate versions of its next-generation 64-bit PA-RISC processor, the PA-8500, into all three lines.

SGI’s NT and Unix Plans

SGI maintains the number-three position in the Unix market, with tremendous expertise in its core markets – manufacturing, animation, and simulation. Last year SGI announced it would introduce an Intel-based Windows NT product in 1998, which is expected to ship sometime this fall. SGI also has committed to Intel’s IA 64 architecture for future systems in addition to its MIPS-based products.

But SGI plans to keep a strong presence in the Unix market as well with its O2 and Octane families, ranging in price from around $5900 to $39,000. SGI introduced the O2, a single-processor machine, two years ago this October to replace the Indy line. The O2 is available with three MIPS processors, the 200MHz R5000, the 225MHz R10000 (announced in August), and the 250MHz R10000 (announced in May). The O2 has one level of graphics that scales with processor speed. In 1999, SGI plans to incorporate the next-generation MIPS processor, the R12000, into the O2 product line. (The R12000 is expected to reach production in the first half of 1999.) Because the O2’s architecture is a synchronous design, SGI expects the R12000’s faster clock speed to increase the O2’s overall system performance.

The Octane, which replaced the Indigo II family back in January of 1996, can be single- or dual-processor and uses either the 225MHz or 250MHz R10000. The graphics for this workstation are scalable, from a one-geometry engine model (the SE) to a two-geometry engine model (the SSE) to a model with two geometry engines plus a texture module (the MXE). In July, SGI announced price reductions on the Octane line of as much as 36%. Future plans include an Octane based on the R12000 as well as the R14000 and enhanced graphics.

As the number-four vendor in the Unix market, IBM continues to have a loyal base of users, especially in the manufacturing arena. Its business has been tied mostly to CAD environments using Catia. IBM’s workstation products include the 43P Models 140 and 240, Model 397, and Model F50. In general, prices range from $5000 to $20,000.

Both the 43P 140 and 240 were introduced in October of 1996 but have been upgraded continually. The 43P 140, which is a single-processor system, represents the entry level of IBM’s Unix line, although it could be considered midrange depending on its configuration. IBM upgraded the 43P 140 to a 332MHz PowerPC processor from a 233MHz version last October; the 43P 240 was upgraded from a 200MHz PowerPC to a 233MHz version last April. The Model 397, which is based on the 160MHz Power2 Super Chip and was introduced last October, is optimized for compute-intensive applications that require high floating-point performance, such as analyzing the surface planes of a 3D model. The F50, which is scalable from one 332MHz PowerPC processor to four such processors, was originally designed as a server, according to IBM. But because of customer demand from the CAD industry, IBM added support for 3D graphics in March of this year. By the end of the year, IBM plans to announce upgrades to most of its Unix workstations.

With a strong focus on Windows NT, DEC has receded to the fifth spot in the Unix market, and its Unix business is not likely to grow substantially now that it is part of Compaq. Although Digital has not been a powerhouse in the workstation market in recent years, it has established itself in several areas, including animation and GIS. Expect Compaq to announce a new Alpha-based Unix product by the end of the third quarter.

NT Players

In the Windows NT market, IDC data shows that HP and Compaq are in a virtual tie for the top spot, but Dell is gaining ground rapidly (see ‘1997 NT Workstation Shipments’). The remaining top players include IBM, DEC, and Intergraph, respectively.

On the NT side of its business, HP’s products break into three types of workstations: the Kayak XA, which is the low end; the Kayak XU, which is the midrange; and the Kayak XW, which is the high end. Prices range from $1879 to $12,873 as of press time, but HP was going to announce further price reductions in August. Recent product introductions include the HP XA-S, announced in June, which is scalable from one to two Pentium II processors and is HP’s first product to support the Intel 440BX chip set (which provides support for AGP and 100MHz bus). In July, HP announced both XU and XW products based on Intel’s Xeon processor. The XW product was also the first in HP’s NT line to use its fx6 graphics accelerator, which uses six geometry-acceleration chips based on HP’s PA-RISC floating-point processor technology to boost OpenGL performance. HP was also planning to announce 450MHz versions of its XA, XA-S, and XU products.

Compaq’s workstation products were undergoing a facelift at press time. Although the company is still offering its 5100, 6000, and 8000 workstations, it is moving to a three-tiered structure consisting of its Affordable Performance (AP) systems at the low end, Scalable Performance (SP) systems in the midrange, and Extreme Performance (EP) systems on the high end. The first products introduced in this new hierarchy were the AP400, a dual-processor workstation based on the Intel Pentium II processor announced in June, and the AP200, a single-processor Pentium II workstation announced in July. Both systems also use Intel’s 440BX chip set. Compaq announced its first SP product, the SP700, at the end of June. This will be a single- or dual-processor workstation that uses Intel’s Xeon chip and the next generation of Compaq’s highly parallel system architecture, designed to increase performance via multiple data paths, high-speed data buses, and balanced system resources. For EP products, Compaq will announce an NT Alpha-based workstation some time in the third quarter.

Third-place Dell, which celebrated its one-year anniversary in the workstation market this July, has applied its direct sales model, backed up by a sales force aimed at its largest customers, to workstations with great success. It has consistently driven prices down, offering products that range in price from $2500 (including monitor) to $20,000, and has forced competitors to follow. However, Dell does not provide the consulting services that Compaq, HP, and IBM provide. As in the general-purpose market, its strength is quality systems at low prices. Dell recently launched two new systems in its Precision WorkStation line: in April it announced the Precision 410, which supports single or dual 350MHz or 400MHz Pentium II processors and the 440BX chipset; then in June Dell introduced the Precision 610, which supports the Xeon processor and comes in single- or dual-processor configurations.

With the addition of its IntelliStation line of Intel-based workstations, IBM has expanded beyond its CAD base. IBM offers three levels of IntelliStations, ranging from a low of $2300 to as high as $15,000: the entry-level EPro, the midrange MPro, and the high-end ZPro. The EPro, which was announced in the first quarter of this year, is a single-processor system based on the Pentium II processor. The MPro, introduced in the second quarter, also uses the Pentium II but can be configured with two processors. The ZPro, which IBM announced this quarter, will use the Xeon processor and come in either single or dual configurations. A number of third-party 3D graphics accelerators are available for all three lines.

The smaller players in the NT workstation market cannot compete with the large PC vendors across the entire market and therefore are trying to open up new markets. Intergraph, for example, is particularly strong in 3D graphics, which the company recently reinforced by introducing its Wildcat 3D technology, a scalable 3D architecture for Windows NT systems. Intergraph has become a leader in focusing on nontraditional markets, such as digital-content creation, especially in broadcast and publishing.

Intergraph offers two basic series of workstations, the TD, which is aimed at the entry level, and the TDZ-2000, which can range from mid to high level depending on configuration. In May, Intergraph announced the most recent addition to its TD line, the TD-250, which uses a 333MHz Pentium II and starts as low as $1500. Intergraph also introduced the TDZ-2000 GL2 and GT1 in May, both of which come in single or dual 400MHz Pentium II configurations. The GT1 is Intergraph’s first workstation to feature its concurrent multiport architecture, designed to deal with I/O and memory subsystem bandwidth barriers and improve total system performance. Intergraph has announced support for the Xeon as well, but it did not have any more details at press time.

Although the ‘other’ category of NT workstation vendors is quite large, coming in at 36% in 1997, IDC predicts that percentage will shrink over time as the major PC vendors take over the Windows NT workstation market, much as they have in the commercial PC market.

Overall, there’s no denying that workstations and PCs are coming closer together. The compelling economics of PCs are forcing Unix vendors to deliver products faster and at lower prices while driving innovation in 3D technology. The competition is already fierce and will continue to be so. The race continues.

Choosing A Server Platform

Nik Silver, director of professional services with Web agency Hyperlink, says, “The Intel platform can be very quick to develop on, because of the tools and because it can be easier to integrate your Web applications with databases and applications you already have, especially ones developed using Microsoft tools. Unix boxes may be less user-friendly, but they offer advantages in terms of scalability and supplier independence.”


As Silver adds, that’s not to say you can’t achieve scalability on Intel platforms: Microsoft’s own Web site demonstrates that. But it may be more demanding technically.

If you decide to go for Unix, there are further choices to be made. Silver says, “Many buyers will go for commercial suppliers such as Sun or Silicon Graphics, vendors with a great deal of experience in this area and with excellent back-up and support. But again, a surprising number will choose the free Linux platform. It doesn’t have corporate backing but there’s a lot of information on the Internet and it, too, has good tools in areas like security.”

Whatever server architecture you go for, there’s a variety of ways to use it. “Some of the best, most scalable solutions in terms of quantity of information and manageability are those that use a three-tier architecture,” says Silver.
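The three-tier idea Silver mentions is simply a strict separation between presentation, application logic, and data storage, so each tier can be scaled or replaced independently. A minimal sketch of that separation, with all class and method names invented for illustration rather than taken from any product mentioned here:

```python
# Minimal three-tier sketch: presentation, application logic, and data
# storage sit behind separate interfaces. All names are illustrative.

class DataTier:
    """Data tier: owns storage; here, a simple in-memory dict."""
    def __init__(self):
        self._stock = {"widget": 12, "gadget": 0}

    def stock_level(self, item):
        return self._stock.get(item, 0)


class ApplicationTier:
    """Application tier: business rules only, no storage or formatting."""
    def __init__(self, data):
        self.data = data

    def can_order(self, item, quantity):
        return self.data.stock_level(item) >= quantity


class PresentationTier:
    """Presentation tier: formats results for the client."""
    def __init__(self, app):
        self.app = app

    def order_page(self, item, quantity):
        if self.app.can_order(item, quantity):
            return f"{item}: order accepted"
        return f"{item}: out of stock"


site = PresentationTier(ApplicationTier(DataTier()))
print(site.order_page("widget", 3))   # widget: order accepted
print(site.order_page("gadget", 1))   # gadget: out of stock
```

Because the presentation tier never touches storage directly, the data tier could be moved to a dedicated database server without changing the other two tiers.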

Developments in load-balancing software mean that it may be better to put your information on a number of smaller servers rather than one large one, says Adam Twiss, director of server software developer Zeus Technology, whose achievements range from Global One’s site to the UK Godzilla site.

“Hardware suppliers may encourage you to buy the biggest server you can, but it’s often cheaper and more effective to have several smaller machines,” he says.
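The principle behind spreading a site over several smaller machines can be sketched with the simplest load-balancing policy, round-robin, where each incoming request is handed to the next server in rotation. Production balancers also weigh server health and current load; the names below are made up for illustration:

```python
# Round-robin dispatch: each request goes to the next server in the
# rotation, wrapping around when the list is exhausted.

from itertools import cycle


class RoundRobinBalancer:
    def __init__(self, servers):
        self._rotation = cycle(servers)

    def route(self, request):
        server = next(self._rotation)
        return f"{request} -> {server}"


balancer = RoundRobinBalancer(["web1", "web2", "web3"])
for req in ["GET /", "GET /books", "GET /search", "GET /basket"]:
    # The fourth request wraps around to web1 again.
    print(balancer.route(req))
```

With a scheme like this, capacity grows by adding another small machine to the list rather than replacing one large server.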

Twiss claims suppliers of Unix boxes favour larger servers because they are more profitable – they’re not competing with NT workstations the way they are at the lower end of the range. “Further up the range, they can pretty well add an extra zero to the price,” he says.

Ideally you should choose a Web server architecture that will support not only what you want to do today but what you’re going to be doing in a couple of years’ time. This should be the case even if your initial experiments with Web technology are just that – experiments.

One of the most important questions to ask is who the site is intended for. Some sites that begin life on the corporate intranet are later thrown open to the public on the Internet. This possibility should be thought about from the start. Twiss says, “If the server’s in a corporate local area network environment, you know how many users you’ve got and the network’s fast. On the Internet, you have to think about what happens if all 15 million users start trying to access your site simultaneously with slow modems.”

It’s also important to consider the type of functions you’re likely to want in the future. The hardware requirements are vastly different for a reference site with essentially static data compared to an interactive site where customers are placing orders and can look up stock levels on your live system.


If you don’t know how the site will be used, building or buying a prototype is a sensible way of finding out more about what you can do with Web technology. But some users treat the prototype as a throwaway rather than try to adapt it once they’ve learned more about what they want.

Remember that although Internet technology is meant to be open, some architectures may make your options more proprietary. Silver says, “If you’re not careful, it’s easy to find yourself using specific features of your development platform. That may leave you with a need to rebuild a significant amount if you move to another one.

“It’s difficult to find a suite of software that runs equally well on both Unix and Intel platforms, although individual products do.

“We’ve addressed that problem by building an object library that abstracts the function from the platform,” he says. “We built one service on Solaris and wanted to reuse some of the functionality for an Intel platform. It wasn’t a trivial task, but the fact that we’d used the object library technique made it much easier.”
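The object-library technique Silver describes amounts to coding against an abstract interface and supplying a per-platform implementation behind it, so porting means writing a new backend rather than rewriting the application. A sketch of the pattern, with all class and method names invented for illustration:

```python
# Platform abstraction via an object library: application code depends
# only on the abstract interface, never on a specific platform class.

from abc import ABC, abstractmethod


class PlatformServices(ABC):
    """Abstract interface the application codes against."""
    @abstractmethod
    def temp_dir(self):
        ...


class UnixServices(PlatformServices):
    def temp_dir(self):
        return "/tmp"


class WindowsServices(PlatformServices):
    def temp_dir(self):
        return "C:\\Temp"


def build_cache_path(platform, name):
    # Application logic stays identical across platforms; only the
    # injected backend changes.
    return f"{platform.temp_dir()}/{name}"


print(build_cache_path(UnixServices(), "session.dat"))
print(build_cache_path(WindowsServices(), "session.dat"))
```

Moving the application from one platform to another then touches only the backend class, which is the reuse Silver reports between Solaris and Intel.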

Boring as it may be, sticking with what you’ve got can be a rational solution as well as the line of least resistance.

“A lot of people want to integrate their Web system into something else,” points out Twiss. “If the something else is an Oracle database, say, then it might be best to use the same type of hardware that’s running the Oracle database. If you’re mostly Microsoft, it may be better to go with a Windows-based architecture for your Web server.”

Not only does this avoid the problems of integrating two different platforms, but you’ll have access to some of the skills you need.

In practice, many organisations consult professional advisers or suppliers. If you are asking an Internet service provider to host your Web site, then the choice of platform is usually the provider’s. If you go to an agency like Hyperlink, it will work with you to choose the right platform for your needs.

You have the choice of managing it or paying the agency to do it for you. Consultancies also exist to help you make the choice; as always, it’s important to check how independent the advice is.

Questions to ask

* Who is in the audience and how many people are in it?

* What are the upgrade paths if the site is used more than anticipated?

* How serious are the consequences if the server fails?

* How much will it cost to add resilience if the server becomes more important to the business?

* Do your staff have the skills to run the server in-house?

* Will the server interface easily with your existing systems if required to do so?

* Is this site purely for use on a corporate intranet, or will you want to go public in the future?

* How easy will it be to manage the server remotely, if that’s the intention?

* What are the pros and cons of splitting the application over several servers?

Bookseller sticks with Sun servers

Blackwell’s recently revamped its Online Bookshop with different software, but decided to stick with Sun hardware. Herbert Kim, general manager of the Online Bookshop, says, “Given that we already have a Sun shop and that the software we wanted to use was compatible with Sun, it made sense to stick with it.

“Having made that decision, we’re happy with it, and have no major problems with Sun’s hardware or software,” he says.