Computer Modern is the family of typefaces used by default by the typesetting program TeX. It was created by Donald Knuth with his METAFONT program and was most recently updated in 1992. The font family was later superseded by CM-Super (Computer Modern Super), whose latest release dates from 2002, and complemented by CM-LGC, which adds support for Latin, Greek and Cyrillic scripts and whose latest release dates from 2005. Both CM-Super and CM-LGC are included in TeX Live, a modern TeX distribution.
The Computer Modern typefaces are described in great detail (including full source code) in the book Computer Modern Typefaces, volume E in the Computers and Typesetting series, which is unique in the history of font design: in Knuth's words, they "belong to the class of sets of books that describe precisely their own appearance."
As implied by the name, Computer Modern is a modern font. Modern, or "Didone", fonts have high contrast between thick and thin elements, and their axis of "stress" or thickening is perfectly vertical. Computer Modern, specifically, is based on Monotype Modern 8a, and like its immediate model it has a large x-height relative to the length of ascenders and descenders.
The most unusual characteristic of Computer Modern, however, is the fact that it is a complete type family designed with the METAFONT system. The Computer Modern source files are governed by 62 distinct parameters, controlling the widths and heights of various elements, the presence of serifs or old-style numerals, whether dots such as the dot on the "i" are square or rounded, and the degree of "superness" in the bowls of lowercase letters such as "g" and "o". Computer Modern is by no means the only METAFONT-designed typeface, but it is by far the most mature and widely used.
The advance of printer technology has reduced the need for software rasterizers like METAFONT, and outline fonts (rendered by the printer or display system) are now generally preferred. As a result, many users have migrated from the original METAFONT-based Computer Modern to PostScript-based replacements, chiefly the Type 1 implementation of Computer Modern maintained by the American Mathematical Society, or Latin Modern. The Latin Modern implementation, now standard in the TeX community, was produced with a METAFONT/MetaPost derivative called METATYPE1.
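For LaTeX users, this migration typically amounts to loading the Latin Modern package in the document preamble. The following is a minimal sketch, assuming a standard TeX Live installation in which the fontenc and lmodern packages are available:

    \documentclass{article}
    % T1 font encoding, so accented characters map to single outline glyphs
    \usepackage[T1]{fontenc}
    % Latin Modern: outline (Type 1/OpenType) fonts derived from Computer Modern
    \usepackage{lmodern}

    \begin{document}
    This paragraph is set in Latin Modern, which closely follows the
    metrics and appearance of Knuth's original Computer Modern.
    \end{document}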
Key features
Agility improves with users' ability to rapidly and inexpensively re-provision technological infrastructure resources.
Application programming interface (API) accessibility enables machines to interact with cloud software in the same way that a user interface facilitates interaction between humans and computers. Cloud computing systems typically use REST-based APIs.
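As a hedged illustration of such an API (the endpoint, resource names, and token below are invented placeholders, not any particular provider's interface), resources are typically created, queried, and destroyed with ordinary HTTP verbs; a sketch in Python using the requests library:

    import requests

    # Hypothetical endpoint and credentials -- placeholders, not a real provider's API.
    BASE_URL = "https://api.example-cloud.com/v1"
    HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

    # Create a virtual machine by POSTing a JSON description of the desired resource.
    response = requests.post(
        f"{BASE_URL}/instances",
        json={"name": "web-server-1", "size": "small", "region": "eu-west"},
        headers=HEADERS,
    )
    response.raise_for_status()
    instance = response.json()

    # Inspect and delete the same resource with GET and DELETE on its URL.
    requests.get(f"{BASE_URL}/instances/{instance['id']}", headers=HEADERS)
    requests.delete(f"{BASE_URL}/instances/{instance['id']}", headers=HEADERS)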
Cost is claimed to be greatly reduced, and capital expenditure is converted to operational expenditure. This ostensibly lowers barriers to entry, as infrastructure is typically provided by a third party and does not need to be purchased for one-time or infrequent intensive computing tasks. Pricing on a utility-computing basis is fine-grained, with usage-based options, and fewer in-house IT skills are required for implementation.
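A minimal sketch of what fine-grained, usage-based pricing means in practice (the rates and workload figures below are invented for illustration): the bill is a function of resources actually consumed rather than an up-front capital purchase.

    # Hypothetical per-unit rates; real providers publish their own price lists.
    RATE_PER_VCPU_HOUR = 0.04         # currency units per vCPU-hour
    RATE_PER_GB_STORAGE_MONTH = 0.02  # per GB stored per month
    RATE_PER_GB_EGRESS = 0.09         # per GB transferred out

    def monthly_bill(vcpu_hours: float, storage_gb: float, egress_gb: float) -> float:
        """Return the metered, usage-based charge for one billing period."""
        return (vcpu_hours * RATE_PER_VCPU_HOUR
                + storage_gb * RATE_PER_GB_STORAGE_MONTH
                + egress_gb * RATE_PER_GB_EGRESS)

    # A one-off burst job of 300 vCPU-hours costs a few currency units,
    # instead of the capital cost of purchasing a dedicated server.
    print(monthly_bill(vcpu_hours=300, storage_gb=50, egress_gb=20))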
Device and location independence enables users to access systems using a web browser regardless of their location or the device they are using (e.g., PC, mobile). As infrastructure is off-site (typically provided by a third party) and accessed via the Internet, users can connect from anywhere.
Multi-tenancy enables sharing of resources and costs across a large pool of users, thus allowing for the following (see the utilization sketch after this list):
Centralization of infrastructure in locations with lower costs (such as real estate, electricity, etc.)
Peak-load capacity increases (users need not engineer for highest possible load-levels)
Utilization and efficiency improvements for systems that are often only 10–20% utilized.
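The utilization point above can be made concrete with a small simulation (all workload numbers are invented): because tenants rarely peak at the same time, the capacity needed for a shared pool is well below the sum of the per-tenant peaks.

    import random

    random.seed(1)  # reproducible illustration

    # Hypothetical hourly demand (in servers) for 20 tenants over one week.
    tenants = [[random.randint(1, 10) for _ in range(24 * 7)] for _ in range(20)]

    # Dedicated infrastructure: each tenant provisions for its own peak.
    dedicated_capacity = sum(max(demand) for demand in tenants)

    # Multi-tenant pool: provision for the peak of the combined demand.
    combined = [sum(hour) for hour in zip(*tenants)]
    pooled_capacity = max(combined)

    print(f"dedicated: {dedicated_capacity} servers, pooled: {pooled_capacity} servers")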
Reliability is improved if multiple redundant sites are used, which makes well-designed cloud computing suitable for business continuity and disaster recovery. Nonetheless, many major cloud computing services have suffered outages, and IT and business managers can at times do little when they are affected.
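As an illustrative sketch of exploiting redundant sites (the URLs are placeholders and the health-check endpoint is assumed), a client can fail over to an alternative deployment when the primary one is unreachable:

    import urllib.request
    import urllib.error

    # Hypothetical redundant deployments of the same service.
    SITES = [
        "https://eu.service.example.com/status",
        "https://us.service.example.com/status",
    ]

    def fetch_with_failover(urls, timeout=2.0):
        """Try each redundant site in turn; fail only if all of them fail."""
        last_error = None
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.read()
            except (urllib.error.URLError, OSError) as exc:
                last_error = exc  # this site is unreachable; try the next one
        raise RuntimeError("all redundant sites failed") from last_error

    # fetch_with_failover(SITES) returns data from the first reachable site.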
Scalability via dynamic ("on-demand") provisioning of resources on a fine-grained, self-service basis in near real time, without users having to engineer for peak loads. Performance is monitored, and consistent, loosely coupled architectures are constructed using web services as the system interface. One of the most important new methods for overcoming performance bottlenecks for a large class of applications is data-parallel programming on a distributed data grid.
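A hedged sketch of the control loop behind such on-demand provisioning (the target utilization, bounds, and scaling rule below are placeholders, not any specific provider's autoscaler): capacity is added or removed as measured load drifts away from a target.

    def desired_replicas(current_replicas: int, cpu_utilization: float,
                         target: float = 0.6,
                         min_replicas: int = 1, max_replicas: int = 20) -> int:
        """Proportional scaling rule: grow or shrink the replica count with load."""
        wanted = round(current_replicas * cpu_utilization / target)
        return max(min_replicas, min(max_replicas, wanted))

    # Four replicas running at 90% CPU against a 60% target: the loop would
    # ask the provider's API for 6 replicas; at 30% CPU it would ask for 2.
    print(desired_replicas(current_replicas=4, cpu_utilization=0.9))
    print(desired_replicas(current_replicas=4, cpu_utilization=0.3))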
Security could improve due to centralization of data, increased security-focused resources, etc., but concerns can persist about loss of control over certain sensitive data and the lack of security for stored kernels. Security is often as good as or better than under traditional systems, in part because providers are able to devote resources to solving security issues that many customers cannot afford to address. Providers typically log accesses, but accessing the audit logs themselves can be difficult or impossible. Furthermore, the complexity of security is greatly increased when data is distributed over a wider area or across a greater number of devices.
Maintenance of cloud computing applications is easier, since they don't have to be installed on each user's computer. They are easier to support and to improve since the changes reach the clients instantly.
Metering means that cloud computing resource usage should be measurable and should be metered per client and application on a daily, weekly, monthly, and yearly basis.
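A minimal sketch of such metering (the records and field names are invented): raw usage events are rolled up per client and application over the chosen billing period.

    from collections import defaultdict
    from datetime import date

    # Hypothetical raw usage events emitted by the platform.
    events = [
        {"client": "acme",   "app": "crm",  "day": date(2011, 5, 1), "cpu_hours": 12.0},
        {"client": "acme",   "app": "crm",  "day": date(2011, 5, 2), "cpu_hours": 9.5},
        {"client": "acme",   "app": "mail", "day": date(2011, 5, 1), "cpu_hours": 3.0},
        {"client": "globex", "app": "crm",  "day": date(2011, 5, 1), "cpu_hours": 7.25},
    ]

    # Aggregate per (client, application, month); the same roll-up works for
    # daily, weekly, or yearly reports by changing the period key.
    usage = defaultdict(float)
    for event in events:
        period = (event["client"], event["app"], event["day"].strftime("%Y-%m"))
        usage[period] += event["cpu_hours"]

    for (client, app, month), hours in sorted(usage.items()):
        print(f"{client}/{app} {month}: {hours} CPU-hours")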
History of computing hardware
The history of computer hardware encompasses the hardware, its architecture, and its impact on software. The elements of computing hardware have undergone significant improvement over their history. This improvement has triggered worldwide use of the technology; performance has improved and prices have declined.[1] Computers are accessible to ever-increasing sectors of the world's population.[2] Computing hardware has become a platform for uses other than computation, such as automation, communication, control, entertainment, and education. Each field in turn has imposed its own requirements on the hardware, which has evolved in response to those requirements.[3]
The von Neumann architecture unifies our current computing hardware implementations.[4] Since digital computers rely on digital storage and tend to be limited by the size and speed of memory, the history of computer data storage is tied to the development of computers. The major elements of computing hardware implement abstractions: input,[5] output,[6] memory,[7] and processor. A processor is composed of control[8] and datapath.[9] In the von Neumann architecture, control of the datapath is stored in memory, which allowed control to become an automatic process; the datapath could be placed under software control, perhaps in response to events.
Beginning with mechanical datapaths such as the abacus and astrolabe, hardware first used analogs for computation, with water and even air serving as the analog quantities: analog computers have used lengths, pressures, voltages, and currents to represent the results of calculations.[10] Eventually the voltages or currents were standardized, and then digitized. Digital computing elements have ranged from mechanical gears, to electromechanical relays, to vacuum tubes, to transistors, and to integrated circuits, all of which currently implement the von Neumann architecture.