Performance
Bloomberg L.P. Company performance
In 2009, Bloomberg L.P. services accounted for a third of the $16 billion global financial data market. At this time, the company had sold 315,000 terminals worldwide. Moreover, the company brought in nearly $7 billion in annual revenue, with 85 percent coming from terminal sales. In 2010, Bloomberg L.P.’s market share stood at 30.3 percent, compared with 25.1 percent in 2005. In 2011, the company had 15,000 employees in 192 locations around the world.
Advanced Encryption Standard Performance
High speed and low RAM requirements were criteria of the AES selection process. Thus AES performs well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.
Advanced Encryption Standard Performance
On a Pentium Pro, AES encryption requires 18 clock cycles per byte, equivalent to a throughput of about 11 MB/s for a 200 MHz processor. On a 1.7 GHz Pentium M throughput is about 60 MB/s.
Advanced Encryption Standard Performance
On Intel Core i3/i5/i7 CPUs supporting AES-NI instruction set extensions, throughput can be over 700 MB/s per thread.
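As a quick check of these figures, throughput is simply the clock rate divided by the per-byte cost; a tiny Python sketch (the helper name is just illustrative):

```python
# Throughput (bytes/s) = clock rate (cycles/s) / cost (cycles/byte)
def aes_throughput_mb_per_s(clock_hz, cycles_per_byte):
    return clock_hz / cycles_per_byte / 1e6

print(aes_throughput_mb_per_s(200e6, 18))   # ~11.1 MB/s, matching the Pentium Pro figure above
```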
Microsoft Office 2007 PerformancePoint Server 2007
Microsoft PerformancePoint Server allows users to monitor, analyze, and plan their business as well as drive alignment, accountability, and actionable insight across the entire organization. It includes features for scorecards, dashboards, reporting, analytics, budgeting and forecasting, among others.
.NET Framework Performance
The garbage collector, which is integrated into the environment, can introduce unanticipated delays of execution over which the developer has little direct control, and it can cause runtime memory size to be larger than expected. "In large applications, the number of objects that the garbage collector needs to deal with can become very large, which means it can take a very long time to visit and rearrange all of them."
.NET Framework Performance
The .NET Framework currently does not provide support for calling Streaming SIMD Extensions (SSE) via managed code.
Explicit Congestion Notification Effects on performance
Since ECN is only effective in combination with an Active Queue Management (AQM) policy, the benefits of ECN depend on the precise AQM being used. A few observations, however, appear to hold across different AQMs.
Explicit Congestion Notification Effects on performance
As expected, ECN reduces the number of packets dropped by a TCP connection, which, by avoiding a retransmission, reduces latency and especially jitter. This effect is most drastic when the TCP connection has a single outstanding segment, when it is able to avoid an RTO timeout; this is often the case for interactive connections (such as remote logins) and transactional protocols (such as HTTP requests, the conversational phase of SMTP, or SQL requests).
Explicit Congestion Notification Effects on performance
Effects of ECN on bulk throughput are less clear because modern TCP implementations are fairly good at resending dropped segments in a timely manner when the sender's window is large.
Explicit Congestion Notification Effects on performance
Use of ECN has been found to be detrimental to performance on highly congested networks when using AQM algorithms that never drop packets. Modern AQM algorithms avoid this pitfall by dropping rather than marking packets at very high load.
Burroughs large systems Stack speed and performance
Some of the detractors of the B5000 architecture believed that stack architecture was inherently slow compared to register-based architectures.
Burroughs large systems Stack speed and performance
Thus the designers of the current successors to the B5000 systems can optimize using whatever the latest techniques are, and programmers do not have to adjust their code for it to run faster – they do not even need to recompile, thus protecting software investment. Some programs have been known to run for years over many processor upgrades. Such speed-up is limited on register-based machines.
Burroughs large systems Stack speed and performance
Another point for speed as promoted by the RISC designers was that processor speed is considerably faster if everything is on a single chip. It was a valid point in the 1970s, when more complex architectures such as the B5000 required too many transistors to fit on a single chip. However, this is not the case today: every B5000 successor machine now fits on a single chip, along with performance-support features such as caches and instruction pipelines.
Burroughs large systems Stack speed and performance
In fact, the A Series line of B5000 successors included the first single chip mainframe, the Micro-A of the late 1980s. This “mainframe” chip (named SCAMP for Single-Chip A-series Mainframe Processor) sat on an Intel-based plug-in PC board.
Comcast Financial performance
The book value of the company nearly doubled from $8.19 a share in 1999 to $15 a share in 2009. Revenues grew sixfold from 1999’s $6 billion to almost $36 billion in 2009. Net profit margin rose from 4.2% in 1999 to 8.4% in 2009, with operating margins improving 31 percent and return on equity doubling to 6.7 percent in the same time span. Between 1999 and 2009, return on capital nearly tripled to 7 percent.
Comcast Financial performance
Comcast reported first quarter 2012 profit increases of 30% due to an increase in high-speed Internet customers. In addition to a 7% rate increase on cable services in 2012, Comcast anticipates a double-digit rate increase in 2013.
Java (programming language) Performance
Programs written in Java have a reputation for being slower and requiring more memory than those written in C++.
Java (programming language) Performance
Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java in hardware instead of a software Java virtual machine, and ARM based processors can have hardware support for executing Java bytecode through their Jazelle option.
Call stack Performance analysis
Taking samples of the call stack at regular time intervals can be very useful in profiling the performance of programs: if a subroutine's pointer appears in the sampled call-stack data many times, that routine is likely a code bottleneck and should be inspected for performance problems. See Performance analysis and Deep sampling.
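As a rough illustration of this technique (a minimal sketch assuming CPython, with a made-up workload; not a production profiler), the snippet below periodically samples the main thread's call stack and counts how often each function appears:

```python
# Minimal sampling profiler sketch: periodically capture the main thread's call
# stack and count how often each function name appears in the samples.
import collections, sys, threading, time, traceback

samples = collections.Counter()

def sampler(thread_id, interval=0.005, duration=1.0):
    end = time.time() + duration
    while time.time() < end:
        frame = sys._current_frames().get(thread_id)
        if frame is not None:
            for entry in traceback.extract_stack(frame):
                samples[entry.name] += 1        # count every function on the stack
        time.sleep(interval)

def busy_work():                                 # stand-in workload
    return sum(i * i for i in range(2_000_000))

t = threading.Thread(target=sampler, args=(threading.main_thread().ident,))
t.start()
busy_work()
t.join()
print(samples.most_common(5))   # functions appearing most often are likely bottlenecks
```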
Computer performance
Computer performance is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used.
Computer performance
Depending on the context, good computer performance may involve one or more of the following:
Computer performance
High throughput (rate of processing work)
Computer performance
Low utilization of computing resource(s)
Computer performance
High availability of the computing system or application
Computer performance Performance metrics
Computer performance metrics include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speed up. CPU benchmarks are available.
Computer performance Aspect of software quality
Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.
Computer performance Technical and non-technical definitions
The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. In this way, performance can be defined in absolute terms, e.g. for fulfilling a contractual obligation.
Computer performance Technical and non-technical definitions
Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:
Computer performance Technical and non-technical definitions
The word performance in computer performance means the same thing that performance means in other contexts, that is, it means “How well is the computer doing the work it is supposed to do?”
Computer performance Technical performance metrics
There are a wide variety of technical performance metrics that indirectly affect overall computer performance.
Computer performance Technical performance metrics
Because there are too many programs to test a CPU's speed on all of them, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by the Standard Performance Evaluation Corporation (SPEC) and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium (EEMBC).
Computer performance Technical performance metrics
Some important measurements include:
Computer performance Technical performance metrics
Instructions per second – Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).
Computer performance Technical performance metrics
FLOPS – The number of floating-point operations per second is often important in selecting computers for scientific computations.
Computer performance Technical performance metrics
Performance per watt – System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.
Computer performance Technical performance metrics
Some system designers building parallel computers pick CPUs based on the speed per dollar.
Computer performance Technical performance metrics
System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and deterministic response (as in a DSP).
Computer performance Technical performance metrics
Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.
Computer performance Technical performance metrics
Low power – For systems with limited power sources (e.g. solar, batteries, human power).
Computer performance Technical performance metrics
Environmental impact – Minimizing the environmental impact of computers during manufacturing, use, and recycling: reducing waste and reducing hazardous materials (see green computing).
Computer performance Technical performance metrics
Giga-updates per second – a measure of how frequently the RAM can be updated.
Computer performance Technical performance metrics
However, sometimes pushing one technical performance metric to an extreme leads to a CPU with worse overall performance, because other important technical performance metrics were sacrificed to get one impressive-looking number—for example, the megahertz myth.
Computer performance Performance Equation
The total amount of time (t) required to execute a particular benchmark program is t = N × C / f, or equivalently P = I × f / N, where:
P = 1/t is "the performance" in terms of time-to-execute,
N is the number of instructions actually executed (the instruction path length),
f is the clock frequency in cycles per second,
C = 1/I is the average cycles per instruction (CPI) for this benchmark, and
I = 1/C is the average instructions per cycle (IPC) for this benchmark.
Computer performance Performance Equation
Even on one machine, a different compiler or the same compiler with different compiler optimization switches can change N and CPI—the benchmark executes faster if the new compiler can improve N or C without making the other worse, but often there is a trade-off between them—is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark?
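As a rough illustration of that trade-off, the sketch below plugs two hypothetical options into the performance equation above; all figures are invented for illustration:

```python
# t = N * C / f  (performance equation; figures below are purely hypothetical)
f = 2.0e9                      # clock frequency: 2 GHz

t_complex = 50e6 * 4.0 / f     # option A: 50M complex instructions at CPI = 4.0
t_simple  = 150e6 * 1.2 / f    # option B: 150M simple instructions at CPI = 1.2

print(f"complex: {t_complex*1e3:.0f} ms, simple: {t_simple*1e3:.0f} ms")
# Option B executes three times as many instructions yet finishes sooner (90 ms vs 100 ms),
# because its much lower CPI more than compensates for the larger instruction count.
```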
Computer performance Performance Equation
For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speedracer techniques.
Computer monitor Measurements of performance
The performance of a monitor is measured by the following parameters:
Computer monitor Measurements of performance
Luminance is measured in candelas per square meter (cd/m2 also called a Nit).
Computer monitor Measurements of performance
Aspect ratio is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio 4:3, 5:4, 16:10 or 16:9.
Computer monitor Measurements of performance
Viewable image is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable image is typically 1 in (25 mm) smaller than the tube itself (see the geometry sketch after this list).
Computer monitor Measurements of performance
Display resolution is the number of distinct pixels in each dimension that can be displayed. Maximum resolution is limited by dot pitch.
Computer monitor Measurements of performance
Dot pitch is the distance between subpixels of the same color in millimeters. In general, the smaller the dot pitch, the sharper the picture will appear.
Computer monitor Measurements of performance
Refresh rate is the number of times in a second that a display is illuminated. Maximum refresh rate is limited by response time.
Computer monitor Measurements of performance
Response time is the time a pixel in a monitor takes to go from active (white) to inactive (black) and back to active (white) again, measured in milliseconds. Lower numbers mean faster transitions and therefore fewer visible image artifacts.
Computer monitor Measurements of performance
Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing.
Computer monitor Measurements of performance
Power consumption is measured in watts.
Computer monitor Measurements of performance
Delta-E: Color accuracy is measured in delta-E; the lower the delta-E, the more accurate the color representation. A delta-E of below 1 is imperceptible to the human eye. Delta-Es of 2 to 4 are considered good and require a sensitive eye to spot the difference.
Computer monitor Measurements of performance
Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically.
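Several of the parameters above are related geometrically; for example, given a diagonal size and an aspect ratio, the physical width and height follow directly. A small sketch (the 24-inch 16:9 example is arbitrary):

```python
import math

# Width and height from a diagonal measurement and an aspect ratio.
def width_height(diagonal, aspect_w, aspect_h):
    unit = diagonal / math.hypot(aspect_w, aspect_h)   # length of one aspect-ratio unit
    return aspect_w * unit, aspect_h * unit

w, h = width_height(24.0, 16, 9)        # a 24-inch 16:9 monitor
print(f"{w:.1f} in x {h:.1f} in")       # about 20.9 in x 11.8 in
```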
Full text search Performance improvements
The deficiencies of free text searching have been addressed in two ways: By providing users with tools that enable them to express their search questions more precisely, and by developing new search algorithms that improve retrieval precision.
BT Group Financial performance
BT’s financial results have been as follows:
BT Group Financial performance
Year ending | Turnover (£m) | Profit/(loss) before tax (£m) | Net profit/(loss) (£m) | Basic eps (p)
Belarus Performances
The Belarusian government sponsors annual cultural festivals such as the Slavianski Bazaar in Vitebsk, which showcases Belarusian performers, artists, writers, musicians, and actors. Several state holidays, such as Independence Day and Victory Day, draw big crowds and often include displays such as fireworks and military parades, especially in Vitebsk and Minsk. The government’s Ministry of Culture finances events promoting Belarusian arts and culture both inside and outside the country.
Defragmentation User and performance issues
Improvements in modern hard drives such as RAM cache, faster platter rotation speed, command queuing (SCSI TCQ/SATA NCQ), and greater data density reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits.
Defragmentation User and performance issues
When reading data from a conventional electromechanical hard disk drive, the disk controller must first position the head, relatively slowly, to the track where a given fragment resides, and then wait while the disk platter rotates until the fragment reaches the head.
Defragmentation User and performance issues
Since disks based on flash memory have no moving parts, random access of a fragment does not suffer this delay, making defragmentation to optimize access speed unnecessary. Furthermore, since flash memory can be written to only a limited number of times before it fails, defragmentation is actually detrimental (except in the mitigation of catastrophic failure).
Apple DOS Performance improvements and other versions
This was called "blowing a rev" and was a well-understood performance bottleneck in disk systems.
Apple DOS Performance improvements and other versions
When reading and decoding sector 0, then, sector 8 would pass by, so that sector 1, the next sector likely to be needed, would be available without waiting.
Apple DOS Performance improvements and other versions
Unfortunately, the DOS file manager subverted this efficiency by copying bytes read from or written to a file one at a time between the RWTS buffer and main memory, requiring more time and resulting in DOS constantly blowing revs when reading or writing files.
Apple DOS Performance improvements and other versions
This functionality soon appeared in commercial products, such as Pronto-DOS, Diversi-DOS, and David-DOS, along with additional features, but was never used in an official Apple DOS release.
Leadership Performance
To facilitate successful performance it is important to understand and accurately measure leadership performance.
Leadership Performance
For instance, leadership performance may be used to refer to the career success of the individual leader, the performance of the group or organization, or even leader emergence.
Supplier relationship management SRM and supplier performance management
Some confusion may exist over the difference between supplier performance management (SPM) and SRM.
Cloud computing Performance interference and noisy neighbors
This has also led to difficulties in comparing various cloud providers on cost and performance using traditional benchmarks for service and application performance, as the time period and location in which the benchmark is performed can result in widely varied results.
Productivity Production performance
The performance of production measures production's ability to generate income.
Productivity Production performance
When we want to maximize the production performance we have to maximize the income generated by the production function.
Productivity Production performance
The production performance can be measured as a relative or an absolute income. Expressing performance both in relative (rel.) and absolute (abs.) quantities is helpful for understanding the welfare effects of production. For measurement of the relative production performance, we use the known productivity ratio
Productivity Production performance
Real output / Real input.
Productivity Production performance
The absolute income of performance is obtained by subtracting the real input from the real output as follows:
Productivity Production performance
Real income (abs.) = Real output – Real input
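A minimal sketch of the two measures, using invented output and input values:

```python
# Relative vs. absolute production performance (hypothetical figures)
real_output = 120.0    # output valued at base-period prices
real_input = 100.0     # input valued at base-period prices

productivity_ratio = real_output / real_input   # relative measure: 1.20
real_income = real_output - real_input          # absolute measure: 20.0

print(f"productivity = {productivity_ratio:.2f}, real income = {real_income:.1f}")
```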
Productivity Production performance
The growth of the real income is the increase of the economic value which can be distributed between the production stakeholders. With the aid of the production model we can perform the relative and absolute accounting in one calculation. Maximizing production performance requires using the absolute measure, i.e. the real income and its derivatives as a criterion of production performance.
Productivity Production performance
The maximum for production performance is the maximum of the real incomes.
Productivity Production performance
The figure above is a somewhat exaggerated depiction, because the whole production function is shown.
Productivity Production performance
Therefore a correct interpretation of a performance change is obtained only by measuring the real income change.
Salesforce.com Sales Performance Accelerator
Salesforce.com is launching a new product called Sales Performance Accelerator. It combines the CRM with the Work.com performance management application as well as customer lead information from Data.com.
Apache HTTP Server Performance
Where compromises in performance must be made, Apache is designed to reduce latency and increase throughput relative to simply handling more requests, thus ensuring consistent and reliable processing of requests within reasonable time frames.
Apache HTTP Server Performance
This architecture, and the way it was implemented in the Apache 2.4 series, provides performance equivalent to, or slightly better than, that of event-based web servers.
C++11 Core language runtime performance enhancements
These language features primarily exist to provide some kind of performance benefit, either of memory or of computational speed.
Proxy server Performance Enhancing Proxies
A proxy that is designed to mitigate specific link-related issues or degradations. PEPs (Performance Enhancing Proxies) are typically used to improve TCP performance in the presence of high round-trip times (RTTs) and wireless links with high packet loss. They are also frequently used for highly asymmetric links featuring very different upload and download rates.
Dial-up Internet access Performance
Modern dial-up modems typically have a maximum theoretical transfer speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases 40–50 kbit/s is the norm. Factors such as phone line noise as well as the quality of the modem itself play a large part in determining connection speeds.
Dial-up Internet access Performance
Some connections may be as low as 20 kbit/s in extremely “noisy” environments, such as in a hotel room where the phone line is shared with many extensions, or in a rural area, many miles from the phone exchange. Other things such as long loops, loading coils, pair gain, electric fences (usually in rural locations), and digital loop carriers can also cripple connections to 20 kbit/s or lower.
Dial-up Internet access Performance
Dial-up connections usually have latency as high as 300 ms or even more; this is longer than for many forms of broadband, such as cable or DSL, but typically less than satellite connections. Longer latency can make online gaming or video conferencing difficult, if not impossible.
Dial-up Internet access Performance
Many modern video games do not even include the option to use dial-up. However, some games such as Everquest, Red Faction, Warcraft 3, Final Fantasy XI, Phantasy Star Online, Guild Wars, Unreal Tournament, Halo: Combat Evolved, Audition, Quake 3: Arena, and Ragnarok Online, are capable of running on 56k dial-up.
Dial-up Internet access Performance
An increasing amount of Internet content such as streaming media will not work at dial-up speeds.
Dial-up Internet access Performance
Analog telephone lines are digitally switched and transported inside a Digital Signal 0 once reaching the telephone company’s equipment. Digital Signal 0 is 64 kbit/s; therefore a 56 kbit/s connection is the highest that will ever be possible with analog phone lines.
Honeywell Performance Materials and Technologies
Andreas Kramvis is the current President and CEO of the Performance Materials and Technologies division.
General Electric Performance evaluations
In performance evaluations, GE executives focus on one’s ability to balance risk and return and deliver long-term results for shareowners.
Database Performance, security, and availability
Because of the critical importance of database technology to the smooth running of an enterprise, database systems include complex mechanisms to deliver the required performance, security, and availability, and allow database administrators to control the use of these features.
Control chart Performance of control charts
When a point falls outside of the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred.
Control chart Performance of control charts
It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits.
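That 0.27% figure follows directly from the normal distribution; a quick check in Python:

```python
from statistics import NormalDist

# Probability that an in-control (normally distributed) point falls outside +/- 3 sigma.
p_false_alarm = 2 * (1 - NormalDist().cdf(3))
print(f"{p_false_alarm:.4%}")   # about 0.2700%
```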
Control chart Performance of control charts
Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.
Control chart Performance of control charts
It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases.
Control chart Performance of control charts
Most control charts work best for numeric data with Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g. high-dimensional data, a mix of numerical and categorical variables, missing values, non-Gaussian distributions, or non-linear relationships.
Audit – Performance audits
Safety, security, information systems performance, and environmental concerns are increasingly the subject of audits. There are now audit professionals who specialize in security audits and information systems audits. With nonprofit organizations and government agencies, there has been an increasing need for performance audits, examining their success in satisfying mission objectives.
Collaborative method – Performance analysis
Working group: a group where no performance need or opportunity exists that requires a team. Members interact to share information but have specific areas of responsibility and little mutual accountability.
Collaborative method – Performance analysis
Pseudo-team: a group where there could be an existing performance need or opportunity that requires a team but there has not been a focus on collective performance. Interactions between members detract from each individual’s contribution.
Collaborative method – Performance analysis
Potential team: a group where a significant performance need exists and attempts are being made to improve performance. This group typically requires more clarity about purpose, goals or outcomes and needs more discipline.
Collaborative method – Performance analysis
Real team: a group with complementary skills and equal commitment whose members are mutually accountable.
Collaborative method – Performance analysis
Extraordinary team: a real team that also has a deep commitment for one another’s personal growth and success.
Computer architecture – Performance
Modern computer performance is often described in MIPS per MHz (millions of instructions per millions of cycles of clock speed)
Computer architecture – Performance
Counting machine language instructions would be misleading because they can do varying amounts of work in different ISAs. The “instruction” in the standard measurements is not a count of the ISA’s actual machine language instructions, but a historical unit of measurement, usually based on the speed of the VAX computer architecture.
Computer architecture – Performance
Historically, many people measured a computer’s speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. As a result manufacturers have moved away from clock speed as a measure of performance.
Computer architecture – Performance
Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.
Computer architecture – Performance
In a typical home computer, the simplest, most reliable way to speed performance is usually to add random access memory (RAM). More RAM increases the likelihood that needed data or a program is in RAM—so the system is less likely to need to move memory data from the disk. The disk is often ten thousand times slower than RAM because it has mechanical parts that must move to access its data.
Computer architecture – Performance
There are two main types of speed, latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data).
Computer architecture – Performance
Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed.
Computer architecture – Performance
The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a web-serving application) or memory bound (as in video editing). Power consumption has also become important in servers and portable devices like laptops.
Computer architecture – Performance
Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs.
Central processing unit – Performance
Because of these problems, various standardized tests, often called "benchmarks" for this purpose, such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications.
Central processing unit – Performance
In practice, however, the performance gain is far less, only about 50%, due to imperfect software algorithms and implementation.
Interrupt – Performance issues
Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rates unless care is taken to prevent several pathologies.
Free and open-source graphics device driver – Performance Comparison
A widely known source for performance information is the free3d.org site, which collects 3D performance information—specifically glxgears frame rates—submitted by users. On the basis of what it concedes is an inadequate benchmark, the site currently lists ATI’s Radeon HD 4670 as recommended for “best 3D performance.” Additionally, Phoronix routinely runs benchmarks comparing free driver performance.
Free and open-source graphics device driver – Performance Comparison
A comparison from April 29, 2013 between the FOSS and the proprietary drivers on both AMD and Nvidia hardware was published by Phoronix.
Computer data storage – Performance
Latency: the time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency, and in the case of sequential access storage, minimum, maximum and average latency.
Computer data storage – Performance
Throughput: the rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Accessing media sequentially, as opposed to randomly, also typically yields maximum throughput.
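As a rough illustration of measuring sequential throughput, the sketch below writes and reads back a scratch file; results depend heavily on operating-system caching and on the medium, so treat the numbers only as indicative:

```python
import os, tempfile, time

CHUNK = 1024 * 1024                      # 1 MB per write
N_CHUNKS = 256                           # 256 MB total
data = os.urandom(CHUNK)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
    start = time.perf_counter()
    for _ in range(N_CHUNKS):
        f.write(data)
    f.flush()
    os.fsync(f.fileno())                 # force data to the device, not just the cache
    write_time = time.perf_counter() - start

start = time.perf_counter()
with open(path, "rb") as f:
    while f.read(CHUNK):                 # sequential read back
        pass
read_time = time.perf_counter() - start
os.remove(path)

total_mb = N_CHUNKS * CHUNK / 1e6
print(f"write: {total_mb / write_time:.0f} MB/s, read: {total_mb / read_time:.0f} MB/s")
```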
Computer data storage – Performance
The size of the largest “chunk” of data that can be efficiently accessed as a single unit, e.g. without introducing more latency.
Computer data storage – Performance
The probability of spontaneous bit value change under various conditions, or the overall failure rate.
Criticism of Linux – Kernel performance
At LinuxCon 2009, Linux creator Linus Torvalds said that the Linux kernel has become “bloated and huge”:
Criticism of Linux – Kernel performance
We’re getting bloated and huge. Yes, it’s a problem … Uh, I’d love to say we have a plan … I mean, sometimes it’s a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago … The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.
Intel 8086 – Performance
Combined with orthogonalization of operations versus operand types and addressing modes, as well as other enhancements, this made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster.
Intel 8086 – Performance
Execution times for typical instructions, in clock cycles (table columns: instruction | register-register | register-immediate | register-memory | memory-register | memory-immediate):
jump: register => 11; label => 15; condition,label => 16
integer multiply: 70~160 (depending on operand data as well as size), including any EA
integer divide: 80~190 (depending on operand data as well as size), including any EA
EA = time to compute the effective address, ranging from 5 to 12 cycles.
Timings are best case, depending on prefetch status, instruction alignment, and other factors.
Intel 8086 – Performance
As can be seen from these timings, operations on registers and immediates were fast (between 2 and 4 cycles), while memory-operand instructions and jumps were quite slow; jumps took more cycles than on the simple 8080 and 8085, and the 8088 (used in the IBM PC) was additionally hampered by its narrower bus. The reasons why most memory-related instructions were slow were threefold:
Intel 8086 – Performance
Loosely coupled fetch and execution units are efficient for instruction prefetch, but not for jumps and random data access (without special measures).
Intel 8086 – Performance
No dedicated address calculation adder was afforded; the microcode routines had to use the main ALU for this (although there was a dedicated segment + offset adder).
Intel 8086 – Performance
The address and data buses were multiplexed, forcing a slightly longer (33~50%) bus cycle than in typical contemporary 8-bit processors.
Intel 8086 – Performance
However, memory access performance was drastically enhanced with Intel’s next generation chips. The 80186 and 80286 both had dedicated address calculation hardware, saving many cycles, and the 80286 also had separate (non-multiplexed) address and data buses.
Linux desktop environments – Performance
The performance of Linux on the desktop has been a controversial topic, with at least one Linux kernel developer, Con Kolivas, accusing the Linux community of favouring performance on servers. He quit Linux development because he was frustrated with this lack of focus on the desktop, and then gave a ‘tell all’ interview on the topic.
Linux desktop environments – Performance
Other sources, such as the mainstream press publication The Economist, disagree with the assessment that there has not been enough focus on desktop Linux, saying in December 2007:
Linux desktop environments – Performance
…Linux has swiftly become popular in small businesses and the home…That’s largely the doing of Gutsy Gibbon, the code-name for the Ubuntu 7.10 from Canonical. Along with distributions such as Linspire, Mint, Xandros, OpenSUSE and gOS, Ubuntu (and its siblings Kubuntu, Edubuntu and Xubuntu) has smoothed most of Linux’s geeky edges while polishing it for the desktop…It’s now simpler to set up and configure than Windows.
Microkernel – Performance
On most mainstream processors, obtaining a service is inherently more expensive in a microkernel-based system than in a monolithic system.
Microkernel – Performance
L4’s IPC performance is still unbeaten across a range of architectures.
Microkernel – Performance
While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative for second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can be built with good performance.
Microkernel – Performance
An attempt to build a high-performance multiserver operating system was the IBM Sawmill Linux project.
Microkernel – Performance
It has been shown in the meantime that user-level device drivers can come close to the performance of in-kernel drivers even for such high-throughput, high-interrupt devices as Gigabit Ethernet. This seems to imply that high-performance multi-server systems are possible.
MTV Video Music Award – Performances
1984 Rod Stewart, Madonna, Huey Lewis and the News, David Bowie, Tina Turner, ZZ Top, Ray Parker, Jr.
MTV Video Music Award – Performances
1985 Eurythmics, David Ruffin & Eddie Kendrick & Hall & Oates, Tears for Fears, John Cougar Mellencamp, Pat Benatar, Sting, Eddie Murphy
MTV Video Music Award – Performances
1986 Robert Palmer, The Hooters, The Monkees, ‘Til Tuesday, INXS, Van Halen, Mr. Mister, Simply Red, Whitney Houston, Pet Shop Boys, Tina Turner, Genesis
MTV Video Music Award – Performances
1987 Los Lobos, Bryan Adams, The Bangles, Bon Jovi, Crowded House, Madonna, Whitesnake, Whitney Houston, The Cars, David Bowie, Prince, Cyndi Lauper, Run-D.M.C. (feat. Steven Tyler & Joe Perry)
MTV Video Music Award – Performances
1988 Rod Stewart, Jody Watley, Aerosmith, Elton John, Depeche Mode, Crowded House, Michael Jackson, Cher, The Fat Boys (feat. Chubby Checker), Guns N’ Roses, INXS
MTV Video Music Award – Performances
1989 Madonna, Bobby Brown, Def Leppard, Tone-Loc, The Cult, Paula Abdul, Jon Bon Jovi & Richie Sambora, The Cure, Cher, The Rolling Stones, Axl Rose & Tom Petty and the Heartbreakers
MTV Video Music Award – Performances
1991 Van Halen, C+C Music Factory, Poison, Mariah Carey, EMF, Paula Abdul, Queensrÿche, LL Cool J, Metallica, Don Henley, Guns N’ Roses, Prince and The New Power Generation
MTV Video Music Award – Performances
1992 The Black Crowes, Bobby Brown, U2 & Dana Carvey, Def Leppard, Nirvana, Elton John, Pearl Jam, Red Hot Chili Peppers, Michael Jackson, Bryan Adams, En Vogue, Eric Clapton, Guns N’ Roses & Elton John
MTV Video Music Award – Performances
1993 Madonna, Lenny Kravitz (feat. John Paul Jones), Sting, Soul Asylum & Peter Buck & Victoria Williams, Aerosmith, Naughty By Nature, R.E.M., Spin Doctors, Pearl Jam, The Edge, Janet Jackson
MTV Video Music Award – Performances
1994 Aerosmith, Boyz II Men, The Smashing Pumpkins, The Rolling Stones, Green Day, Beastie Boys, Alexandrov Red Army Ensemble & Leningrad Cowboys, Salt-n-Pepa, Tom Petty and the Heartbreakers, Snoop Doggy Dogg, Stone Temple Pilots, Bruce Springsteen
MTV Video Music Award – Performances
1996 The Smashing Pumpkins, The Fugees (feat. Nas), Metallica, LL Cool J, Neil Young, Hootie & the Blowfish, Alanis Morissette, Bush, The Cranberries, Oasis, Bone Thugs-N-Harmony, Kiss
MTV Video Music Award – Performances
1997 Puff Daddy (feat. Faith Evans, 112, Mase & Sting), Jewel, The Prodigy, The Wallflowers (feat. Bruce Springsteen), Lil’ Kim & Da Brat & Missy Elliott & Lisa “Left-Eye” Lopes & Angie Martinez, U2, Beck, Spice Girls, Jamiroquai, Marilyn Manson
MTV Video Music Award – Performances
1998 Madonna, Pras (feat. Ol’ Dirty Bastard, Mýa, Wyclef Jean & Canibus), Hole, Master P (feat. Silkk Tha Shocker, Mystikal & Mia X), Backstreet Boys, Beastie Boys, Brandy & Monica, Dave Matthews Band, Marilyn Manson, Brian Setzer Orchestra
MTV Video Music Award – Performances
1999 Kid Rock (feat. Run-DMC, Steven Tyler, Joe Perry & Joe C.), Lauryn Hill, Backstreet Boys, Ricky Martin, Nine Inch Nails, TLC, Fatboy Slim, Jay-Z (feat. DJ Clue & Amil), Britney Spears & ‘N Sync, Eminem & Dr. Dre & Snoop Dogg
MTV Video Music Award – Performances
2000 Janet Jackson, Rage Against the Machine, Sisqo (feat. Dru Hill), Britney Spears, Eminem, Red Hot Chili Peppers, ‘N Sync, Nelly, Christina Aguilera (feat. Fred Durst), Blink-182
MTV Video Music Award – Performances
2001 Jennifer Lopez (feat. Ja Rule), Linkin Park & The X-Ecutioners, Alicia Keys, ‘N Sync (feat. Michael Jackson), Daphne Aguilera, Jay-Z, Staind, Missy Elliott (feat. Nelly Furtado, Ludacris & Trina), U2, Britney Spears
MTV Video Music Award – Performances
2002 Bruce Springsteen & the E Street Band, Pink, Ja Rule & Ashanti & Nas, Shakira, Eminem, P. Diddy (feat. Busta Rhymes, Ginuwine, Pharrell & Usher), Sheryl Crow, The Hives, The Vines, Justin Timberlake (feat. Clipse), Guns N’ Roses
MTV Video Music Award – Performances
2003 Madonna (feat. Britney Spears, Christina Aguilera & Missy Elliott), Good Charlotte, Christina Aguilera (feat. Redman & Dave Navarro), 50 Cent (feat. Snoop Dogg), Mary J. Blige (feat. Method Man & 50 Cent), Coldplay, Beyoncé (feat. Jay-Z), Metallica
MTV Video Music Award – Performances
2004 Usher, Jet, Hoobastank, Yellowcard, Kanye West (feat. Chaka Khan & Syleena Johnson), Lil Jon & The East Side Boyz, Ying Yang Twins, Petey Pablo, Terror Squad (feat. Fat Joe), Jessica Simpson, Nelly (feat. Christina Aguilera), Alicia Keys (feat. Lenny Kravitz & Stevie Wonder), The Polyphonic Spree, OutKast
MTV Video Music Award – Performances
2005 Green Day, Ludacris (feat. Bobby Valentino), MC Hammer, Shakira (feat. Alejandro Sanz), R. Kelly, The Killers, P. Diddy & Snoop Dogg, Don Omar, Tego Calderón, Daddy Yankee, Coldplay, Kanye West (feat. Jamie Foxx), Mariah Carey (feat. Jadakiss & Jermaine Dupri), 50 Cent (feat. Mobb Deep & Tony Yayo), My Chemical Romance, Kelly Clarkson
MTV Video Music Award – Performances
2006 Justin Timberlake (feat. Timbaland), The Raconteurs, Shakira & Wyclef Jean, Ludacris (feat. Pharrell & Pussycat Dolls), OK Go, The All-American Rejects, Beyoncé, T.I. (feat. Young Dro), Panic! at the Disco, Busta Rhymes, Missy Elliott, Christina Aguilera, Tenacious D, The Killers
MTV Video Music Award – Performances
2007 Britney Spears, Chris Brown (feat. Rihanna), Linkin Park, Alicia Keys, Timbaland (feat. Nelly Furtado, Sebastian, Keri Hilson & Justin Timberlake)
MTV Video Music Award – Performances
2008 Rihanna, Jonas Brothers, Lil Wayne (feat. Leona Lewis & T-Pain), Paramore, Pink, T.I. (feat. Rihanna), Christina Aguilera, Kanye West, Katy Perry, Kid Rock (feat. Lil Wayne), The Ting Tings, LL Cool J, Lupe Fiasco
MTV Video Music Award – Performances
2009 Janet Jackson & This Is It back-up dancers, Katy Perry & Joe Perry, Taylor Swift, Lady Gaga, Green Day, Beyoncé, Muse, Pink, Jay-Z & Alicia Keys
MTV Video Music Award – Performances
2010 Eminem (feat. Rihanna), Justin Bieber, Usher, Florence and the Machine, Taylor Swift, Drake (feat. Mary J. Blige & Swizz Beatz), B.o.B & Paramore (feat. Bruno Mars), Linkin Park, Kanye West
MTV Video Music Award – Performances
2011 Lady Gaga (feat. Brian May), Jay-Z & Kanye West, Pitbull (feat. Ne-Yo & Nayer), Adele, Chris Brown, Beyoncé, Young the Giant, Bruno Mars, Lil Wayne
MTV Video Music Award – Performances
2013 Lady Gaga, Miley Cyrus & Robin Thicke & 2 Chainz & Kendrick Lamar, Kanye West, Justin Timberlake & ‘N Sync, Macklemore & Ryan Lewis (feat. Mary Lambert & Jennifer Hudson), Drake, Bruno Mars, Katy Perry
DEC Alpha – Performance
Perhaps the most obvious trend is that while Intel could always get reasonably close to Alpha in integer performance, in floating-point performance the difference was considerable.
DEC Alpha – Performance
SPEC benchmark performance comparison (SPECint95 and SPECfp95 results):
System | CPU | MHz | integer | floating point
1995: Intel Alder System (200 MHz, 256 KB L2) | Pentium Pro | 200 | 8.9 | 6.75
2000: Intel VC820 motherboard | Pentium III | 1000 | 46.8 | 31.9
Emacs – Performance
Modern computers are powerful enough to run GNU Emacs very quickly, although its performance still lags when handling large files on 32-bit systems.
Mach (kernel) – Performance problems
Mach was originally intended to be a replacement for classical monolithic UNIX, and for this reason contained many UNIX-like ideas.
Mach (kernel) – Performance problems
Some of Mach's more esoteric features were also based on this same IPC mechanism.
Mach (kernel) – Performance problems
Unfortunately, the use of IPC for almost all tasks turned out to have serious performance impact. Benchmarks on 1997 hardware showed that Mach 3.0-based UNIX single-server implementations were about 50% slower than native UNIX.
Mach (kernel) – Performance problems
Studies showed the vast majority of this performance hit, 73% by one measure, was due to the overhead of the IPC. And this was measured on a system with a single large server providing the operating system; breaking the operating system down further into smaller servers would only make the problem worse. It appeared the goal of a collection-of-servers was simply not possible.
Mach (kernel) – Performance problems
Many attempts were made to improve the performance of Mach and Mach-like microkernels, but by the mid-1990s much of the early intense interest had died. The concept of an operating system based on IPC appeared to be dead, the idea itself flawed.
Mach (kernel) – Performance problems
In fact, further study of the exact nature of the performance problems turned up a number of interesting facts.
Mach (kernel) – Performance problems
When Mach 3 attempted to move most of the operating system into user-space, the overhead became higher still: benchmarks between Mach and Ultrix on a MIPS R3000 showed a performance hit as great as 67% on some workloads.
Mach (kernel) – Performance problems
For example, getting the system time involves an IPC call to the user-space server maintaining the system clock.
Mach (kernel) – Performance problems
Instead, they had to use a single one-size-fits-all solution that added to the performance problems.
Mach (kernel) – Performance problems
Other performance problems were related to Mach’s support for multiprocessor systems. From the mid-1980s to the early 1990s, commodity CPUs grew in performance at a rate of about 60% a year, but the speed of memory access grew at only 7% a year. This meant that the cost of accessing memory grew tremendously over this period, and since Mach was based on mapping memory around between programs, any “cache miss” made IPC calls slow.
Mach (kernel) – Performance problems
Regardless of the advantages of the Mach approach, these sorts of real-world performance hits were simply not acceptable. As other teams found the same sorts of results, the early Mach enthusiasm quickly disappeared. After a short time many in the development community seemed to conclude that the entire concept of using IPC as the basis of an operating system was inherently flawed.
Kernel (computing) – Performance
Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system. Some developers also maintain that monolithic systems are extremely efficient if well-written. The monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.
Kernel (computing) – Performance
Studies that empirically measured the performance of these microkernels did not analyze the reasons for such inefficiency.
Kernel (computing) – Performance
In fact, as suggested in 1995, the reasons for the poor performance of microkernels might have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, and (3) the particular implementation of those concepts. Therefore, it remained to be studied whether the solution for building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.
Kernel (computing) – Performance
On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel has a significant performance drawback each time there is an interaction between different levels of protection (i.e. when a process has to manipulate a data structure both in 'user mode' and in 'supervisor mode'), since this requires message copying by value.
Kernel (computing) – Performance
By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently, newer microkernels optimized for performance, such as L4 and K42, have addressed these problems.
Extract, transform, load – Performance
ETL vendors benchmark their record-systems at multiple TB (terabytes) per hour (or ~1 GB per second) using powerful servers with multiple CPUs, multiple Hard Drives, multiple gigabit-network connections, and lots of memory. The fastest ETL record is currently held by Syncsort, Vertica and HP at 5.4TB in under an hour, which is more than twice as fast as the earlier record held by Microsoft and Unisys.
Extract, transform, load – Performance
In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:
Extract, transform, load – Performance
Direct Path Extract method or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract
Extract, transform, load – Performance
most of the transformation processing outside of the database
Extract, transform, load – Performance
bulk load operations whenever possible.
Extract, transform, load – Performance
Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:
Extract, transform, load – Performance
Partition tables (and indices). Try to keep partitions similar in size (watch for null values that can skew the partitioning).
Extract, transform, load – Performance
Do all validation in the ETL layer before the load. Disable integrity checking (disable constraint …) in the target database tables during the load.
Extract, transform, load – Performance
Disable triggers (disable trigger …) in the target database tables during the load, and simulate their effect as a separate step (see the sketch after this list of methods).
Extract, transform, load – Performance
Use parallel bulk loading when possible; it works well when the table is partitioned or there are no indices. Note: attempting to do parallel loads into the same table (partition) usually causes locks – if not on the data rows, then on the indices.
Extract, transform, load – Performance
If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately. You often can do bulk load for inserts, but updates and deletes commonly go through an API (using SQL).
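The disable-checks-during-load pattern from the list above might look roughly like the following sketch; the statement syntax is Oracle-style, the connection object and the load callable are hypothetical placeholders, and the exact statements differ between databases:

```python
# Sketch only: Oracle-style statements; `conn` and `run_bulk_load` are hypothetical
# placeholders, not a specific vendor API.
def bulk_load_with_checks_disabled(conn, table, constraint, run_bulk_load):
    cur = conn.cursor()
    cur.execute(f"ALTER TABLE {table} DISABLE CONSTRAINT {constraint}")
    cur.execute(f"ALTER TABLE {table} DISABLE ALL TRIGGERS")
    try:
        run_bulk_load(cur)               # the actual bulk/direct-path load step
    finally:
        # Re-enable checks once the load is complete, even if it failed.
        cur.execute(f"ALTER TABLE {table} ENABLE ALL TRIGGERS")
        cur.execute(f"ALTER TABLE {table} ENABLE CONSTRAINT {constraint}")
    conn.commit()
```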
Extract, transform, load – Performance
Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using distinct may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using distinct significantly (e.g. by a factor of 100) decreases the number of rows to be extracted, then it makes sense to remove duplications as early as possible in the database before unloading data.
Extract, transform, load – Performance
A common source of problems in ETL is a big number of dependencies among ETL jobs. For example, job "B" cannot start while job "A" is not finished. One can usually achieve better performance by visualizing all processes on a graph, trying to reduce the graph by making maximum use of parallelism, and making "chains" of consecutive processing as short as possible. Again, partitioning of big tables and of their indices can really help.
Extract, transform, load – Performance
Another common issue occurs when the data is spread between several databases, and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases, and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers.
Extract, transform, load – Performance
This allows processing to take maximum advantage of parallel processing. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into 1st – and then replicating into the 2nd).
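A minimal sketch of that idea, with a stand-in load function (the database names and the one-second delay are purely illustrative):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def load_into(db_name, rows):
    time.sleep(1.0)                       # stand-in for the real bulk-load step
    return f"{db_name}: {len(rows)} rows loaded"

rows = [{"id": i} for i in range(1000)]

# Load both targets concurrently instead of loading one and replicating to the other.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(load_into, db, rows) for db in ("warehouse_a", "warehouse_b")]
    for fut in futures:
        print(fut.result())
```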
Extract, transform, load – Performance
Sometimes processing must take place sequentially. For example, dimensional (reference) data is needed before one can get and validate the rows for main “fact” tables.
Surrogate key – Performance
Surrogate keys tend to be a compact data type, such as a four-byte integer. This allows the database to query the single key column faster than it could multiple columns. Furthermore a non-redundant distribution of keys causes the resulting b-tree index to be completely balanced. Surrogate keys are also less expensive to join (fewer columns to compare) than compound keys.
Design by contract – Performance implications
Contract conditions should never be violated during execution of a bug-free program. Contracts are therefore typically only checked in debug mode during software development. Later at release, the contract checks are disabled to maximize performance.
Design by contract – Performance implications
In many programming languages, contracts are implemented with assert. Asserts are by default compiled away in release mode in C/C++, and similarly deactivated in C#/Java. This effectively eliminates the run-time costs of contracts in release.
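A small Python analogue of the same pattern; Python's assert statements are likewise stripped when the interpreter runs with the -O flag:

```python
# Precondition/postcondition checks via assert; running `python -O script.py`
# removes them, mirroring the release-mode behaviour described above.
def withdraw(balance, amount):
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: cannot overdraw"
    new_balance = balance - amount
    assert new_balance >= 0, "postcondition: balance stays non-negative"
    return new_balance

print(withdraw(100, 30))   # 70
```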
Performance
Performance measurement is the process of collecting, analyzing and/or reporting information regarding the performance of an individual, group, organization, system or component.
Performance
The means of expressing appreciation can vary by culture. Chinese performers will clap with audience at the end of a performance; the return applause signals “thank you” to the audience. In Japan, folk performing arts performances commonly attract individuals who take photographs, sometimes getting up to the stage and within inches of performer’s faces.
Performance
Sometimes the dividing line between performer and the audience may become blurred, as in the example of “participatory theatre” where audience members get involved in the production.
Performance
Theatrical performances can take place daily or at some other regular interval. Performances can take place at designated performance spaces (such as a theatre or concert hall), or in a non-conventional space, such as a subway station, on the street, or in someone’s home.
Performance – Performance genres
Examples of performance genres include:
Performance – Performance genres
Music performance (a concert or a recital) may take place indoors in a concert hall or outdoors in a field, and may require the audience to remain very quiet, or encourage them to sing and dance along with the music.
Performance – Performance genres
A performance may also describe the way in which an actor performs. In a solo capacity, it may also refer to a mime artist, comedian, conjurer, or other entertainer.
Performance – Live performance event support overview
Live performance events have a long history of using visual scenery, lighting, and costume, and a shorter history of using visual projection and sound amplification and reinforcement.
Performance – Bibliography
Espartaco Carlos, Eduardo Sanguinetti: The Experience of Limits (Ediciones de Arte Gaglianone, first published 1989). ISBN 950-9004-98-7.
Performance – Bibliography
Philip V. Bohlman, Marcello Sorce Keller, and Loris Azzaroni (eds.), Musical Anthropology of the Mediterranean: Interpretation, Performance, Identity, Bologna, Edizioni Clueb – Cooperativa Libraria Universitaria Editrice, 2009.
Fast Infoset – Performance
Because Fast Infosets are compressed as part of the XML generation process, they are much faster than using Zip-style compression algorithms on an XML stream, although they can produce slightly larger files.
Fast Infoset – Performance
SAX-type parsing performance of Fast Infoset is also much faster than parsing performance of XML 1.0, even without any Zip-style compression. Typical increases in parsing speed observed for the reference Java implementation are a factor of 10 compared to Java Xerces, and a factor of 4 compared to the Piccolo driver (one of the fastest Java-based XML parsers).
Flashlight – Performance standards
The former United States Army standard MIL-F-3747E described the performance standard for plastic flashlights using two or three D cell dry batteries, in either straight or angle form, and in standard, explosion-proof, heat-resistant, traffic-direction, and inspection types. The standard covered only incandescent lamp flashlights and was withdrawn in 1996.
Flashlight – Performance standards
In the United States, ANSI published the FL1 flashlight basic performance standard in 2009.
Flashlight – Performance standards
The FL1 standard requires measurements reported on the packaging to be made with the type of batteries packaged with the flashlight, or with an identified type of battery
Flashlight – Performance standards
The working (beam) distance is defined as the distance at which the maximum light falling on a surface (illuminance) falls to 0.25 lux.
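A short worked sketch of how that 0.25 lux figure turns into a rated distance, assuming the usual inverse-square falloff of illuminance with distance; the 10,000 cd peak intensity below is an illustrative value, not taken from the standard.

```java
// Sketch of the inverse-square relationship behind the FL1 working
// ("beam") distance: illuminance E (lux) at distance d (metres) from a
// source of intensity I (candela) is E = I / d^2, so the distance at
// which E falls to 0.25 lux is d = sqrt(I / 0.25) = 2 * sqrt(I).
public class BeamDistance {
    static double beamDistanceMetres(double peakCandela) {
        return Math.sqrt(peakCandela / 0.25);
    }

    public static void main(String[] args) {
        double candela = 10_000; // illustrative peak intensity
        System.out.printf("Peak intensity %.0f cd -> beam distance %.0f m%n",
                candela, beamDistanceMetres(candela)); // about 200 m
    }
}
```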
Flashlight – Performance standards
Run time is measured using the supplied or specified batteries and letting the light run until the intensity of the beam has dropped to 10% of the value 30 seconds after switching on
Flashlight – Performance standards
Impact resistance is measured by dropping the flashlight in six different orientations and observing that it still functions and has no large cracks or breaks in it; the height used in the test is reported
Flashlight – Performance standards
The consumer must decide how well the ANSI test conditions match their requirements, but all flashlights tested to the FL1 standard can be compared on a uniform basis.
Flashlight – Performance standards
ANSI standard FL1 does not specify measurements of the beam width angle but the candela intensity and total lumen ratings can be used by the consumer to assess the beam characteristics
Emirates (airline) – Financial and operational performance
In the financial year 2011–12, Emirates generated revenues of around AED 62 billion, which represented an increase of approximately 15% over the previous year’s revenues of AED 54 billion
Emirates (airline) – Financial and operational performance
As of March 2012, Emirates did not use fuel price hedging. Fuel was 45% of total costs, and may come to $1.7 billion in the year ending 31 March 2012.
Emirates (airline) – Financial and operational performance
In November 2013, Emirates announced its half-year profits, showing a good performance despite high fuel prices and global economic pressure. For the first six months of the fiscal year the revenues reached AED 42.3 billion, an increase of 13% from 2012.
Emirates (airline) – Financial and operational performance
The airline was the seventh-largest airline in the world in terms of international passengers carried, and the largest in the world in terms of scheduled international passenger-kilometers flown. It is also the seventh-largest in terms of scheduled freight tonne-kilometres flown (sixth in scheduled international freight tonne-kilometres flown).
Emirates (airline) – Financial and operational performance
[Table headings only: Year ended; Passengers flown (thousand); Cargo carried (thousand); Turnover (AEDm); Expenditure (AEDm); Net profit(+)/loss(-) (AEDm)]
Online advertising – Other performance-based compensation
CPA (Cost Per Action or Cost Per Acquisition) or PPP (Pay Per Performance) advertising means the advertiser pays for the number of users who perform a desired activity, such as completing a purchase or filling out a registration form. Performance-based compensation can also incorporate revenue sharing, where publishers earn a percentage of the advertiser’s profits made as a result of the ad. Performance-based compensation shifts the risk of failed advertising onto publishers.
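A minimal sketch of the arithmetic behind these two compensation models; all rates and counts below are illustrative assumptions, not industry figures.

```java
// Minimal sketch of the two compensation models described above; the
// rates and figures are illustrative assumptions, not industry data.
public class AdCompensation {
    public static void main(String[] args) {
        // Cost-per-action: the advertiser pays only for completed actions.
        int completedActions = 120;      // e.g. sign-ups attributed to the ad
        double costPerAction = 8.50;     // agreed CPA rate in dollars
        double cpaCost = completedActions * costPerAction;

        // Revenue sharing: the publisher earns a percentage of attributed profit.
        double attributedProfit = 5_000.00;
        double publisherShare = 0.15;    // 15% revenue share
        double revShareEarnings = attributedProfit * publisherShare;

        System.out.printf("CPA spend: $%.2f%n", cpaCost);                   // $1020.00
        System.out.printf("Publisher rev-share: $%.2f%n", revShareEarnings); // $750.00
    }
}
```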
Heat sink – Methods to determine performance
The thermal performance of a heat sink can be determined by theoretical (analytical), experimental, or numerical methods.
Storage virtualization – Performance and scalability
In some implementations the performance of the physical storage can actually be improved, mainly due to caching
Storage virtualization – Performance and scalability
Due to the nature of virtualization, the mapping of logical to physical requires some processing power and lookup tables. Therefore, every implementation will add some small amount of latency.
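A toy Java sketch of the logical-to-physical lookup described above; the class and field names are hypothetical, and a real implementation would use far more compact structures than a HashMap, but the extra table lookup on every I/O is the source of the added latency.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the logical-to-physical mapping step described above: each
// virtualized I/O first consults a lookup table (here a HashMap) to find
// the backing device and physical block before it can reach the disk.
public class VirtualLun {
    record PhysicalExtent(String device, long physicalBlock) {}

    private final Map<Long, PhysicalExtent> mapping = new HashMap<>();

    void map(long logicalBlock, String device, long physicalBlock) {
        mapping.put(logicalBlock, new PhysicalExtent(device, physicalBlock));
    }

    PhysicalExtent resolve(long logicalBlock) {
        // Extra lookup performed on every I/O; this is the added latency.
        return mapping.get(logicalBlock);
    }

    public static void main(String[] args) {
        VirtualLun lun = new VirtualLun();
        lun.map(0, "array-A", 4096);
        lun.map(1, "array-B", 512);
        System.out.println(lun.resolve(1)); // PhysicalExtent[device=array-B, physicalBlock=512]
    }
}
```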
Storage virtualization – Performance and scalability
In addition to response time concerns, throughput has to be considered. The bandwidth into and out of the meta-data lookup software directly impacts the available system bandwidth. In asymmetric implementations, where the meta-data lookup occurs before the information is read or written, bandwidth is less of a concern as the meta-data are a tiny fraction of the actual I/O size. In-band, symmetric flow through designs are directly limited by their processing power and connectivity bandwidths.
Storage virtualization – Performance and scalability
Most implementations provide some form of scale-out model, where the inclusion of additional software or device instances provides increased scalability and potentially increased bandwidth. The performance and scalability characteristics are directly influenced by the chosen implementation.
Hardware random number generator – Performance test
Hardware random number generators should be constantly monitored for proper operation. RFC 4086, FIPS Pub 140-2 and NIST Special Publication 800-90b include tests which can be used for this. Also see the documentation for the New Zealand cryptographic software library cryptlib.
Hardware random number generator – Performance test
Since many practical designs rely on a hardware source as an input, it will be useful to at least check that the source is still operating
Industrial and organizational psychology – Performance appraisal/management
Performance management may also include documenting and tracking performance information for organization-level evaluation purposes.
Industrial and organizational psychology – Performance appraisal/management
Additionally, the I–O psychologist may consult with the organization on ways to use the performance appraisal information for broader performance management initiatives.
Industrial and organizational psychology – Job performance
Job performance is about behaviors that are within the control of the employee and not about results (effectiveness), the costs involved in achieving results (productivity), the results that can be achieved in a period of time (efficiency), or the value an organization places on a given level of performance, effectiveness, productivity or efficiency (utility).
Industrial and organizational psychology – Job performance
Here, in-role performance was reflected through how well “employees met their performance expectations and performed well at the tasks that made up the employees’ job.” The extra-role category comprised dimensions covering how well the employee assists others with their work for the benefit of the group, whether the employee voices new ideas for projects or changes to procedure, and whether the employee attends functions that help the group.
Industrial and organizational psychology – Job performance
These factors include errors in job measurement techniques, acceptance and justification of poor performance, and a lack of importance placed on individual performance.
Industrial and organizational psychology – Job performance
The interplay between these factors shows that an employee may, for example, have a low level of declarative knowledge but still perform at a high level if the employee has high levels of procedural knowledge and motivation.
Industrial and organizational psychology – Job performance
Further, an expanding area of research in job performance determinants includes emotional intelligence.
Conscientiousness – Academic and workplace performance
Furthermore, conscientiousness is the only personality trait that correlates with performance across all categories of jobs
Moore’s law – Transistor count versus computing performance
The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance
Moore’s law – Transistor count versus computing performance
Another source of improved performance is microarchitecture techniques that exploit the growth of the available transistor count. These increases are empirically described by Pollack’s rule, which states that performance gains from microarchitecture techniques are roughly proportional to the square root of the number of transistors, or the area, of a processor.
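A small sketch of Pollack's rule as stated above, treating the single-core performance gain as the square root of the relative transistor budget; the ratios are illustrative.

```java
// Sketch of Pollack's rule: performance from microarchitecture scales
// roughly with the square root of the transistor count (or die area)
// devoted to the core, illustrating diminishing returns.
public class PollacksRule {
    static double relativePerformance(double transistorRatio) {
        return Math.sqrt(transistorRatio);
    }

    public static void main(String[] args) {
        System.out.printf("2x transistors  -> %.2fx performance%n", relativePerformance(2));  // ~1.41x
        System.out.printf("4x transistors  -> %.2fx performance%n", relativePerformance(4));  // 2.00x
        System.out.printf("16x transistors -> %.2fx performance%n", relativePerformance(16)); // 4.00x
    }
}
```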
Moore’s law – Transistor count versus computing performance
Viewed even more broadly, the speed of a system is often limited by factors other than processor speed, such as internal bandwidth and storage speed, and one can judge a system’s overall performance based on factors other than speed, like cost efficiency or electrical efficiency.
Group cohesiveness – Group Performance
Group performance, like exclusive entry, increases the value of group membership to its members and influences members to identify more strongly with the team and to want to be actively associated with it.
Group cohesiveness – Cohesion and Performance
In general, cohesion defined in all these ways was positively related with performance.
Group cohesiveness – Cohesion and Performance
There is some evidence that cohesion may be more strongly related to performance for groups that have highly interdependent roles than for groups in which members are independent.
Group cohesiveness – Cohesion and Performance
With regard to group productivity, having attraction and group pride may not be enough. It is necessary to have task commitment in order to be productive. Furthermore, groups with high performance goals were extremely productive.
Expectancy theory – Expectancy: Effort → Performance (E→P)
Control is one’s perceived control over performance
Expectancy theory – Instrumentality: Performance → Outcome (P→O)
Instrumentality is the belief that a person will receive a reward if the performance expectation is met. This reward may come in the form of a pay increase, promotion, recognition or sense of accomplishment. Instrumentality is low when the reward is the same for all performances given.
Expectancy theory – Instrumentality: Performance → Outcome (P→O)
Instrumentality is increased when formalized policies associate rewards to performance.
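In Vroom's original formulation these components are usually combined multiplicatively (motivational force = expectancy × instrumentality × valence). The following sketch, with made-up values, shows how low instrumentality depresses motivation even when expectancy is high.

```java
// Sketch of Vroom's expectancy model: motivational force is the product
// of expectancy, instrumentality, and valence. Sample values are
// illustrative assumptions, not empirical data.
public class ExpectancyTheory {
    static double motivationalForce(double expectancy, double instrumentality, double valence) {
        return expectancy * instrumentality * valence;
    }

    public static void main(String[] args) {
        // Same expectancy and valence, but rewards that do not depend on
        // performance (low instrumentality) depress the overall force.
        System.out.printf("E=0.9, I=0.2, V=0.8 -> force %.2f%n",
                motivationalForce(0.9, 0.2, 0.8)); // 0.14
        System.out.printf("E=0.9, I=0.9, V=0.8 -> force %.2f%n",
                motivationalForce(0.9, 0.9, 0.8)); // 0.65
    }
}
```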
Gbridge – Performance
Gbridge claims to establish a direct link between computers, even behind NAT, using forms of UDP hole punching. When a direct link is impossible, it relays the encrypted data through the Gbridge server, which has an impact on network performance. AutoSync traffic, however, is reportedly never relayed.
Gbridge – Performance
The LiveBrowse feature works reasonably well for picture-heavy folders and for MP3 online play over standard DSL. FLV online play is sometimes a little choppy because the bitrate of most FLV files is very close to the uplink speed limit of a standard DSL connection (about 300 kbit/s).
ISO 14000 – Act – take action to improve performance of EMS based on results
After the checking stage, a management review is conducted to ensure that the objectives of the EMS are being met and to what extent, that communications are being appropriately managed, and to evaluate changing circumstances, such as legal requirements, in order to make recommendations for further improvement of the system (Standards Australia/Standards New Zealand 2004).
Head-mounted display – Performance parameters
Ability to show stereoscopic imagery
Head-mounted display – Performance parameters
Interpupillary Distance (IPD). This is the distance between the two eyes, measured at the pupils, and is important in designing Head-Mounted Displays.
Head-mounted display – Performance parameters
Field of view (FOV) – Humans have an FOV of around 180°, but most HMDs offer considerably less than this
Head-mounted display – Performance parameters
Resolution – HMDs usually mention either the total number of pixels or the number of pixels per degree
Head-mounted display – Performance parameters
Binocular overlap – measures the area that is common to both eyes
Head-mounted display – Performance parameters
Distant focus (‘Collimation’). Optical techniques may be used to present the images at a distant focus, which seems to improve the realism of images that in the real world would be at a distance.
Head-mounted display – Performance parameters
On-board processing and Operating System. Some HMD vendors offer on-board Operating Systems such as Android, allowing applications to run locally on the HMD and eliminating the need to be tethered to an external device to generate video. These are sometimes referred to as Smart Goggles.
Ferroelectric RAM – Performance
DRAM performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing). In general, this ends up being defined by the capability of the control transistors, the capacitance of the lines carrying power to the cells, and the heat that power generates.
Ferroelectric RAM – Performance
FeRAM is based on the physical movement of atoms in response to an external field, which happens to be extremely fast, settling in about 1 ns
Ferroelectric RAM – Performance
In comparison to flash, the advantages are much more obvious. Whereas the read operation is likely to be similar in performance, the charge pump used for writing requires a considerable time to “build up” current, a process that FeRAM does not need. Flash memories commonly need a millisecond or more to complete a write, whereas current FeRAMs may complete a write in less than 150 ns.
Ferroelectric RAM – Performance
The theoretical performance of FeRAM is not entirely clear. Existing 350 nm devices have read times on the order of 50-60 ns. Although slow compared to modern DRAMs, which can be found with times on the order of 2 ns, common 350 nm DRAMs operated with a read time of about 35 ns, so FeRAM performance appears to be comparable given the same fabrication technology.
Magnetoresistive random-access memory – Performance
DRAM performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing)
Magnetoresistive random-access memory – Performance
This makes it expensive, which is why it is used only for small amounts of high-performance memory, notably the CPU cache in almost all modern CPU designs.
Magnetoresistive random-access memory – Performance
Although MRAM is not quite as fast as SRAM, it is close enough to be interesting even in this role. Given its much higher density, a CPU designer may be inclined to use MRAM to offer a much larger but somewhat slower cache, rather than a smaller but faster one. It remains to be seen how this trade-off will play out in the future.
Satellite Internet access – 2013 FCC report cites big jump in satellite performance
In its report released in February 2013, the Federal Communications Commission noted significant advances in satellite Internet performance. The FCC’s Measuring Broadband America report also ranked the major ISPs by how close they came to delivering on advertised speeds. In this category, satellite Internet topped the list, with 90% of subscribers seeing speeds at 140% or better of what was advertised.
Adaptive performance
In previous literature, Pulakos and colleagues established eight dimensions of adaptive performance
Adaptive performance – Dimensions
Handling emergencies and crisis situations: making quick decisions when faced with an emergency
Adaptive performance – Dimensions
Handling stress in the workforce: keeping composed and focused on task at hand when dealing with high demand tasks
Adaptive performance – Dimensions
Creative problem solving: thinking outside of set boundaries and innovatively in order to solve a problem
Adaptive performance – Dimensions
Dealing with uncertain and unpredictable work situations: being able to become productive despite the occurrence of unknown situations
Adaptive performance – Dimensions
Learning and manipulating new technology, tasks, and procedures: approaching new methods and technological constructs in order to accomplish a work task
Adaptive performance – Dimensions
Demonstrating cultural adaptability: being respectful and considerate of different cultural backgrounds
Adaptive performance – Dimensions
Demonstrating physically oriented adaptability: physically adjusting oneself to better fit the surrounding environment
Adaptive performance – Measurement
There is therefore a difference between the I-ADAPT-M and the JAI, which measures adaptive performance as behaviors.
Adaptive performance – Work stress and adaptive performance
Not only can work stress predict adaptive performance to a considerable extent, there is also considerable overlap between adaptive performance and stress coping.
Adaptive performance – Stress appraisal
Challenging rather than threatening appraisals would lead to higher levels of self-efficacy, and thus benefit individuals’ adaptive performance.
Adaptive performance – Stress coping
Therefore, adaptive performance is more likely to contain such behaviors in stressful situations.
Adaptive performance – Definition of team adaptive performance
Team adaptive performance also has different antecedents compared with individual adaptive performance.
Adaptive performance – Predictors of team adaptive performance
Team learning climate also displays a significant, positive relationship with team adaptive performance.
Adaptive performance – Leadership and adaptive performance
Adaptive performance in leadership is valued by employers because an employee who displays those two characteristics tends to exemplify and motivate adaptive behavior within other individuals in the workforce.
Adaptive performance – Transformational leadership and adaptive performance
This particular leadership style has also been shown as a motivator to increase the behavior of performance and adaptability in employees
Adaptive performance – Leadership and adaptive decision making
By a leader displaying adaptive performance when making a decision, the team leader shows their awareness of a situation leading to new actions and strategies to reestablish fit and effectiveness
Software testing – Software performance testing
Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
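As a concrete illustration, the following self-contained Java sketch applies a fixed concurrent workload to a stand-in operation and reports a p95 latency; a real load test would target the actual system and a realistic request mix rather than the hypothetical workloadUnderTest method used here.

```java
import java.util.*;
import java.util.concurrent.*;

// Minimal sketch of a load test in the sense described above: apply a
// fixed concurrent workload and report latency statistics.
public class MiniLoadTest {
    static void workloadUnderTest() throws InterruptedException {
        Thread.sleep(5); // stand-in for a real system call (~5 ms)
    }

    public static void main(String[] args) throws Exception {
        int threads = 8, requestsPerThread = 50;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Long> latenciesNanos = Collections.synchronizedList(new ArrayList<>());

        Runnable task = () -> {
            for (int i = 0; i < requestsPerThread; i++) {
                long start = System.nanoTime();
                try { workloadUnderTest(); } catch (InterruptedException e) { return; }
                latenciesNanos.add(System.nanoTime() - start);
            }
        };
        for (int t = 0; t < threads; t++) pool.submit(task);
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);

        latenciesNanos.sort(null);
        long p95 = latenciesNanos.get((int) (latenciesNanos.size() * 0.95) - 1);
        System.out.printf("Requests: %d, p95 latency: %.1f ms%n",
                latenciesNanos.size(), p95 / 1e6);
    }
}
```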
Software testing – Software performance testing
Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users
Software testing – Software performance testing
There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.
Software testing – Software performance testing
Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
Standard RAID levels – Performance
Note that these are best case performance scenarios with optimal access patterns.
Standard RAID levels – Performance (speed)
RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer’s storage architecture – in software, firmware or by using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID-5 system with one fewer drive (same number of data drives).
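A simplified model of that write penalty: each small random write to RAID 6 is commonly costed at about six disk operations (read old data and both parities, then write new data and both parities), versus about four for RAID 5. The drive count and per-disk IOPS below are assumptions for illustration only.

```java
// Sketch of the small-random-write penalty behind RAID 6 write
// performance: effective write IOPS ~ (drives * per-drive IOPS) / penalty.
public class Raid6WritePenalty {
    static double effectiveWriteIops(int drives, double iopsPerDrive, int writePenalty) {
        return (drives * iopsPerDrive) / writePenalty;
    }

    public static void main(String[] args) {
        int drives = 8;
        double iopsPerDrive = 150;   // assumed per-disk random IOPS
        System.out.printf("RAID 6: ~%.0f random write IOPS%n",
                effectiveWriteIops(drives, iopsPerDrive, 6)); // ~200
        System.out.printf("RAID 5: ~%.0f random write IOPS%n",
                effectiveWriteIops(drives, iopsPerDrive, 4)); // ~300
    }
}
```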
Comparison of programming paradigms – Performance comparison
Purely in terms of total instruction path length, a program coded in an imperative style, without using any subroutines at all, would have the lowest count. However, the binary size of such a program might be larger than the same program coded using subroutines (as in functional and procedural programming) and would reference more “non-local” physical instructions that may increase cache misses and increase instruction fetch overhead in modern processors.
Comparison of programming paradigms – Performance comparison
The paradigms that use subroutines extensively (including functional, procedural and object-oriented) and do not also use significant inlining (via compiler optimizations) will, consequently, use a greater percentage of total resources on the subroutine linkages themselves
Huawei – Recent performance
In April 2011, Huawei announced an earnings increase of 30% in 2010, driven by significant growth in overseas markets, with net profit rising to RMB23.76 billion (US$3.64 billion; £2.23 billion) from RMB18.27 billion in 2009
Huawei – Recent performance
Huawei’s revenues in 2010 accounted for 15.7% of the $78.56 billion global carrier-network-infrastructure market, putting the company second behind the 19.6% share of Telefon AB L.M. Ericsson, according to market-research firm Gartner.
Huawei – Recent performance
Huawei is targeting revenue of $150 million from its enterprise business solutions in India over the next 12 months. It has denied using Chinese subsidies to gain global market share, after recently being accused of unfair competition by US lawmakers and EU officials.
Artificial brain – Approaches to brain simulation
Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and from Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from the TOP500 list, mapped by year.
Artificial brain – Approaches to brain simulation
Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm which has strong similarities to the QM wave equation.
Artificial brain – Approaches to brain simulation
EvBrain is a form of evolutionary software that can evolve “brainlike” neural networks, such as the network immediately behind the retina.
Artificial brain – Approaches to brain simulation
In November 2008, IBM received a $4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons “in the computer” by placing thirty million synapses in their proper three-dimensional position.
Artificial brain – Approaches to brain simulation
In March 2008, the Blue Brain project was reported to be progressing faster than expected: “Consciousness is just a massive amount of information being exchanged by trillions of brain cells.” Some proponents of strong AI speculated that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050.
Artificial brain – Approaches to brain simulation
There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic. In particular, brains (including the human brain) and cognition are not currently well understood, and the scale of computation required is unknown.
Visual Basic – Performance and other issues
Earlier versions of Visual Basic (prior to version 5) compiled the code to P-Code only. The P-Code is interpreted by the language runtime. The benefits of P-Code include portability and smaller binary file sizes, but it usually slows down the execution, since having a runtime adds an additional layer of interpretation. However, small amounts of code and algorithms can be constructed to run faster than compiled native code.
Visual Basic – Performance and other issues
Visual Basic applications require the Microsoft Visual Basic runtime MSVBVMxx.DLL, where xx is the relevant version number, either 50 or 60. MSVBVM60.dll comes as standard with Windows in all editions after Windows 98, while MSVBVM50.dll comes with all editions after Windows 95. A Windows 95 machine would, however, require the installer to include whichever DLL the program needed.
Visual Basic – Performance and other issues
Visual Basic 5 and 6 can compile code to either native or P-Code but in either case the runtime is still required for built in functions and forms management.
Visual Basic – Performance and other issues
Criticisms levelled at Visual Basic editions prior to VB.NET include:
Visual Basic – Performance and other issues
Versioning problems associated with various runtime DLLs, known as DLL hell
Visual Basic – Performance and other issues
Poor support for object-oriented programming
Visual Basic – Performance and other issues
Inability to create multi-threaded applications, without resorting to Windows API calls
Visual Basic – Performance and other issues
Variant types have a greater performance and storage overhead than strongly typed variables
Biometrics – Performance
The following are used as performance metrics for biometric systems:
Biometrics – Performance
false acceptance rate or false match rate (FAR or FMR): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percentage of invalid inputs that are incorrectly accepted. On a similarity scale, if a person who is in fact an impostor produces a matching score higher than the threshold, that person is treated as genuine, which increases the FAR; performance therefore also depends on the selection of the threshold value.
Biometrics – Performance
false rejection rate or false non-match rate (FRR or FNMR): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs which are incorrectly rejected.
Biometrics – Performance
Plotting these error rates on a more linear scale illuminates the differences between systems at higher performance levels (rarer errors).
Biometrics – Performance
equal error rate or crossover error rate (EER or CER): the rate at which both accept and reject errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is most accurate.
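As an illustration (the score lists and threshold grid below are hypothetical, and the code is a sketch rather than a reference implementation), FAR and FRR at a given threshold, and an approximate EER found by sweeping thresholds, can be computed from genuine and impostor similarity scores:

# Hedged sketch: FAR, FRR and an approximate EER from similarity scores.
def far_frr(genuine, impostor, threshold):
    far = sum(s >= threshold for s in impostor) / len(impostor)  # impostors wrongly accepted
    frr = sum(s < threshold for s in genuine) / len(genuine)     # genuine users wrongly rejected
    return far, frr

def approximate_eer(genuine, impostor):
    # Sweep candidate thresholds and keep the one where FAR and FRR are closest.
    thresholds = [t / 1000 for t in range(1001)]
    best = min(thresholds,
               key=lambda t: abs(far_frr(genuine, impostor, t)[0] - far_frr(genuine, impostor, t)[1]))
    far, frr = far_frr(genuine, impostor, best)
    return best, (far + frr) / 2

genuine_scores = [0.91, 0.84, 0.77, 0.95, 0.68]    # hypothetical genuine-match scores
impostor_scores = [0.32, 0.45, 0.51, 0.78, 0.60]   # hypothetical impostor scores
print(far_frr(genuine_scores, impostor_scores, 0.7))     # (FAR, FRR) at threshold 0.7
print(approximate_eer(genuine_scores, impostor_scores))  # threshold and rate where the two errors meet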
Biometrics – Performance
failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low-quality inputs.
Biometrics – Performance
failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly.
Biometrics – Performance
template capacity: the maximum number of sets of data which can be stored in the system.
Garbage collection (computer science) – Performance implications
Tracing garbage collectors require some implicit runtime overhead that may be beyond the control of the programmer, and can sometimes lead to performance problems. For example, commonly used stop-the-world garbage collectors, which pause program execution at arbitrary times, may make garbage collection inappropriate for some embedded systems, high-performance server software, and applications with real-time needs.
Garbage collection (computer science) – Performance implications
Manual heap allocation typically involves searching for a best-fit or first-fit free block of sufficient size and maintaining the free list, whereas allocation in many garbage-collected runtimes can be as simple as incrementing a pointer into a contiguous region.
Garbage collection (computer science) – Performance implications
Memory allocation in a garbage-collected language may nevertheless be implemented using heap allocation behind the scenes (rather than simply incrementing a pointer), so these allocation-speed advantages do not necessarily apply in that case.
Garbage collection (computer science) – Performance implications
The overhead of write barriers is more likely to be noticeable in an imperative-style program which frequently writes pointers into existing data structures than in a functional-style program which constructs data only once and never changes them.
Garbage collection (computer science) – Performance implications
Generational collection techniques are used with both stop-the-world and incremental collectors to increase performance; the trade-off is that some garbage is not detected as such for longer than normal.
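As a rough illustration of that trade-off, Python's gc module, used here purely as an example of a generational collector, exposes per-generation thresholds that control how often the younger and older generations are examined:

# Hedged sketch: inspecting and tuning Python's generational collector.
# Raising the thresholds makes collections rarer (less pause overhead) at the
# cost of letting garbage in older generations linger longer before detection.
import gc

print(gc.get_threshold())        # default is typically (700, 10, 10)
print(gc.get_count())            # current allocation counts per generation

gc.set_threshold(7000, 15, 15)   # collect the youngest generation less often
gc.collect()                     # force a full collection of all generations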
Dow Chemical Company – Performance plastics
Performance plastics make up 25% of Dow’s sales, with many products designed for the automotive and construction industries. The plastics include polyolefins such as polyethylene and polypropylene, as well as polystyrene used to produce Styrofoam insulating material. Dow manufactures epoxy resin intermediates including bisphenol A and epichlorohydrin. Saran resins and films are based on polyvinylidene chloride (PVDC)
Dow Chemical Company – Performance chemicals
The Performance Chemicals (17% of sales) segment produces chemicals and materials for water purification, pharmaceuticals, paper coatings, paints and advanced electronics
Halogen lamp – Effect of voltage on performance
Tungsten halogen lamps behave in a similar manner to other incandescent lamps when run on a different voltage
Halogen lamp – Effect of voltage on performance
Halogen lamps are manufactured with enough halogen to match the rate of tungsten evaporation at their design voltage
Sport psychology – Preperformance routines
This includes pregame routines, warm up routines, and actions an athlete will regularly do, mentally and physically, before they execute the performance
Audi A4 – Performance
Petrol engines (columns: engine and transmission; 0-100 km/h time in seconds; top speed; CO2 emissions in g/km; market, where noted):
2.0 TFSI, 8-speed Multitronic CVT | 8.2 s | 236 km/h (147 mph) | 167 g/km | Aus/NZ/ZA
3.2 FSI quattro, 6-speed Manual | 6.0 s | 250 km/h (155 mph) (elec. limited) | 213 g/km
3.2 FSI quattro, 6-speed Tiptronic | 6.1 s | 250 km/h (155 mph) (elec. limited) | 215 g/km
Diesel engines (all common rail (CR) Turbocharged Direct Injection (TDI)):
2.0 TDI quattro, 6-speed Manual | 8.3 s | 226 km/h (140 mph) | 149 g/km
3.0 TDI quattro, 6-speed Tiptronic | 6.3 s | 250 km/h (155 mph) (elec. limited) | 182 g/km
Krytron – Performance
This design, dating from the late 1940s, is still capable of pulse-power performance that even the most advanced semiconductors (even IGBTs) cannot match easily. Krytrons and sprytrons can handle high-current, high-voltage pulses with very fast switching times and a constant, low, low-jitter time delay between application of the trigger pulse and switch-on.
Krytron – Performance
A given krytron tube will give very consistent performance to identical trigger pulses (low jitter)
Krytron – Performance
Switching performance is largely independent of the environment (temperature, acceleration, vibration, etc.). The formation of the keepalive glow discharge is however more sensitive, which necessitates the use of a radioactive source to aid its ignition.
Krytron – Performance
Krytrons have a limited lifetime, typically ranging, according to type, from tens of thousands to tens of millions of switching operations, and sometimes only a few hundred.
Krytron – Performance
Hydrogen-filled thyratrons may be used as a replacement in some applications.
Scramjet – Vehicle performance
The performance of a launch system is complex and depends greatly on its weight. Normally craft are designed to maximise range, orbital radius, or payload mass fraction for a given engine and fuel. This results in tradeoffs between the efficiency of the engine (takeoff fuel weight) and the complexity of the engine (takeoff dry weight), which can be expressed in terms of the following mass fractions:
Scramjet – Vehicle performance
The empty mass fraction represents the weight of the superstructure, tankage, and engine.
Scramjet – Vehicle performance
The fuel mass fraction represents the weight of fuel, oxidiser, and any other materials consumed during the launch.
Scramjet – Vehicle performance
The initial mass ratio is the inverse of the payload mass fraction and indicates how much payload the vehicle can deliver to a destination.
Scramjet – Vehicle performance
A scramjet increases the mass of the engine over a rocket, and decreases the mass of the fuel
Scramjet – Vehicle performance
Additionally, the drag of the new configuration must be considered. The drag of the total configuration can be considered as the sum of the vehicle drag and the engine installation drag. The installation drag traditionally results from the pylons and the coupled flow due to the engine jet, and is a function of the throttle setting.
Scramjet – Vehicle performance
For an engine strongly integrated into the aerodynamic body, it may be more convenient to think of the installation drag as the difference in drag from a known base configuration.
Scramjet – Vehicle performance
The overall engine efficiency can be represented as a value between 0 and 1, expressed in terms of the specific impulse of the engine, the flight velocity, the acceleration due to gravity at ground level, and the fuel heat of reaction.
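In one standard form (the notation here is assumed rather than taken from the original), the overall efficiency can be written as
\[ \eta_0 \approx \frac{g_0\, I_{sp}\, V}{h_f}, \]
where \(g_0\) is the acceleration due to gravity at ground level, \(I_{sp}\) the specific impulse, \(V\) the flight velocity, and \(h_f\) the fuel heat of reaction.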
Scramjet – Vehicle performance
Specific impulse is often used as the unit of efficiency for rockets, since for a rocket there is a direct relation between specific impulse, specific fuel consumption, and exhaust velocity. This direct relation is not generally present for airbreathing engines, so specific impulse is less used in that literature. Note that for an airbreathing engine, both the specific impulse and the overall efficiency are functions of velocity.
Scramjet – Vehicle performance
The specific impulse of a rocket engine is independent of velocity, and common values are between 200 and 600 seconds (450s for the space shuttle main engines). The specific impulse of a scramjet varies with velocity, reducing at higher speeds, starting at about 1200s, although values in the literature vary.
Scramjet – Vehicle performance
For the simple case of a single-stage vehicle, the fuel mass fraction can be expressed in terms of the velocity change to be achieved or, for level atmospheric flight from air launch (missile flight), in terms of the range flown, in which case the calculation can be expressed in the form of the Breguet range formula.
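A sketch of those relations in standard notation (the symbols below are assumed, not taken from the original): for an accelerating single-stage vehicle the fuel mass fraction \(\Pi_f\) follows the rocket-equation form
\[ \Pi_f = 1 - \exp\!\left(-\frac{\Delta V}{g_0 I_{sp}}\right), \]
while for level atmospheric flight over a range \(R\) at velocity \(V\) with lift-to-drag ratio \(L/D\), the Breguet range formula gives
\[ R = V\, I_{sp}\, \frac{L}{D}\, \ln\frac{1}{1-\Pi_f}, \qquad\text{i.e.}\qquad \Pi_f = 1 - \exp\!\left(-\frac{R}{V\, I_{sp}\,(L/D)}\right). \]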
Scramjet – Vehicle performance
This extremely simple formulation, used here for the purposes of discussion, rests on simplifying assumptions (in the Breguet form, essentially constant velocity, lift-to-drag ratio, and specific impulse over the flight), but the general conclusions hold for all engines.
JPEG 2000 – Superior compression performance
At high bit rates, where artifacts become nearly imperceptible, JPEG 2000 has a small machine-measured fidelity advantage over JPEG. At lower bit rates (e.g., less than 0.25 bits/pixel for grayscale images), JPEG 2000 has a significant advantage over certain modes of JPEG: artifacts are less visible and there is almost no blocking. The compression gains over JPEG are attributed to the use of the discrete wavelet transform (DWT) and a more sophisticated entropy encoding scheme.
JPEG 2000 – Performance
Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain in the range of 20%, depending on the image characteristics
Perl – Comparative performance
The Computer Language Benchmarks Game, a project hosted by Alioth, compares the performance of implementations of typical programming problems in several programming languages. The submitted Perl implementations typically perform toward the high end of the memory-usage spectrum and give varied speed results. Perl’s performance in the benchmarks game is typical for interpreted languages.
Perl – Comparative performance
Large Perl programs start more slowly than similar programs in compiled languages because perl has to compile the source every time it runs
Perl – Comparative performance
A number of tools have been introduced to improve this situation. The first such tool was Apache's mod_perl, which sought to address one of the most common situations in which small Perl programs are invoked frequently: CGI Web development. ActivePerl, via Microsoft ISAPI, provides similar performance improvements.
Perl – Comparative performance
Once Perl code is compiled, there is additional overhead during the execution phase that typically isn’t present for programs written in compiled languages such as C or C++. Examples of such overhead include bytecode interpretation, reference-counting memory management, and dynamic type-checking.
Laptop – Performance
The upper limits of performance of laptops remain much lower than the highest-end desktops (especially “workstation class” machines with two processor sockets), and “bleeding-edge” features usually appear first in desktops and only then, as the underlying technology matures, are adapted to laptops.
Laptop – Performance
For Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even relatively low-end laptops (such as Netbooks) can be fast enough for some users. As of mid-2010, at the lowest end, the cheapest netbooks—between US$200–300—remain more expensive than the lowest-end desktop computers (around US$200) only when those are priced without a screen/monitor. Once an inexpensive monitor is added, the prices are comparable.
Laptop – Performance
Most higher-end laptops are sufficiently powerful for high-resolution movie playback, some 3D gaming and video editing and encoding
Laptop – Performance
Some manufacturers work around this performance problem by using desktop CPUs for laptops.
Progress in artificial intelligence – Performance evaluation
The broad classes of outcome for an AI test are:
Progress in artificial intelligence – Performance evaluation
par-human: performs similarly to most humans
Peter Chen – Computer performance modeling
In his early career, he was active in R&D activities in computer system performance. He was the program chair of an ACM SIGMETRICS conference. He developed a computer performance model for a major computer vendor. His innovative research results were adopted in commercial computer performance tuning and capacity planning tools.
Shared leadership – Team effectiveness/performance
Similarly, other studies have explored the extent to which shared leadership can predict a team’s effectiveness or performance, and have found it to be a significant predictor, and often a better predictor than vertical leadership.
Shared leadership – Team effectiveness/performance
Thus, they theorized, having more leaders is not the only factor that matters to team performance; rather, leaders must recognize other leaders as such in order for them to contribute positively to team effectiveness.
Cordless telephone – Performance
Manufacturers usually advertise that higher frequency systems improve audio quality and range. Higher frequencies actually have worse propagation in the ideal case, as shown by the basic Friis transmission equation, and path loss tends to increase at higher frequencies as well. More important influences on quality and range are signal strength, antenna quality, the method of modulation used, and interference, which varies locally.
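For reference, the free-space form of the Friis transmission equation reads (a standard result, with symbols assumed here rather than taken from the text)
\[ \frac{P_r}{P_t} = G_t\, G_r \left(\frac{\lambda}{4\pi d}\right)^{2}, \qquad \lambda = \frac{c}{f}, \]
so for fixed antenna gains \(G_t, G_r\) the received power \(P_r\) falls with the square of the frequency \(f\) at a given distance \(d\), which is why a higher carrier frequency does not by itself improve range.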
Cordless telephone – Performance
“Plain old telephone service” (POTS) landlines are designed to transfer audio with a quality that is just enough for the parties to understand each other; compared with this baseline, cordless handsets commonly exhibit the following shortcomings:
Cordless telephone – Performance
A noticeable amount of constant background noise (This is not interference from outside sources, but noise within the cordless telephone system.)
Cordless telephone – Performance
Frequency response not being the full frequency response available in a wired landline telephone
Cordless telephone – Performance
Most manufacturers claim a range of about 30 m (100 ft) for their 2.4 GHz and 5.8 GHz systems, but inexpensive models often fall short of this claim.
Cordless telephone – Performance
However, the higher frequency often brings advantages
Cordless telephone – Performance
The recently allocated 1.9 GHz band is reserved for use by phones that use the DECT standard, which should avoid interference issues that are increasingly being seen in the unlicensed 900 MHz, 2.4 GHz, and 5.8 GHz bands.
Cordless telephone – Performance
Many cordless phones in the early 21st century are digital. Digital technology has helped provide clear sound and limit eavesdropping. Many cordless phones have one main base station and can add up to three or four additional bases. This allows for multiple voice paths that allow three-way conference calls between the bases. This technology also allows multiple handsets to be used at the same time and up to two handsets can have an outside conversation.
Speech recognition – High-performance fighter aircraft
Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition in fighter aircraft
Speech recognition – High-performance fighter aircraft
Working with Swedish pilots flying in the JAS-39 Gripen cockpit, Englund (2004) found recognition deteriorated with increasing G-loads
Speech recognition – High-performance fighter aircraft
The Eurofighter Typhoon, currently in service with the UK RAF, employs a speaker-dependent system, i.e. one that requires each pilot to create a voice template before use.
Speech recognition – High-performance fighter aircraft
Speaker independent systems are also being developed and are in testing for the F35 Lightning II (JSF) and the Alenia Aermacchi M-346 Master lead-in fighter trainer. These systems have produced word accuracies in excess of 98%.
Speech recognition – Performance
The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate (WER), whereas speed is measured with the real time factor. Other measures of accuracy include Single Word Error Rate (SWER) and Command Success Rate (CSR).
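As an illustration (the sketch below uses the conventional definition rather than anything from the text above), WER is the word-level Levenshtein distance between the reference and the hypothesis transcripts, divided by the number of reference words:

# Hedged sketch: word error rate (WER) via word-level Levenshtein distance.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words and first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1    # substitution cost
            dp[i][j] = min(dp[i - 1][j] + 1,               # deletion
                           dp[i][j - 1] + 1,               # insertion
                           dp[i - 1][j - 1] + cost)        # match or substitution
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn the lights on", "turn lights on please"))  # 0.5: one deletion and one insertion over four reference words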
Speech recognition – Performance
However, speech recognition (by a machine) is a very complex problem. Vocalizations vary in accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is further distorted by background noise, echoes, and the electrical characteristics of the recording and transmission chain. The accuracy of speech recognition varies with the following:
Speech recognition – Performance
Vocabulary size and confusability
Speech recognition – Performance
Isolated, discontinuous, or continuous speech
Speech recognition – Performance
Task and language constraints
Speech recognition – Performance
Robot Interaction Language (ROILA) is a constructed language created to address the problems associated with speech interaction using natural languages. ROILA was constructed with two goals in mind: first, it should be learnable by the human user; second, it should be optimized for efficient recognition by a robot.
Kerosene lamp – Performance
Wick-type lamps have the lowest light output, and pressurized lamps have higher output; the range is from 20 to 100 lumens. A kerosene lamp producing 37 lumens for 4 hours per day will consume about 3 litres of kerosene per month.
XSLT – Performance
This gives substantial performance benefits in online publishing applications, where the same transformation is applied many times per second to different source documents
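A minimal sketch of that pattern, assuming the lxml library and hypothetical file names: the stylesheet is parsed and compiled once, and the resulting transform object is then applied to many different source documents.

# Hedged sketch: compile an XSLT stylesheet once, reuse it for many documents (lxml assumed).
from lxml import etree

transform = etree.XSLT(etree.parse("article-to-html.xsl"))  # compiled once; hypothetical stylesheet

for path in ["doc1.xml", "doc2.xml", "doc3.xml"]:           # hypothetical source documents
    result = transform(etree.parse(path))                   # cheap per-document application
    print(str(result)[:80])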
XSLT – Performance
Early XSLT processors had very few optimizations
Channel (communications) – Channel performance measures
These are examples of commonly used channel capacity and performance measures:
Channel (communications) – Channel performance measures
Symbol rate in baud, pulses/s or symbols/s
Channel (communications) – Channel performance measures
Digital bandwidth bit/s measures: gross bit rate (signalling rate), net bit rate (information rate), channel capacity, and maximum throughput
Channel (communications) – Channel performance measures
Channel utilization
Channel (communications) – Channel performance measures
Signal-to-noise ratio measures: signal-to-interference ratio, Eb/N0, carrier-to-interference ratio in decibels
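These measures are related to one another; for example, the Shannon-Hartley theorem (a standard result, added here purely as an illustration; the function and figures below are hypothetical) bounds the achievable error-free bit rate by the analog bandwidth and the signal-to-noise ratio:

# Hedged sketch: Shannon-Hartley channel capacity C = B * log2(1 + SNR).
import math

def channel_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    snr_linear = 10 ** (snr_db / 10)            # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 1 MHz channel at 20 dB SNR supports at most about 6.66 Mbit/s.
print(channel_capacity_bps(1e6, 20))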
Forward error correction – Concatenated FEC codes for improved performance
Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed-Solomon) with larger symbol size and block length “mops up” any errors made by the convolutional decoder
Forward error correction – Concatenated FEC codes for improved performance
Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.
Performance-based advertising
Performance-based advertising is a form of advertising in which the purchaser pays only when there are measurable results. Performance-based advertising is becoming more common with the spread of electronic media, notably the Internet, where it is possible to measure user actions resulting from advertisement.
Performance-based advertising – Pricing models
There are four common pricing models used in the online performance advertising market.
Performance-based advertising – Pricing models
CPM (Cost-per-Mille, or Cost-per-Thousand) Pricing Models charge advertisers for impressions, i.e. the number of times people view an advertisement. Display advertising is commonly sold on a CPM pricing model. The problem with CPM advertising is that advertisers are charged even if the target audience does not click on the advertisement.
Performance-based advertising – Pricing models
CPC (Cost-per-Click) advertising overcomes this problem by charging advertisers only when the consumer clicks on the advertisement. However, due to increased competition, search keywords have become very expensive. A 2007 Doubleclick Performics Search trends Report shows that there were nearly six times as many keywords with a cost per click (CPC) of more than $1 in January 2007 than the prior year. The cost per keyword increased by 33% and the cost per click rose by as much as 55%.
Performance-based advertising – Pricing models
In recent times, there has been a rapid increase in online lead generation – banner and direct response advertising that works off a CPL pricing model. In a Cost-per-Lead pricing model, advertisers pay only for qualified leads – irrespective of the clicks or impressions that went into generating the lead. CPL advertising is also commonly referred to as online lead generation.
Performance-based advertising – Pricing models
Cost per Lead (CPL) pricing models are the most advertiser friendly. A recent IBM research study found that two-thirds of senior marketers expect 20 percent of ad revenue to move away from impression-based sales, in favor of action-based models within three years. CPL models allow advertisers to pay only for qualified leads as opposed to clicks or impressions and are at the pinnacle of the online advertising ROI hierarchy.
Performance-based advertising – Pricing models
In CPA (Cost-per-Action) advertising, advertisers pay for a specific action such as a credit card transaction (also called CPO, Cost-Per-Order).
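To make the four models comparable, each can be converted to an effective cost per completed sale under assumed click-through, lead, and close rates (the functions and figures below are purely illustrative):

# Hedged sketch: comparing CPM, CPC and CPL prices as an effective cost per sale.
def cost_per_sale_from_cpm(cpm, ctr, lead_rate, close_rate):
    cost_per_click = (cpm / 1000.0) / ctr       # cost per impression divided by click-through rate
    return cost_per_click / (lead_rate * close_rate)

def cost_per_sale_from_cpc(cpc, lead_rate, close_rate):
    return cpc / (lead_rate * close_rate)

def cost_per_sale_from_cpl(cpl, close_rate):
    return cpl / close_rate

# Hypothetical figures: $5 CPM, 0.2% CTR, 10% of clicks become leads, 20% of leads buy.
print(cost_per_sale_from_cpm(5.0, 0.002, 0.10, 0.20))  # 125.0
print(cost_per_sale_from_cpc(1.00, 0.10, 0.20))        # 50.0
print(cost_per_sale_from_cpl(8.00, 0.20))              # 40.0
# Under a pure CPA model priced per sale, the effective cost per sale is simply the agreed CPA.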
Performance-based advertising – Pricing models
Advertisers need to be careful when choosing between CPL and CPA pricing models.
Performance-based advertising – Pricing models
In CPL campaigns, advertisers pay for an interested lead – i.e. the contact information of a person interested in the advertiser’s product or service. CPL campaigns are suitable for brand marketers and direct response marketers looking to engage consumers at multiple touch-points – by building a newsletter list, community site, reward program or member acquisition program.
Performance-based advertising – Pricing models
In CPA campaigns, the advertiser typically pays for a completed sale involving a credit card transaction. CPA is all about "now": it focuses on driving consumers to buy at that exact moment. If a visitor to the website doesn't buy anything, there's no easy way to re-market to them.
Performance-based advertising – Pricing models
1. CPL campaigns are advertiser-centric. The advertiser remains in control of their brand, selecting trusted and contextually relevant publishers to run their offers. On the other hand, CPA and affiliate marketing campaigns are publisher-centric. Advertisers cede control over where their brand will appear, as publishers browse offers and pick which to run on their websites. Advertisers generally do not know where their offer is running.
Performance-based advertising – Pricing models
2. CPL campaigns are usually high volume and light-weight. In CPL campaigns, consumers submit only basic contact information; the transaction can be as simple as an email address. On the other hand, CPA campaigns are usually low volume and complex: typically, the consumer has to submit a credit card and other detailed information.
Performance-based advertising – Pricing models
CPL advertising is more appropriate for advertisers looking to deploy acquisition campaigns by re-marketing to end consumers through e-newsletters, community sites, reward programs, loyalty programs and other engagement vehicles.
Performance-based advertising – Economic benefits
Many advertisers have limited budgets and may not understand the most effective method of advertising. With performance-based advertising plans, they avoid the risk of paying large amounts for advertisements that are ineffective. They pay only for results.
Performance-based advertising – Economic benefits
The advertising agency, distributor or publisher assumes the risk, and is therefore motivated to ensure that the advertisement is well-targeted, making best use of the available inventory of advertising space. Electronic media publishers may choose advertisements based on location, time of day, day of week, demographics and performance history, ensuring that they maximize revenue earned from each advertising slot.
Performance-based advertising – Economic benefits
The close attention to targeting is intended to minimize the number of irrelevant advertisements presented to consumers: they see advertisements for products and services that are likely to interest them. Although consumers often state that advertisements are irritating, in many situations they find an advertisement useful if it is relevant.
Performance-based advertising – Metrics
Various types of measurable action may be used in charging for performance-based advertising:
Performance-based advertising – Metrics
Many Internet sites charge for advertising on a “CPM” (Cost per Thousand) or Cost per impression basis. That is, the advertiser pays only when a consumer sees their advertisement. Some would argue that this is not performance-based advertising since there is no measurement of the user response.
Performance-based advertising – Metrics
Internet sites often also offer advertising on a “PPC” (pay per click) basis. Google’s AdWords product and equivalent products from Yahoo!, Microsoft and others support PPC advertising plans.
Performance-based advertising – Metrics
A small but growing number of sites are starting to offer plans on a “Pay per call” basis. The user can click a button to place a VoIP call, or to request a call from the advertiser. If the user requests a call, presumably they are highly likely to make a purchase.
Performance-based advertising – Metrics
Finally, there is considerable research into methods of linking the user’s actions to the eventual purchase: the ideal form of performance measurement.
Performance-based advertising – Metrics
Some Internet sites are markets, bringing together buyers and sellers. eBay is a prominent example of a market operating on an auction basis. Other market sites let the vendors set their price. In either model, the market mediates sales and takes a commission – a defined percentage of the sale value. The market is motivated to give a more prominent position to vendors who achieve high sales value. Markets may be seen as a form of performance-based advertising.
Performance-based advertising – Metrics
The use of mobile coupons also enables a whole new world of metrics for identifying campaign effect. There are several providers of mobile coupon technology that make it possible to issue unique coupons or barcodes to each individual person and at the same time identify the person downloading them. This makes it possible to follow these individuals through the whole process, from download to when and where the coupons are redeemed.
Performance-based advertising – Media
Although the Internet introduced the concept of performance-based advertising, it is now spreading into other media.
Performance-based advertising – Media
The mobile telephone is increasingly used as a web browsing device, and can support both pay-per-click and pay-per-call plans
Performance-based advertising – Media
Directory assistance providers are starting to introduce advertising, particularly with “Free DA” services such as the Jingle Networks 1-800-FREE-411, the AT&T 1-800-YELLOWPAGES and the Google 1-800-GOOG-411. The advertiser pays when a caller listens to their advertisement, the equivalent of Internet CPM advertising, when they ask for additional information, or when they place a call.
Performance-based advertising – Media
IPTV promises to eventually combine features of cable television and the Internet. Viewers may see advertisements in a sidebar that are relevant to the show they are watching. They may click on an advertisement to obtain more details, and this action can be measured and used to charge the advertiser.
Performance-based advertising – Media
It is even possible to directly measure the performance of print advertising. The publisher prints a special telephone number in the advertisement, used nowhere else. When a consumer places a call to that number, the call event is recorded and the call is routed to the regular number. The call could only have been generated because of the print advertisement.
Performance-based advertising – Pricing
A publisher may charge defined prices for performance-based advertising, so much per click or call, but it is common for prices to be set through some form of “bidding” or auction arrangement. The advertiser states how much they are willing to pay for a user action, and the publisher provides feedback on how much other advertisers have offered. The actual amount paid may be lower than the amount bid, for example 1 cent more than the next highest bidder.
Performance-based advertising – Pricing
A “bidding” plan does not guarantee that the highest bidder will always be presented in the most prominent advertising slot, or will gain the most user actions. The publisher will want to earn the maximum revenue from each advertising slot, and may decide (based on actual results) that a lower bidder is likely to bring more revenue than a higher bidder – they will pay less but be selected more often.
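A minimal sketch of that selection logic, with hypothetical advertisers and numbers: candidates are ranked by bid multiplied by historical action rate (expected revenue per impression or call), and the winner pays just enough, plus one cent, to retain the top position.

# Hedged sketch: rank ads by bid * expected action rate, charge a second-price-style amount.
def select_ad(bids):
    # bids: list of (advertiser, bid_per_action, expected_action_rate)
    ranked = sorted(bids, key=lambda b: b[1] * b[2], reverse=True)
    winner, runner_up = ranked[0], ranked[1]
    # Smallest per-action price at which the winner still outranks the runner-up, plus one cent.
    price = (runner_up[1] * runner_up[2]) / winner[2] + 0.01
    return winner[0], round(min(price, winner[1]), 2)

ads = [("A", 2.00, 0.010), ("B", 3.00, 0.005), ("C", 1.50, 0.012)]
print(select_ad(ads))  # advertiser A wins and pays less than its $2.00 bid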
Performance-based advertising – Pricing
In a competitive market, with many advertisers and many publications, defined prices and bid-based prices are likely to converge on the generally accepted value of an advertising action. This presumably reflects the expected sale value and the profit that will result from the sale. An item like a hotel room or airplane seat that loses all value if not sold may be priced at a higher ratio of sale value than an item like a bag of sand or box of nails that will retain its value over time.
Performance-based advertising – Pricing
A number of companies provide products or services to help optimize the bidding process, including deciding which keywords the advertiser should bid on and which sites will give best performance.
Performance-based advertising – Issues
There is the potential for fraud in performance-based advertising.
Performance-based advertising – Issues
The publication may report excessive performance results, although a reputable publication would be unlikely to take the risk of being exposed by audit.
Performance-based advertising – Issues
A competitor may arrange for automatically generated clicks on an advertisement
Performance-based advertising – Issues
Since the user’s actions are being measured, there are serious concerns of loss of privacy.
Performance-based advertising – Issues
Dellarocas (2010) discusses a number of ways in which performance-based advertising mechanisms can be enhanced to restore efficient pricing.
mdadm – Increasing RAID ReSync Performance
In order to increase the resync speed, we can use a write-intent bitmap, which mdadm will use to mark which areas may be out of sync. Add the bitmap with the grow option:
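A typical invocation, assuming the array in question is /dev/md2 and an internal bitmap is wanted, is:
mdadm --grow /dev/md2 --bitmap=internal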
mdadm – Increasing RAID ReSync Performance
Note: mdadm v2.6.9 (10 March 2009) on CentOS 5.5 requires this to be run on a stable, "clean" array. If the array is rebuilding, the following error will be displayed:
mdadm – Increasing RAID ReSync Performance
md: couldn’t update array info. -16
mdadm – Increasing RAID ReSync Performance
Then verify that the bitmap was added to the md2 device.
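Either of these standard commands (again assuming /dev/md2) will show whether a bitmap is now present:
mdadm --detail /dev/md2
cat /proc/mdstat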
mdadm – Increasing RAID ReSync Performance
You can also adjust the Linux kernel's resync speed limits by editing the files /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max.
mdadm – Increasing RAID ReSync Performance
You can also adjust these limits with the sysctl utility (dev.raid.speed_limit_min and dev.raid.speed_limit_max).
Symantec – Application Performance Management business
On January 17, 2008, Symantec announced that it was spinning off its Application Performance Management (APM) business and the i3 product line to Vector Capital. Precise Software Solutions took over development, product management, marketing, and sales for the APM business, launching as an independent company on September 17, 2008.
Information retrieval – Performance and correctness measures
Many different measures for evaluating the performance of information retrieval systems have been proposed. The measures require a collection of documents and a query. All common measures described here assume a ground truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query. In practice queries may be ill-posed and there may be different shades of relevancy.
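As a minimal illustration, two of the most common such measures, precision and recall, can be computed directly from the retrieved set and the ground-truth relevant set (the document identifiers below are hypothetical):

# Hedged sketch: precision and recall against a ground-truth relevance judgement.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = retrieved & relevant
    precision = len(true_positives) / len(retrieved) if retrieved else 0.0
    recall = len(true_positives) / len(relevant) if relevant else 0.0
    return precision, recall

print(precision_recall(retrieved=["d1", "d2", "d3", "d7"], relevant=["d1", "d3", "d5"]))
# (0.5, 0.666...): half of what was retrieved is relevant; two of the three relevant documents were found.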
NXP Semiconductors – Focus on high-performance mixed signal and standard products
Current president and CEO Rick Clemmer took over from Frans van Houten on January 1, 2009. Clemmer has emphasized the importance of “high performance mixed signal” products as a key focus area for NXP. As of 2011, “standard products” including components such as small signal, power and integrated discretes accounted for 30 percent of NXP’s business.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
On July 26, 2010, NXP announced that it had acquired Jennic based in Sheffield, UK, which now operates as part of its Smart Home and Energy product line, offering wireless connectivity solutions based on ZigBee and JenNet-IP.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
On August 6, 2010, NXP announced its IPO at NASDAQ, with 34,000,000 shares, pricing each $14.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In December 2010, NXP announced that it would sell its Sound Solutions business to Knowles Electronics, part of Dover Corporation, for $855 million in cash. The acquisition was completed as of July 5, 2011.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In April 2012, NXP announced its intent to acquire electronic design consultancy Catena to work on automotive applications, to capitalize on growing demand for engine emissions reduction and car-to-infrastructure, car-to-car, and car-to-driver communication.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In July 2012, NXP sold its high-speed data converter assets to Integrated Device Technology.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In 2012, revenue for NXP’s Identification business unit was $986 million, up 41% from 2011, in part due to growing sales of NFC chips and secure elements.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
On January 4, 2013, NXP and Cisco announced their investment in Cohda Wireless, an Australian company focused on car-to-car and car-to-infrastructure communications.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In January 2013, NXP announced 700-900 redundancies worldwide in an effort to cut costs related to “support services”.
NXP Semiconductors – Focus on high-performance mixed signal and standard products
In May 2013, NXP announced that it had acquired Code Red Technologies, a provider of embedded software development tools such as the LPCXpresso IDE and Red Suite.
Nested RAID levels – Performance (speed)
According to manufacturer specifications and official independent benchmarks, in most cases RAID 10 provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput).
Nested RAID levels – Performance (speed)
It is the preferable RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.
Discrete event simulation – Lab test performance improvement ideas
Many systems improvement ideas are built on sound principles, proven methodologies (Lean, Six Sigma, TQM, etc.) yet fail to improve the overall system. A simulation model allows the user to understand and test a performance improvement idea in the context of the overall system.
Marketing operations – Marketing Performance Measurement
Marketing Performance Measurement should be a logical extension of the Planning and Budgeting exercise that happens before each fiscal year. The goals that are set should be measurable and personal. Every person in the Marketing organization should know what they have to do to help the function, and the company, achieve its goals. Some companies use Management By Objectives (MBOs) to incent employees to meet goals. Other companies simply use the Human Resources Performance Management process.
Marketing operations – Marketing Performance Measurement
Quarterly Operations Reviews represent another good way to monitor Marketing’s progress towards its annual goals. At a Quarterly Operations Review, a CMO typically has direct reports present on achievements relative to the goals that were set. This is a good opportunity to update goals based on information gained during the quarter that has just ended. It is also a good way for Marketing leaders to stay abreast of their peers’ efforts to increase collaboration and eliminate redundant efforts.
Performance management
Performance management (PM) includes activities which ensure that goals are consistently being met in an effective and efficient manner. Performance management can focus on the performance of an organization, a department, an employee, or even the processes used to build a product or service, as well as many other areas.
Performance management
PM is also known as a process by which organizations align their resources, systems and employees to strategic objectives and priorities.
Performance management
Performance management, as referenced on this page, is a broad term coined by Dr. Aubrey Daniels in the late 1970s to describe a technology (i.e. science embedded in application methods) for managing both behavior and results, the two critical elements of what is known as performance.
Performance management – Application
Armstrong and Baron (1998) defined it as a “strategic and integrated approach to increase the effectiveness of companies by improving the performance of the people who work in them and by developing the capabilities of teams and individual contributors.”
Performance management – Application
It may be possible to get all employees to reconcile personal goals with organizational goals and increase productivity and profitability of an organization using this process. It can be applied by organizations or a single department or section inside an organization, as well as an individual person. The performance process is appropriately named the self-propelled performance process (SPPP).
Performance management – Application
First, a commitment analysis must be done where a job mission statement is drawn up for each job. The job mission statement is a job definition in terms of purpose, customers, product and scope. The aim with this analysis is to determine the continuous key objectives and performance standards for each job position.
Performance management – Application
Following the commitment analysis is the work analysis of a particular job in terms of the reporting structure and job description. If a job description is not available, then a systems analysis can be done to draw up a job description. The aim with this analysis is to determine the continuous critical objectives and performance standards for each job.
Performance management – Benefits
Managing employee or system performance and aligning their objectives facilitates the effective delivery of strategic and operational goals. There is a clear and immediate correlation between using performance management programs or software and improved business and organizational results.
Performance management – Benefits
For employee performance management, using integrated software rather than a spreadsheet-based recording system may deliver a significant return on investment through a range of direct and indirect sales benefits, operational efficiency benefits, and by unlocking the latent potential in every employee's work day (i.e. the time they spend not actually doing their job). Benefits may include:
Performance management – Benefits
Reduce costs in the organization
Performance management – Benefits
Decreases the time it takes to create strategic or operational changes by communicating the changes through a new set of goals
Performance management – Benefits
Optimizes incentive plans to specific goals for over-achievement, not just business as usual
Performance management – Benefits
Improves employee engagement because everyone understands how they are directly contributing to the organization's high-level goals
Performance management – Benefits
High confidence in bonus payment process
Performance management – Benefits
Professional development programs are better aligned directly to achieving business level goals
Performance management – Benefits
Helps audit / comply with legislative requirements
Performance management – Benefits
Simplifies communication of strategic goals and scenario planning
Performance management – Benefits
Provides well documented and communicated process documentation
Performance management – Organizational Development
In organizational development (OD), performance can be thought of as Actual Results vs Desired Results. Any discrepancy, where Actual is less than Desired, could constitute the performance improvement zone. Performance management and improvement can be thought of as a cycle:
Performance management – Organizational Development
Performance coaching where a manager intervenes to give feedback and adjust performance
Performance management – Organizational Development
Performance appraisal where individual performance is formally documented and feedback delivered
Performance management – Organizational Development
A performance problem is any gap between Desired Results and Actual Results. Performance improvement is any effort targeted at closing the gap between Actual Results and Desired Results.
Performance management – Organizational Development
Other organizational development definitions are slightly different. The U.S. Office of Personnel Management (OPM) indicates that Performance Management consists of a system or process whereby:
Performance management – Organizational Development
Performance is rated or measured and the ratings summarized
Performance management – Implementation
Erica Olsen notes that “Many businesses, even those with well-made plans, fail to implement their strategy. Their problem lies in ineffectively managing their employees once their plan is in place. Sure, they’ve conducted surveys, collected data, gone on management retreats to decide on their organization’s direction– even purchased expensive software to manage their process– but somewhere their plan fails.”
Performance management – Long-cycle Performance Management
Long-cycle performance management is usually done on an annual, six-month, or quarterly basis. From an implementation standpoint, this is the area that has traditionally received the most attention, largely for historical reasons: most performance management techniques and styles predate the use of computers.
Performance management – Short-cycle Performance Management
Short-cycle performance management (which overlaps with the principles of agile software development) is usually done on a weekly, bi-weekly, or monthly basis. From an implementation standpoint, this sort of management is industry-specific.
Performance management – Micro Performance Management
Micro performance management is generally done on a minute-by-minute, hourly, or daily basis.
Performance management – Further reading
Business Intelligence and Performance Management: Theory, Systems, and Industrial Applications, P. Rausch, A. Sheta, A. Ayesh (Eds.), Springer Verlag U.K., 2013, ISBN 978-1-4471-4865-4.
Performance management – Further reading
Performance Management: Changing Behavior That Drives Organizational Effectiveness, 4th ed., Dr. Aubrey C. Daniels. Performance Management Publications, 1981, 1984, 1989, 2006. ISBN 0-937100-08-0
Performance management – Further reading
Performance Management – Integrating Strategy Execution, Methodologies, Risk, and Analytics. Gary Cokins, John Wiley & Sons, Inc. 2009. ISBN 978-0-470-44998-1
Performance management – Further reading
Journal of Organizational Behavior Management, Routledge Taylor & Francis Group. Published quarterly. 2009.
Performance management – Further reading
Handbook of Organizational Performance, Thomas C. Mawhinney, William K. Redmon & Carl Merle Johnson. Routledge. 2001.
Performance management – Further reading
Bringing out the Best in People, Aubrey C. Daniels. McGraw-Hill; 2nd edition. 1999. ISBN 978-0071351454
Performance management – Further reading
Improving Performance: How to Manage the White Space in the Organization Chart, Geary A. Rummler & Alan P. Brache. Jossey-Bass; 2nd edition. 1995.
Performance management – Further reading
Human Competence: Engineering Worthy Performance, Thomas F. Gilbert. Pfeiffer. 1996.
Performance management – Further reading
The Values-Based Safety Process: Improving Your Safety Culture with Behavior-Based Safety, Terry E. McSween. John Wiley & Sons. 1995.
Performance management – Further reading
Performance-based Instruction: Linking Training to Business Results, Dale Brethower & Karolyn Smalley. Pfeiffer; Har/Dis edition. 1998.
Performance management – Further reading
Handbook of Applied Behavior Analysis, John Austin & James E. Carr. Context Press. 2000.
Mergers and acquisitions – Improving financial performance
The dominant rationale used to explain M&A activity is that acquiring firms seek improved financial performance. The following motives are considered to improve financial performance:
Mergers and acquisitions – Improving financial performance
Economy of scale: This refers to the fact that the combined company can often reduce its fixed costs by removing duplicate departments or operations, lowering the costs of the company relative to the same revenue stream, thus increasing profit margins.
Mergers and acquisitions – Improving financial performance
Economy of scope: This refers to the efficiencies primarily associated with demand-side changes, such as increasing or decreasing the scope of marketing and distribution, of different types of products.
Mergers and acquisitions – Improving financial performance
Increased revenue or market share: This assumes that the buyer will be absorbing a major competitor and thus increase its market power (by capturing increased market share) to set prices.
Mergers and acquisitions – Improving financial performance
Cross-selling: For example, a bank buying a stock broker could then sell its banking products to the stock broker’s customers, while the broker can sign up the bank’s customers for brokerage accounts. Or, a manufacturer can acquire and sell complementary products.
Mergers and acquisitions – Improving financial performance
Synergy: For example, managerial economies such as the increased opportunity of managerial specialization. Another example is purchasing economies due to increased order size and associated bulk-buying discounts.
Mergers and acquisitions – Improving financial performance
Taxation: A profitable company can buy a loss maker and use the target's losses to its advantage by reducing its tax liability. In the United States and many other countries, rules are in place to limit the ability of profitable companies to "shop" for loss-making companies, limiting the tax motive of an acquiring company.
Mergers and acquisitions – Improving financial performance
Geographical or other diversification: this is designed to smooth the earnings results of a company, which over the long term smooths the company's stock price, giving conservative investors more confidence in investing in it. However, this does not always deliver value to shareholders.
Mergers and acquisitions – Improving financial performance
Resource transfer: resources are unevenly distributed across firms (Barney, 1991) and the interaction of target and acquiring firm resources can create value through either overcoming information asymmetry or by combining scarce resources.
Mergers and acquisitions – Improving financial performance
Vertical integration: Vertical integration occurs when an upstream and downstream firm merge (or one acquires the other)
Mergers and acquisitions – Improving financial performance
Hiring: some companies use acquisitions as an alternative to the normal hiring process. This is especially common when the target is a small private company or is in the startup phase. In this case, the acquiring company simply hires ("acqui-hires") the staff of the target private company, thereby acquiring its talent (if that is its main asset and appeal). The target private company simply dissolves and few legal issues are involved.
Mergers and acquisitions – Improving financial performance
Absorption of similar businesses under single management: for example, when two mutual funds with similar portfolios, the United Money Market Fund and the United Growth and Income Fund, were managed together, the management absorbed the United Money Market Fund into the United Growth and Income Fund.
Software bug – Performance bugs
Excessively high computational complexity of an algorithm.
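For example, the sketch below (illustrative, not from the original text) shows the same duplicate-detection task written with a quadratic algorithm and with a linear one; on large inputs only the second is usable:

# Hedged sketch: a performance bug caused purely by algorithmic complexity.
def has_duplicates_quadratic(items):
    # O(n^2): compares every pair; fine for tiny inputs, pathological for large ones.
    return any(items[i] == items[j]
               for i in range(len(items))
               for j in range(i + 1, len(items)))

def has_duplicates_linear(items):
    # O(n): one set lookup per element gives the same answer with far less work.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

data = list(range(20000)) + [0]      # worst case for the quadratic version
print(has_duplicates_linear(data))   # True, effectively instant
# (calling has_duplicates_quadratic(data) on the same input would take orders of magnitude longer)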
Burn down chart – Measuring performance
Actual work line above the ideal work line: there is more work left than originally predicted, and the project is behind schedule.
Burn down chart – Measuring performance
Actual work line below the ideal work line: there is less work left than originally predicted, and the project is ahead of schedule.
Burn down chart – Measuring performance
This is only one way of interpreting the shape of the burn down chart; there are others.
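A minimal sketch of that comparison, with hypothetical numbers: the ideal line falls linearly from the total estimated work to zero, and the sign of the gap between actual remaining work and the ideal line indicates whether the project is behind or ahead of schedule.

# Hedged sketch: comparing actual remaining work against the ideal burn-down line.
def ideal_remaining(total_work, total_days, day):
    return total_work * (1 - day / total_days)

total_work, total_days = 100.0, 10           # hypothetical story points and sprint length
actual_remaining = {3: 75.0, 6: 35.0}        # hypothetical measurements on days 3 and 6

for day, actual in actual_remaining.items():
    ideal = ideal_remaining(total_work, total_days, day)
    status = "behind schedule" if actual > ideal else "ahead of schedule (or on track)"
    print(f"day {day}: ideal {ideal:.0f}, actual {actual:.0f} -> {status}")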
Earned value management – Simple implementations (emphasizing only technical performance)
The first step is to define the work
Earned value management – Simple implementations (emphasizing only technical performance)
The second step is to assign a value, called planned value (PV), to each activity
Earned value management – Simple implementations (emphasizing only technical performance)
The third step is to define “earning rules” for each activity
Earned value management – Simple implementations (emphasizing only technical performance)
In fact, waiting to update EV only once per month (simply because that is when cost data are available) only detracts from a primary benefit of using EVM, which is to create a technical performance scoreboard for the project team.
Earned value management – Simple implementations (emphasizing only technical performance)
If these three home construction projects were measured with the same PV valuations, the relative schedule performance of the projects can be easily compared.
Earned value management – Intermediate implementations (integrating technical and schedule performance)
A second layer of EVM skill can be very helpful in managing the schedule performance of these “intermediate” projects
Earned value management – Intermediate implementations (integrating technical and schedule performance)
However, EVM schedule performance, as illustrated in Figure 2 provides an additional indicator — one that can be communicated in a single chart
Earned value management – Intermediate implementations (integrating technical and schedule performance)
Although such intermediate implementations do not require units of currency (e.g., dollars), it is common practice to use budgeted dollars as the scale for PV and EV. It is also common practice to track labor hours in parallel with currency. The following EVM formulas are for schedule management, and do not require accumulation of actual cost (AC). This is important because it is common in small and intermediate size projects for true costs to be unknown or unavailable.
Earned value management – Intermediate implementations (integrating technical and schedule performance)
SV greater than 0 is good (ahead of schedule). The SV will be 0 at project completion because then all of the planned values will have been earned.
Earned value management – Intermediate implementations (integrating technical and schedule performance)
However, schedule variance (SV) measured through the EVM method is indicative only. To know whether a project is really behind or ahead of schedule (for on-time completion), the project manager has to perform critical path analysis based on the precedence and inter-dependencies of the project activities.
Earned value management – Intermediate implementations (integrating technical and schedule performance)
SPI greater than 1 is good (ahead of schedule).
Earned value management – Intermediate implementations (integrating technical and schedule performance)
See also earned schedule for a description of known limitations in SV and SPI formulas and an emerging practice for correcting these limitations.
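For reference, the conventional definitions behind these indicators (standard EVM formulas, stated here rather than quoted from the text above) are SV = EV - PV and SPI = EV / PV, where PV is the planned value and EV the earned value accumulated to date.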
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
To measure cost performance, planned value (or BCWS, Budgeted Cost of Work Scheduled) and earned value (or BCWP, Budgeted Cost of Work Performed) must be in units of currency (the same units in which actual costs are measured). In large implementations, the planned value curve is commonly called a Performance Measurement Baseline (PMB) and may be arranged in control accounts, summary-level planning packages, planning packages, and work packages.
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
In the United States, the primary standard for full-featured EVM systems is the ANSI/EIA-748A standard, published in May 1998 and reaffirmed in August 2002. The standard defines 32 criteria for full-featured EVM system compliance. As of the year 2007, a draft of ANSI/EIA-748B, a revision to the original is available from ANSI. Other countries have established similar standards.
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
In addition to using BCWS and BCWP, implementations prior to 1998 often used the term Actual Cost of Work Performed (ACWP) instead of AC. Additional acronyms and formulas include:
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
Budget at completion (BAC): The total planned value (PV or BCWS) at the end of the project. If a project has a Management Reserve (MR), it is typically not included in the BAC, and respectively, in the Performance Measurement Baseline.
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
CV greater than 0 is good (under budget).
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
CPI greater than 1 is good (under budget):
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning.
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
The TCPI provides a projection of the anticipated performance required to achieve either the BAC or the EAC. TCPI indicates the future required cost efficiency needed to achieve a target BAC (Budget At Complete) or EAC (Estimate At Complete). Any significant difference between CPI, the cost performance to date, and the TCPI, the cost performance needed to meet the BAC or the EAC, should be accounted for by management in their forecast of the final cost.
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
The TCPI can be computed either against the BAC (describing the performance required to meet the original budgeted total) or against the EAC (describing the performance required to meet a new, revised budget total).
Earned value management – Advanced implementations (integrating cost, schedule and technical performance)
The IEAC is a metric to project total cost using the performance to date to project overall performance. This can be compared to the EAC, which is the manager’s projection.
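A minimal sketch pulling the cost-side indicators together, using the conventional EVM formulas (the figures passed in below are hypothetical):

# Hedged sketch: conventional EVM cost metrics from earned value, actual cost and budgets.
def evm_cost_metrics(ev, ac, bac, eac=None):
    cv = ev - ac                                          # cost variance (> 0 means under budget)
    cpi = ev / ac                                         # cost performance index (> 1 means under budget)
    tcpi_bac = (bac - ev) / (bac - ac)                    # efficiency needed to finish on the original BAC
    tcpi_eac = (bac - ev) / (eac - ac) if eac is not None else None  # efficiency needed to hit a revised EAC
    ieac = bac / cpi                                      # independent estimate at completion
    return {"CV": cv, "CPI": cpi, "TCPI_BAC": tcpi_bac, "TCPI_EAC": tcpi_eac, "IEAC": ieac}

print(evm_cost_metrics(ev=450.0, ac=480.0, bac=1000.0, eac=1100.0))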
Earned value management – Schedule Performance
The use of the SPI in EVM for forecasting schedule performance problems is rather limited, because it depends on earned value being completed on the Critical Time Path (CTP).
Earned value management – Schedule Performance
Because Agile EVM is used in a complex environment, any earned value is more likely to be on the CTP. The latest estimate for the number of fixed time intervals can be calculated in Agile EVM as:
Earned value management – Schedule Performance
Initial Duration in number of fixed time intervals / SPI; or
Earned value management – Schedule Performance
Latest Estimate in total number of Story Points / Velocity. Both formulas are sketched below.
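Both latest-estimate formulas are simple ratios; the Python sketch below only illustrates the arithmetic described above, with invented example numbers.

```python
# Sketch of the two Agile EVM latest-estimate formulas given above.

def latest_estimate_intervals(initial_intervals, spi):
    """Latest estimate of duration in fixed time intervals (e.g. sprints)."""
    return initial_intervals / spi

def latest_estimate_from_velocity(total_story_points, velocity):
    """Latest estimate of duration from total scope and observed velocity."""
    return total_story_points / velocity

# Example: a 10-sprint plan with SPI = 0.8 forecasts 12.5 sprints;
# 300 story points at a velocity of 25 points per sprint forecasts 12 sprints.
print(latest_estimate_intervals(10, 0.8))       # 12.5
print(latest_estimate_from_velocity(300, 25))   # 12.0
```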
Performance engineering
Performance engineering, within systems engineering, encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle to ensure that a solution will be designed, implemented, and operationally supported to meet the non-functional performance requirements defined for the solution.
Performance engineering
As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.
Performance engineering
Performance engineering encompasses more than just the software and its supporting infrastructure, which is why the broader term is preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).
Performance engineering
Performance engineering has become a separate discipline at a number of large corporations, with tasking separate from but parallel to systems engineering. It is pervasive, involving people from multiple organizational units, but predominantly within the information technology organization.
Performance engineering – Performance engineering objectives
Increase business revenue by ensuring the system can process transactions within the requisite timeframe
Performance engineering – Performance engineering objectives
Eliminate system failures that require scrapping the system and writing off the development effort because performance objectives were not met
Performance engineering – Performance engineering objectives
Eliminate late system deployment due to performance issues
Performance engineering – Performance engineering objectives
Eliminate avoidable system rework due to performance issues
Performance engineering – Performance engineering objectives
Avoid additional and unnecessary hardware acquisition costs
Performance engineering – Performance engineering objectives
Reduce the increased software maintenance costs caused by performance problems in production
Performance engineering – Performance engineering objectives
Reduce additional operational overhead for handling system issues due to performance problems
Performance engineering – Performance engineering approach
Because this discipline is applied within multiple methodologies, the following activities will occur within differently specified phases. However, if the phases of the Rational Unified Process (RUP) are used as a framework, the activities will occur as follows:
Performance engineering – Inception
During this first conceptual phase of a program or project, critical business processes are identified. Typically they are classified as critical based upon revenue value, cost savings, or other assigned business value. This classification is done by the business unit, not the IT organization.
Performance engineering – Inception
High level risks that may impact system performance are identified and described at this time. An example might be known performance risks for a particular vendor system.
Performance engineering – Inception
Finally, performance activities, roles, and deliverables are identified for the Elaboration phase. Activities and resource loading are incorporated into the Elaboration phase project plans.
Performance engineering – Elaboration
During this defining phase, the critical business processes are decomposed to critical use cases. Such use cases will be decomposed further, as needed, to single page (screen) transitions. These are the use cases that will be subjected to script driven performance testing.
Performance engineering – Elaboration
The type of requirements that relate to performance engineering are the non-functional requirements, or NFRs. While a functional requirement specifies which business operations are to be performed, a performance-related non-functional requirement specifies how fast that business operation performs under defined circumstances.
Performance engineering – Elaboration
The concept of “defined circumstances” is vital. For example:
Performance engineering – Elaboration
Invalid – the system should respond to user input within 10 seconds.
Performance engineering – Elaboration
Valid – for use case ABC the system will respond to a valid user entry within 5 seconds for a median load of 250 active users and 2000 logged in users 95% of the time; or within 10 seconds for a peak load of 500 active users and 4000 logged in users 90% of the time.
Performance engineering – Elaboration
Testers may build a reliable performance test for the second example, but not for the invalid example.
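To make the distinction concrete, the sketch below checks a set of measured response times against a percentile-based NFR like the valid example above (5 seconds at the 95th percentile). The sample data, the nearest-rank percentile method, and the function names are illustrative assumptions, not part of any standard test harness.

```python
# Sketch: check measured response times against a percentile-based NFR.
import math

def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (seconds)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_nfr(samples, threshold_s, pct=95):
    """True if the pct-th percentile response time is within the threshold."""
    return percentile(samples, pct) <= threshold_s

# Illustrative measurements for one use case under the median load profile.
response_times = [1.2, 2.4, 3.1, 4.0, 4.8, 2.2, 3.7, 4.9, 5.6, 2.9]
print(percentile(response_times, 95))               # observed 95th percentile
print(meets_nfr(response_times, threshold_s=5.0))   # compliance with the 5-second NFR
```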
Performance engineering – Elaboration
Each critical use case must have an associated NFR. If, for a given use case, no existing NFR is applicable, a new NFR specific to that use case must be created.
Performance engineering – Elaboration
Non-functional requirements are not limited to use cases.
Performance engineering – Elaboration
The system volumetrics documented in the NFR documentation will be used as inputs for both load testing and stress testing of the system during the performance test. Computer scientists have been using all kinds of approaches, e.g., queueing theory, to develop performance evaluation models.
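As one illustration of the queueing-theory approach mentioned above, the following sketch evaluates a simple M/M/1 model (a textbook single-server queue, not something prescribed by the source); the arrival and service rates are made-up example figures.

```python
# Illustrative M/M/1 queueing model, one standard approach to the kind of
# performance evaluation modeling mentioned above.

def mm1_metrics(arrival_rate, service_rate):
    """Utilization, mean residence time, and mean number in system for M/M/1."""
    rho = arrival_rate / service_rate           # server utilization (must be < 1)
    if rho >= 1:
        raise ValueError("System is unstable: arrival rate >= service rate")
    residence_time = 1 / (service_rate - arrival_rate)  # mean time in system
    in_system = rho / (1 - rho)                          # mean number of requests in system
    return rho, residence_time, in_system

# Example: 80 requests/s offered to a server that can handle 100 requests/s.
rho, r, n = mm1_metrics(arrival_rate=80, service_rate=100)
print(f"utilization={rho:.0%}, mean response={r * 1000:.0f} ms, mean in system={n:.1f}")
# utilization=80%, mean response=50 ms, mean in system=4.0
```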
Performance engineering – Elaboration
At this point it is suggested that performance modeling be performed using the use case information as input. This may be done in a performance lab, using prototypes and mockups of the “to be” system; with a vendor-provided modeling tool; or even with a spreadsheet workbook, where each use case is modeled in a single sheet and a summary sheet provides high-level information for all of the use cases.
Performance engineering – Elaboration
It is recommended that Unified Modeling Language sequence diagrams be generated at the physical tier level for each use case. The physical tiers are represented by the vertical object columns, and the message communication between the tiers by the horizontal arrows. Timing information should be associated with each horizontal arrow; this should correlate with the performance model.
Performance engineering – Elaboration
Some performance engineering activities related to performance testing should be executed in this phase. They include validating a performance test strategy, developing a performance test plan, determining the sizing of test data sets, developing a performance test data plan, and identifying performance test scenarios.
Performance engineering – Elaboration
For any system of significant impact, a monitoring plan and a monitoring design are developed in this phase. Performance engineering applies a subset of activities related to performance monitoring, both for the performance test environment as well as for the production environment.
Performance engineering – Elaboration
The risk document generated in the previous phase is revisited here. A risk mitigation plan is determined for each identified performance risk, and the time, cost, and responsibility are determined and documented.
Performance engineering – Elaboration
Finally, performance activities, roles, and deliverables are identified for the Construction phase. Activities and resource loading are incorporated into the Construction phase project plans. These will be elaborated for each iteration.
Performance engineering – Construction
Early in this phase a number of performance tool related activities are required. These include:
Performance engineering – Construction
Identify key development team members as subject matter experts for the selected tools
Performance engineering – Construction
Specify a profiling tool for the development/component unit test environment
Performance engineering – Construction
Specify an automated unit (component) performance test tool for the development/component unit test environment; this is used when no GUI yet exists to drive the components under development
Performance engineering – Construction
Specify an automated tool for driving server-side unit (components) for the development/component unit test environment
Performance engineering – Construction
Specify an automated multi-user capable script-driven end-to-end tool for the development/component unit test environment; this is used to execute screen-driven use cases
Performance engineering – Construction
Identify a database test data load tool for the development/component unit test environment; this is required to ensure that the database optimizer chooses correct execution paths and to enable reinitializing and reloading the database as needed
Performance engineering – Construction
Presentations and training must be given to development team members on the selected tools
Performance engineering – Construction
A member of the performance engineering practice and the development technical team leads should work together to identify performance-oriented best practices for the development team. Ideally the development organization should already have a body of best practices, but often these do not include or emphasize those best practices that impact system performance.
Performance engineering – Construction
The concept of application instrumentation should be introduced here with the participation of the IT monitoring organization. Several vendor monitoring systems have performance capabilities; these normally operate at the operating system, network, and server levels, e.g. CPU utilization, memory utilization, disk I/O, and, for J2EE servers, JVM performance including garbage collection.
Performance engineering – Construction
But this type of monitoring does not permit the tracking of use-case-level performance.
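Use-case-level tracking therefore typically requires instrumentation inside the application itself. The sketch below shows one hypothetical way to do this in Python, timing each use-case invocation with a decorator and writing the elapsed time to a log; the use-case name, logger name, and logging destination are all illustrative.

```python
# Sketch: lightweight use-case-level instrumentation, the kind of application-level
# timing that OS/JVM-level monitoring alone does not provide.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("usecase.performance")

def instrument(use_case_name):
    """Decorator that records elapsed wall-clock time per use-case execution."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("use_case=%s elapsed_ms=%.1f", use_case_name, elapsed_ms)
        return wrapper
    return decorator

@instrument("checkout")
def checkout(order_id):
    time.sleep(0.05)  # stand-in for real business logic
    return f"order {order_id} confirmed"

print(checkout(42))
```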
Performance engineering – Construction
Then, as the performance test team starts to gather data, it should commence tuning the environment more specifically for the system to be deployed.
Performance engineering – Construction
The data gathered, and the analyses, will be fed back to the group that does performance tuning
Performance engineering – Construction
However, if for some reason (perhaps proper performance engineering working practices were not applied) there are tests that cannot be tuned into compliance, then it will be necessary to return portions of the system to development for refactoring
Performance engineering – Construction
For example: suppose we can improve 70% of a module by parallelizing it and run it on 4 CPUs instead of 1 CPU. If s is the fraction of the calculation that is sequential, and (1 - s) is the fraction that can be parallelized, then the maximum speedup that can be achieved by using P processors is given by Amdahl's Law: Speedup = 1 / (s + (1 - s) / P).
Performance engineering – Construction
In this example we would get: 1 / (0.3 + (1 - 0.3)/4) = 2.105. So for quadrupling the processing power we have only doubled the performance (from 1 to 2.105), and we are already well on the way to diminishing returns. If we go on to double the computing power again, from 4 to 8 processors, we get 1 / (0.3 + (1 - 0.3)/8) = 2.581. So by doubling the processing power again we only gained a performance improvement of about one fifth (from 2.105 to 2.581).
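The same arithmetic can be reproduced directly; the short Python sketch below evaluates Amdahl's Law for the 70%-parallelizable example above.

```python
# Sketch reproducing the Amdahl's Law arithmetic above.

def amdahl_speedup(serial_fraction, processors):
    """Maximum speedup when serial_fraction of the work cannot be parallelized."""
    return 1 / (serial_fraction + (1 - serial_fraction) / processors)

for p in (1, 4, 8, 16):
    print(f"{p} CPUs -> speedup {amdahl_speedup(0.3, p):.3f}")
# 4 CPUs -> 2.105, 8 CPUs -> 2.581: each doubling of processors buys less.
```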
Performance engineering – Transition
During this final phase the system is deployed to the production environment. A number of preparatory steps are required. These include:
Performance engineering – Transition
Configuring the operating systems, network, servers (application, web, database, load balancer, etc.), and any message queueing software according to the base checklists and the optimizations identified in the performance test environment
Performance engineering – Transition
Ensuring all performance monitoring software is deployed and configured
Performance engineering – Transition
Running statistics on the database after the production data load is completed
Performance engineering – Transition
Once the new system is deployed, ongoing operations pick up performance activities, including:
Performance engineering – Transition
Validating that weekly and monthly performance reports indicate that critical use cases perform within the specified non functional requirement criteria
Performance engineering – Transition
Where use cases are falling outside of NFR criteria, submit defects
Performance engineering – Transition
Identify projected trends from monthly and quarterly reports, and on a quarterly basis, execute capacity planning management activities
Performance engineering – Service management
In the operational domain (post production deployment) performance engineering focuses primarily within three areas: service level management, capacity management, and problem management.
Performance engineering – Service level management
In the service level management area, performance engineering is concerned with service level agreements and the associated systems monitoring that serves to validate service level compliance, detect problems, and identify trends
Performance engineering – Capacity management
Capacity management is charged with ensuring that additional capacity (additional CPUs, more memory, new database indexing, et cetera) is added before the trend lines reach the specified performance limits, so that the trend lines are reset and the system remains within the specified performance range.
Performance engineering – Problem management
Within the problem management domain, the performance engineering practices are focused on resolving the root cause of performance related problems. These typically involve system tuning, changing operating system or device parameters, or even refactoring the application software to resolve poor performance due to poor design or bad coding practices.
Performance engineering – Monitoring
To ensure that there is proper feedback validating that the system meets the NFR specified performance metrics, any major system needs a monitoring subsystem. The planning, design, installation, configuration, and control of the monitoring subsystem is specified by an appropriately defined Monitoring Process. The benefits are as follows:
Performance engineering – Monitoring
It is possible to establish service level agreements at the use case level.
Performance engineering – Monitoring
It is possible to turn on and turn off monitoring at periodic points or to support problem resolution.
Performance engineering – Monitoring
It enables the generation of regular reports.
Performance engineering – Monitoring
It enables the ability to track trends over time – such as the impact of increasing user loads and growing data sets on use case level performance.
Performance engineering – Monitoring
The trend analysis component of this should not be undervalued. This functionality, properly implemented, will enable predicting when a given application, undergoing gradually increasing user loads and growing data sets, will exceed the specified non-functional performance requirements for a given use case. This permits management to properly budget for, acquire, and deploy the resources required to keep the system running within the parameters of the non-functional performance requirements.
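As an illustration of this kind of trend analysis, the sketch below fits a least-squares line to weekly 95th-percentile response times and estimates when a use case would breach its performance requirement. The weekly figures and the 5-second threshold are invented for the example.

```python
# Sketch of the trend analysis described above: fit a linear trend to weekly
# 95th-percentile response times and estimate when a use case breaches its NFR.

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

weeks = [1, 2, 3, 4, 5, 6]
p95_response_s = [2.1, 2.3, 2.6, 2.8, 3.1, 3.4]   # weekly monitoring reports (illustrative)
slope, intercept = linear_fit(weeks, p95_response_s)

threshold_s = 5.0                                  # illustrative NFR threshold
breach_week = (threshold_s - intercept) / slope
print(f"Projected NFR breach around week {breach_week:.0f}")
```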
Performance engineering – Further reading
Practical Performance Analyst – Performance Engineering Community & Body Of Knowledge
Performance engineering – Further reading
A Performance Process Maturity Model
Performance engineering – Further reading
Exploring UML for Performance Engineering
Performance engineering – Further reading
Introduction to Modeling Based Performance Engineering
Performance engineering – Further reading
Performance and Scalability of Distributed Software Architectures
Performance engineering – Further reading
The Vicious Cycle of Computer Systems Performance and IT Operational Costs
Performance engineering – Further reading
Gathering Performance Requirements
Hyper-V – Degraded performance for Windows XP VMs
Windows XP frequently accesses the CPU’s APIC task-priority register (TPR) when the interrupt request level changes, causing performance degradation when it runs as a guest on Hyper-V. Microsoft has fixed this problem in Windows Server 2003 and later.
Hyper-V – Degraded performance for Windows XP VMs
Intel added TPR virtualization (FlexPriority) to VT-x from Intel Core 2 stepping E onwards to alleviate this problem. AMD has a similar feature in AMD-V but uses a new register for the purpose, which means the guest has to use different instructions to access the new register. AMD provides a driver called “AMD-V Optimization Driver”, which has to be installed in the guest, to do that.
Performance metric
In project management, performance metrics are used to assess the health of the project and consist of measuring seven criteria: safety, time, cost, resources, scope, quality, and actions.
Performance metric
Developing performance metrics usually follows a process of:
Performance metric
Establishing critical processes/customer requirements
Performance metric
Identifying specific, quantifiable outputs of work
Performance metric
Establishing targets against which results can be scored
Performance metric
A criticism of performance metrics is that when the value of information is computed using mathematical methods, it shows that even performance metrics professionals choose measures that have little value. This is referred to as the “measurement inversion”. For example, metrics seem to emphasize what organizations find immediately measurable — even if those are low value — and tend to ignore high value measurements simply because they seem harder to measure (whether they are or not).
Performance metric
To correct for the measurement inversion other methods, like applied information economics, introduce the “value of information analysis” step in the process so that metrics focus on high-value measures. Organizations where this has been applied find that they define completely different metrics than they otherwise would have and, often, fewer metrics.
Performance metric
There are a variety of ways in which organizations may react to results. This may be to trigger specific activity relating to performance (i.e., an improvement plan) or to use the data merely for statistical information. Often closely tied in with outputs, performance metrics should usually encourage improvement, effectiveness and appropriate levels of control.
Performance metric
Performance metrics are often linked in with corporate strategy and are often derived in order to measure performance against a critical success factor.
VMware ESX – Performance limitations
In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in Operating System calls. In an unmodified Operating System, OS calls introduce the greatest portion of virtualization “overhead”.
VMware ESX – Performance limitations
Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected Operating Systems currently support this. A comparison between full virtualization and paravirtualization for the ESX Server shows that in some cases paravirtualization is much faster.
Server Message Block – WAN performance issues
Microsoft has explained that the performance issues arise primarily because SMB 1.0 is a block-level rather than a streaming protocol and was originally designed for small LANs: its block size is limited to 64 KB, SMB signing creates additional overhead, and the TCP window size is not optimized for WAN links.
Procurement – Procurement performance
The report includes the main procurement performance and operational benchmarks that procurement leaders use to gauge the success of their organizations
HP Application Lifecycle Management – HP Performance Center
HP Performance Center software is an enterprise-class performance testing platform and framework. The solution is used by IT departments to standardize, centralize and conduct performance testing. HP Performance Center finds software code flaws across the lifecycle of applications. Built on HP LoadRunner software, HP Performance Center supports developer testing and integrates with HP Application Lifecycle Management.
Business transaction management – Relationship to application performance management
BTM is sometimes categorized as a form of application performance management (APM) or monitoring
History of Apple Inc. – Corporate performance
Under leadership of John Sculley, Apple issued its first corporate stock dividend on May 11, 1987. A month later on June 16, Apple stock split for the first time in a 2:1 split. Apple kept a quarterly dividend with about 0.3% yield until November 21, 1995. Between March 1988 and January 1989, Apple undertook five acquisitions, including software companies Network Innovations, Styleware, Nashoba Systems, and Coral Software, as well as satellite communications company Orion Network Systems.
History of Apple Inc. – Corporate performance
Apple continued to sell both lines of its computers, the Apple II and the Macintosh. A few months after introducing the Mac, Apple released a compact version of the Apple II called the Apple IIc. And in 1986 Apple introduced the Apple IIgs, an Apple II positioned as something of a hybrid product with a mouse-driven, Mac-like operating environment. Even with the release of the first Macintosh, Apple II computers remained the main source of income for Apple for years.
Windows Live OneCare – Performance
Windows Live OneCare Performance Plus is the component that performs monthly PC tune-up related tasks, such as:
Windows Live OneCare – Performance
Disk cleanup and defragmentation.
Windows Live OneCare – Performance
A full virus scan using the anti-virus component in the suite.
Windows Live OneCare – Performance
User notification if files are in need of backing up.
Windows Live OneCare – Performance
Check for Windows updates by using the Microsoft Update service.
Norton 360 – Performance and protection capabilities
Many other reputable sources like Dennis Technology Labs confirm the performance and effectiveness of Norton 2011 and 2012 lines.
Firewall (construction) – Performance based design
Firewalls used in different applications may require different design and performance specifications.
Firewall (construction) – Performance based design
Performance based design takes into account the potential conditions during a fire. Understanding thermal limitations of materials is essential to using the correct material for the application.
ZoneAlarm Z100G – Performance
Firewall Throughput – 70 Mbit/s
ZoneAlarm Z100G – Performance
VPN Throughput – 5 Mbit/s (AES)
ZoneAlarm Z100G – Performance
Concurrent Firewall Connections – 4,000
Real-time computing – Real-time and high-performance
Therefore, the most important requirement of a real-time system is predictability and not performance.
Real-time computing – Real-time and high-performance
High performance is indicative of the amount of processing performed in a given amount of time, while real-time is the ability to complete the processing and yield a useful output within the available time.
Algorithmic efficiency – Benchmarking: measuring performance
Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance
Algorithmic efficiency – Benchmarking: measuring performance
Some benchmarks provide opportunities for analyses comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.
Algorithmic efficiency – Benchmarking: measuring performance
Even creating “do it yourself” benchmarks, to get at least some appreciation of the relative performance of different programming languages using a variety of user-specified criteria, is quite simple, as the “Nine language Performance roundup” by Christopher W. Cowell-Shah demonstrates by example.
Software performance testing
In software engineering, performance testing is in general testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.
Software performance testing
Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design and architecture of a system.
Software performance testing – Load testing
Load testing is the simplest form of performance testing
Software performance testing – Stress testing
Stress testing is normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system’s robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum.
Software performance testing – Soak testing
Also important, but often overlooked, is performance degradation.
Software performance testing – Spike testing
Spike testing is done by suddenly increasing the number of users, or the load generated by them, by a very large amount and observing the behaviour of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.
Software performance testing – Configuration testing
Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system’s components on the system’s performance and behaviour. A common example would be experimenting with different methods of load-balancing.
Software performance testing – Isolation testing
Isolation testing is not unique to performance testing but involves repeating a test execution that resulted in a system problem. Often used to isolate and confirm the fault domain.
Software performance testing – Setting performance goals
Performance testing can serve different purposes.
Software performance testing – Setting performance goals
Or it can measure which parts of the system or workload cause the system to perform badly.
Software performance testing – Setting performance goals
Many performance tests are undertaken without due consideration to the setting of realistic performance goals. The first question from a business perspective should always be “why are we performance testing?”. These considerations are part of the business case of the testing. Performance goals will differ depending on the system’s technology and purpose; however, they should always include some of the following:
Software performance testing – Server response time
This refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP ‘GET’ request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the system.
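A minimal illustration of measuring server response time from the client side is sketched below, using only the Python standard library; the URL is a placeholder, and this times single requests rather than generating load.

```python
# Sketch: measuring server response time for an HTTP GET from the client side.
import time
import urllib.request

def time_get(url, samples=5):
    """Return the elapsed seconds for each of several GET requests."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()                 # include transfer time in the measurement
        timings.append(time.perf_counter() - start)
    return timings

if __name__ == "__main__":
    results = time_get("http://example.com/")   # placeholder endpoint
    print([f"{t * 1000:.0f} ms" for t in results])
```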
Software performance testing – Render response time
Render response time is difficult for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity ‘on the wire’. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, a feature not offered by many load testing tools.
Software performance testing – Performance specifications
It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details.
Software performance testing – Performance specifications
Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).
Software performance testing – Performance specifications
Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally
Software performance testing – Performance specifications
It is always helpful to have a statement of the likely peak numbers of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.
Software performance testing – Questions to ask
Performance specifications should ask the following questions, at a minimum:
Software performance testing – Questions to ask
In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?
Software performance testing – Questions to ask
For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?
Software performance testing – Questions to ask
What does the target system (hardware) look like (specify all server and network appliance configurations)?
Software performance testing – Questions to ask
What is the Application Workload Mix of each system component? (for example: 20% log-in, 40% search, 30% item select, 10% checkout).
Software performance testing – Questions to ask
What is the System Workload Mix? [Multiple workloads may be simulated in a single performance test] (for example: 30% Workload A, 20% Workload B, 50% Workload C). A sketch of simulating such a mix follows this list.
Software performance testing – Questions to ask
What are the time requirements for any/all back-end batch processes (specify peak vs. nominal)?
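The workload-mix questions above can be answered operationally by weighting the transactions a virtual user performs. The sketch below simulates the example application workload mix (20% log-in, 40% search, 30% item select, 10% checkout); the transaction names and the random-draw approach are illustrative, not tied to any particular load testing tool.

```python
# Sketch: selecting virtual-user transactions according to a workload mix.
import random
from collections import Counter

WORKLOAD_MIX = {"login": 0.20, "search": 0.40, "item_select": 0.30, "checkout": 0.10}

def next_transaction(mix=WORKLOAD_MIX):
    """Draw one transaction type with probability proportional to its mix weight."""
    names = list(mix)
    return random.choices(names, weights=[mix[n] for n in names], k=1)[0]

# Simulate 10,000 virtual-user iterations and confirm the observed mix.
observed = Counter(next_transaction() for _ in range(10_000))
print({name: f"{count / 10_000:.1%}" for name, count in observed.items()})
```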
Software performance testing – Pre-requisites for Performance Testing
A stable build of the system, which must resemble the production environment as closely as possible.
Software performance testing – Pre-requisites for Performance Testing
The performance testing environment should be isolated from other environments, such as user acceptance testing (UAT) or development: otherwise the results may not be consistent. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.
Software performance testing – Test conditions
In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that the workloads of production systems have a random nature, and while the test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability – except in the most simple system.
Software performance testing – Test conditions
Due to the complexity and the financial and time requirements around this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as “noise”) in their performance test environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.
Software performance testing – Timing
Performance test environment acquisition and preparation is often a lengthy and time-consuming process.
Software performance testing – Tools
In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or piece of software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times.
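As a minimal example of such diagnostic profiling, the sketch below uses Python's built-in cProfile and pstats modules; the profiled function is a deliberately wasteful stand-in for real application code.

```python
# Minimal example of diagnostic profiling using Python's built-in profiler.
import cProfile
import pstats

def slow_report():
    total = 0
    for _ in range(200_000):
        total += sum(range(50))   # deliberately wasteful inner loop
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_report()
profiler.disable()

# Show the functions that contribute most to cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```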
Software performance testing – Technology
The test result shows how the performance varies with the load, given as the number of users versus response time.
Software performance testing – Technology
Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded –does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?
Software performance testing – Technology
It is therefore much faster and cheaper than performance testing, though it requires thorough understanding of the hardware platforms.
Software performance testing – Tasks to undertake
Tasks to perform such a test would include:
Software performance testing – Tasks to undertake
Decide whether to use internal or external resources to perform the tests, depending on inhouse expertise (or lack thereof)
Software performance testing – Tasks to undertake
Gather or elicit performance requirements (specifications) from users and/or business analysts
Software performance testing – Tasks to undertake
Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones
Software performance testing – Tasks to undertake
Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.)
Software performance testing – Tasks to undertake
Specify test data needed and charter effort (often overlooked, but often the death of a valid performance test)
Software performance testing – Tasks to undertake
Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies
Software performance testing – Tasks to undertake
Develop detailed performance test project plan, including all dependencies and associated time-lines
Software performance testing – Tasks to undertake
Install and configure injectors/controller
Software performance testing – Tasks to undertake
Configure the test environment (ideally identical hardware to the production platform), router configuration, quiet network (we don’t want results upset by other users), deployment of server instrumentation, database test sets developed, etc.
Software performance testing – Tasks to undertake
Execute tests – probably repeatedly (iteratively) in order to see whether any unaccounted for factor might affect the results
Software performance testing – Tasks to undertake
Analyze the results – either pass/fail, or investigation of critical path and recommendation of corrective action
Software performance testing – Performance testing web applications
Activity 1. Identify the Test Environment.
Software performance testing – Performance testing web applications
Activity 2. Identify Performance Acceptance Criteria. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.
Software performance testing – Performance testing web applications
Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.
Software performance testing – Performance testing web applications
Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.
Software performance testing – Performance testing web applications
Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.
Software performance testing – Performance testing web applications
Activity 7. Analyze Results, Tune, and Retest. Analyze, consolidate, and share the results data. Make a tuning change and retest. Improvement or degradation? Each improvement will tend to be smaller than the previous one. When do you stop? When you reach a CPU bottleneck, the choices are then either to improve the code or to add more CPU.
Web testing – Web application performance tool
By doing so, the tool is useful for checking for bottlenecks and performance leakage in the website or web application being tested.
Web testing – Web application performance tool
A WAPT faces various challenges during testing and should be able to conduct tests for:
Web testing – Web application performance tool
Windows application compatibility where required
Web testing – Web application performance tool
A WAPT allows a user to specify how virtual users are involved in the testing environment, i.e. an increasing, constant, or periodic user load. Increasing the user load step by step is called a RAMP, where virtual users are increased from 0 to hundreds. A constant user load maintains the specified user load at all times. A periodic user load increases and decreases the load from time to time. The three profiles are sketched below.
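The three load profiles can be expressed as simple functions of elapsed test time. The sketch below is an illustration only; the durations, user counts, and the sinusoidal shape chosen for the periodic load are assumptions, not properties of any particular tool.

```python
# Sketch: the three virtual-user load profiles described above (ramp, constant,
# periodic), expressed as the target number of users at each minute of a test.
import math

def ramp(minute, peak_users=200, ramp_minutes=20):
    """RAMP: grow linearly from 0 to peak_users over ramp_minutes."""
    return min(peak_users, int(peak_users * minute / ramp_minutes))

def constant(minute, users=100):
    """Constant: hold the same user load for the whole test."""
    return users

def periodic(minute, base_users=50, swing=50, period_minutes=10):
    """Periodic: oscillate between base_users and base_users + swing."""
    return int(base_users + swing * (1 + math.sin(2 * math.pi * minute / period_minutes)) / 2)

for m in range(0, 31, 5):
    print(f"minute {m:2d}: ramp={ramp(m):3d} constant={constant(m):3d} periodic={periodic(m):3d}")
```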