
23 Observations About RAIN That’ll Blow Your Mind


Economies of scale Crew size and other operating costs for ships, trains and airplanes

Operating crew size for ships, airplanes, trains, etc., does not increase in direct proportion to capacity. (Operating crew consists of pilots, co-pilots, navigators, etc. and does not include passenger service personnel.) Many aircraft models were significantly lengthened or “stretched” to increase payload.

Economies of scale Crew size and other operating costs for ships, trains and airplanes

Many manufacturing facilities, especially those making bulk materials like chemicals, refined petroleum products, cement and paper, have labor requirements that are not greatly influenced by changes in plant capacity. This is because labor requirements of automated processes tend to be based on the complexity of the operation rather than production, and many manufacturing facilities have nearly the same basic number of processing steps and pieces of equipment, regardless of production.
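The scale effect described above can be sketched numerically. The function and all of the numbers below are hypothetical, chosen only to illustrate the point that unit labor cost falls when crew size stays nearly fixed as capacity grows:

```python
# Illustrative sketch with hypothetical numbers (not figures from the text):
# if crew or plant-labor requirements are roughly fixed, labor cost per
# unit of capacity falls as capacity grows.
def labor_cost_per_unit(capacity_units, crew_size, wage_per_crew=1.0):
    """Labor cost per unit of capacity for a (nearly) fixed-size crew."""
    return crew_size * wage_per_crew / capacity_units

small_plant = labor_cost_per_unit(capacity_units=100, crew_size=10)
large_plant = labor_cost_per_unit(capacity_units=1000, crew_size=12)

# Ten times the capacity needs only a slightly larger crew, so unit
# labor cost drops by roughly a factor of eight.
print(small_plant, large_plant)  # 0.1 0.012
```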

Intellipedia Training

Several agencies in the Intelligence Community, most notably the CIA and NGA, have developed training programs to provide time to integrate social software tools into analysts’ daily work habits.

Central Intelligence Agency Training

CIA University holds between 200 and 300 courses each year, training both new hires and experienced intelligence officers, as well as CIA support staff.

Central Intelligence Agency Training

For later stage training of student operations officers, there is at least one classified training area at Camp Peary, near Williamsburg, Virginia. Students are selected, and their progress evaluated, in ways derived from the OSS, published as the book Assessment of Men, Selection of Personnel for the Office of Strategic Services. Additional mission training is conducted at Harvey Point, North Carolina.

Central Intelligence Agency Training

The primary training facility for the Office of Communications is Warrenton Training Center, located near Warrenton, Virginia. The facility was established in 1951 and has been used by the CIA since at least 1955.

Burson-Marsteller Training

In the Issues & Crisis Group, employees are trained to communicate the correct information during crises for a variety of different clients and issues.

Burson-Marsteller Training

In an interview in 2003, Harold Burson was quoted as saying that Burson-Marsteller has been “a training ground for the industry”, with more than 35,000 people continuing to participate in the company’s alumni network as of 2010.

Burson-Marsteller Ukraine

In 2012, Burson-Marsteller was hired by Ukraine’s ruling Party of Regions (PoR), “to help the PoR communicate its activities as the governing party of Ukraine, as well as to help it explain better its position on the Yulia Tymoshenko case”, as explained by Robert Mack, a senior manager at Burson-Marsteller.

Burson-Marsteller Ukraine

The tasks of the PR company include setting up press interviews for Ukraine’s deputy prosecutor general, Renat Kuzmin, during his visits to Brussels.

Burson-Marsteller Ukraine

The public relations contract coincides with a government campaign against former prime minister Yulia Tymoshenko, detained in a penal colony, whose case has been at the top of the agenda of EU–Ukraine relations, delaying the signature of a Deep and Comprehensive Free Trade Area (DCFTA) and Association Agreement between the two.

Asia-Pacific Network Information Centre APNIC training

APNIC conducts a number of training courses in a wide variety of locations around the region. These courses are designed to educate participants to proficiently configure, manage and administer their Internet services and infrastructure and to embrace current best practices.

Astronaut Training

Early in the space program, military jet test piloting and engineering training were often cited as prerequisites for selection as an astronaut at NASA, although neither John Glenn nor Scott Carpenter (of the Mercury Seven) had a university degree, in engineering or any other discipline, at the time of their selection.

Astronaut Training

Ellington Field is also where the Shuttle Training Aircraft is maintained and developed, although most flights of the aircraft are done out of Edwards Air Force Base.

Alan Cox Model trains

Alan Cox runs Etched Pixels, a model train company producing N gauge kits.

Clinical governance Education and training

It is no longer considered acceptable for any clinician to abstain from continuing education after qualification – too much of what is learned during training becomes quickly outdated. In NHS Trusts, the continuing professional development (CPD) of clinicians has been the responsibility of the Trust and it has also been the professional duty of clinicians to remain up-to-date.

Design of experiments Human participant experimental design constraints

Balancing the constraints are views from the medical field

General Electric Promotion and training

Thousands of people from every level of the company are trained at the Jack F. Welch Leadership Center.

Inventory Theory of constraints cost accounting

Goldratt developed the Theory of Constraints in part to address the cost-accounting problems in what he calls the “cost world.” He offers a substitute, called throughput accounting, that uses throughput (money for goods sold to customers) in place of output (goods produced that may sell or may boost inventory) and considers labor as a fixed rather than as a variable cost.

Inventory Theory of constraints cost accounting

Finished goods inventories remain balance-sheet assets, but labor-efficiency ratios no longer evaluate managers and workers. Instead of an incentive to reduce labor cost, throughput accounting focuses attention on the relationships between throughput (revenue or income) on one hand and controllable operating expenses and changes in inventory on the other.
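The distinction can be made concrete with a worked example. All of the quantities below are hypothetical, chosen only to show how throughput accounting treats unsold production and labor:

```python
# A worked sketch of throughput accounting with hypothetical numbers.
# Throughput counts only money from goods actually sold; labor is
# treated as a fixed operating expense, not a per-unit variable cost.
units_produced = 120
units_sold = 100
price = 50.0
material_cost_per_unit = 20.0   # truly variable cost
labor_cost_total = 1500.0       # fixed under throughput accounting

throughput = units_sold * (price - material_cost_per_unit)
net_profit = throughput - labor_cost_total

# The 20 unsold units add nothing to throughput: producing for
# inventory does not improve this measure, unlike output-based metrics.
print(throughput, net_profit)  # 3000.0 1500.0
```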

Electronic ticket – Train tickets

Amtrak started offering electronic tickets on all train routes on 30 July 2012. These tickets can be ordered over the internet and printed (as a PDF file), printed at a Quik-Trak kiosk, or picked up at the ticket counter at the station. Electronic tickets can also be stored on a smartphone and shown to the conductor using an app.

Electronic ticket – Train tickets

Several European train operators also offer self-printable tickets. Tickets can often also be delivered by SMS or MMS.

Artificial intelligence – Cybernetics and brain simulation

In the 1940s and 1950s, a number of researchers explored the connection between neurology, information theory, and cybernetics.

Dictionary attack – Pre-computed dictionary attack/Rainbow table attack

A more refined approach involves the use of rainbow tables, which reduce storage requirements at the cost of slightly longer lookup times.

Dictionary attack – Pre-computed dictionary attack/Rainbow table attack

Pre-computed dictionary attacks, or “rainbow table attacks”, can be thwarted by the use of salt, a technique that forces the hash dictionary to be recomputed for each password sought, making precomputation infeasible provided the number of possible salt values is large enough.
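A minimal sketch of the salting technique, using only the Python standard library. The function names are illustrative; real systems should use a vetted password-hashing library, but the principle is the same: a unique random salt per password means one precomputed table cannot be reused across accounts.

```python
# Salted password hashing: each password gets a fresh random salt, so a
# rainbow table would have to be rebuilt per salt value, making
# precomputation infeasible when the salt space is large enough.
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest); generates a random 16-byte salt if none given."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Recompute the hash with the stored salt and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("wrong guess", salt, digest))                   # False
```

Because the salt is random, hashing the same password twice yields different digests, which is exactly what defeats a shared precomputed dictionary.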

Social networking service – Constraints of social networking services in education

In the past, social networking services were viewed as a distraction and offered no educational benefit. Blocking these social networks was a form of protection for students against wasting time, bullying, and invasions of privacy. In an educational setting, Facebook, for example, is seen by many instructors and educators as a frivolous, time-wasting distraction from schoolwork, and it is not uncommon for it to be banned in junior high or high school computer labs.

Social networking service – Constraints of social networking services in education

Cyberbullying has become an issue of concern with social networking services.

Social networking service – Constraints of social networking services in education

Recent research suggests that there has been a shift in blocking the use of social networking services.

Small-world network – Small-world neural networks in the brain

Both anatomical connections in the brain and the synchronization networks of cortical neurons exhibit small-world topology.
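Small-world topology is usually illustrated with the Watts–Strogatz toy model rather than any specific brain data set: start from a ring lattice (high clustering) and rewire a small fraction of edges at random, creating long-range shortcuts that sharply reduce average path length. A pure-Python sketch of that construction, with illustrative parameters:

```python
# Watts-Strogatz-style small-world construction (toy model, not brain data):
# a ring lattice of n nodes, each tied to its k nearest neighbors, with
# each edge rewired to a random endpoint with probability p.
import random

def watts_strogatz(n, k, p, seed=0):
    rng = random.Random(seed)
    edges = set()
    for i in range(n):
        for j in range(1, k // 2 + 1):
            edges.add(frozenset((i, (i + j) % n)))
    result = set(edges)
    for u, v in sorted(tuple(sorted(e)) for e in edges):
        if rng.random() < p:
            w = rng.randrange(n)
            # avoid self-loops and duplicate edges
            while w == u or frozenset((u, w)) in result:
                w = rng.randrange(n)
            result.discard(frozenset((u, v)))
            result.add(frozenset((u, w)))
    return result

g = watts_strogatz(n=20, k=4, p=0.1)
print(len(g))  # rewiring preserves the edge count: n*k/2 = 40
```

Each rewire removes one edge and adds one new one, so the network keeps its n·k/2 links while gaining the shortcuts that give it small-world character.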

Small-world network – Small-world neural networks in the brain

A small-world network of neurons can exhibit short-term memory. A computer model developed by Solla et al. had two stable states, a property (called bistability) thought to be important in memory storage. An activating pulse generated self-sustaining loops of communication activity among the neurons. A second pulse ended this activity. The pulses switched the system between stable states: flow (recording a “memory”), and stasis (holding it).

Small-world network – Small-world neural networks in the brain

On a more general level, many large-scale neural networks in the brain, such as the visual system and brain stem, exhibit small-world properties.

Multilingualism – Centralization of Language areas in the Brain

Age of acquisition of the second (or higher) language and proficiency of use determine which specific brain regions and pathways activate when using (thinking or speaking in) the language.

Multilingualism – Brain plasticity in multilingualism

Consensus is still muddled; it may be a mixture of both—experiential (acquiring languages during life) and genetic (predisposition to brain plasticity).

Con Kolivas – Brain Fuck Scheduler

On 31 August 2009, Kolivas posted a new scheduler called BFS (Brain Fuck Scheduler). It is designed for desktop use and to be very simple (hence it may not scale well to machines with many CPU cores). Kolivas does not intend to get it merged into the mainline Linux kernel. He has since begun maintaining the -ck patch set again.

Online identity – Relation to real-world social constraints

Ultimately, online identity cannot be completely free from the social constraints that are imposed in the real world.

Online identity – Relation to real-world physical and sensory constraints

Disembodiment affords the opportunity to operate outside the constraints of a socially stigmatized disabled identity.

GitHub – Limitations and constraints

According to the terms of service, if an account’s bandwidth usage significantly exceeds the average of other GitHub customers, the account’s file hosting service may be immediately disabled or throttled until bandwidth consumption is reduced. In addition, while there is no hard limit, the guideline for the maximum size of a repository is one gigabyte. Also, there is a check for files larger than 100 MB in a push; if any such files exist, the push will be rejected.
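GitHub performs the large-file check on the server side; a local sketch of the same idea is easy to write. The function name and the directory-scan approach below are illustrative, not part of any GitHub tooling:

```python
# A local mirror of the server-side check described above: walk a
# directory tree and flag any file over 100 MB that a push would trip.
import os

LIMIT = 100 * 1024 * 1024  # 100 MB

def oversized_files(root):
    """Yield (path, size) for files exceeding the 100 MB limit."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > LIMIT:
                yield path, size

# Usage: list offenders before pushing.
# for path, size in oversized_files("."):
#     print(f"{path}: {size / 1e6:.1f} MB exceeds the limit")
```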

Jonathan Zittrain

Previously, Zittrain was Professor of Internet Governance and Regulation at the Oxford Internet Institute of the University of Oxford and visiting professor at the New York University School of Law and Stanford Law School.

Jonathan Zittrain

Zittrain works in several intersections of the Internet with law and policy including intellectual property, censorship and filtering for content control, and computer security. He founded a project at the Berkman Center for Internet and Society that develops classroom tools. He is a co-founder of Chilling Effects, a collaborative archive created to protect lawful online activity from legal threats that was created by Wendy Seltzer.

Jonathan Zittrain – Family and education

Zittrain is the son of two attorneys, Ruth A. Zittrain and Lester E. Zittrain. His father was the personal attorney of professional football star Joe Greene. In 2004, with Jennifer K. Harrison, Zittrain published The Torts Game: Defending Mean Joe Greene, a book the authors dedicated to their parents. His brother, Jeff, is an established Bay Area musician. His sister, Laurie Zittrain Eisenberg, is a scholar of the Arab–Israeli conflict and teaches at Carnegie Mellon University in Pittsburgh.

Jonathan Zittrain – Family and education

Zittrain, who grew up in the suburb of Churchill outside Pittsburgh, graduated in 1987 from Shady Side Academy, a private school in Pittsburgh, Pennsylvania.

Jonathan Zittrain – Family and education

He was law clerk for Stephen F. Williams of the United States Court of Appeals for the District of Columbia Circuit and served with the U.S. Department of Justice and, in 1991, with the Department of State, as well as at the Senate Select Committee on Intelligence in 1992 and 1994. He was a longtime forum administrator, or sysop, for the online service CompuServe, serving for many years as the chief administrator for its private forum for all of its forum administrators.

Jonathan Zittrain – Internet filtering

The OpenNet Initiative (ONI) monitors Internet censorship by national governments. Between 2001 and 2003 at Harvard’s Berkman Center, Zittrain and Benjamin Edelman studied Internet filtering. In their tests during 2002, when Google had indexed almost 2.5 billion pages, they found sites blocked, from approximately 100 in France and Germany to 2,000 in Saudi Arabia, and 20,000 in the People’s Republic of China. The authors published a statement of issues and a call for data that year.

Jonathan Zittrain – Internet filtering

Today at the ONI, Zittrain is a principal investigator, together with Ronald Deibert of the University of Toronto; John Palfrey, previously the executive director of the Berkman Center (now a professor of law and vice-dean at Harvard Law School); and Rafal Rohozinski of the University of Cambridge.

Jonathan Zittrain – Internet filtering

In 2001, Zittrain cofounded Chilling Effects with his students and former students, including its creator and leader, Wendy Seltzer. It monitors cease-and-desist letters. Google directs its users to Chilling Effects when its search results have been altered at the request of a national government. Since 2002, researchers have been using the clearinghouse to study the use of cease-and-desist letters, primarily looking at DMCA 512 takedown notices, other non-DMCA complaints, and trademark claims.

Jonathan Zittrain – Copyright

In 2003 Zittrain said he was concerned that Congress would hear the same arguments again after the 20-year extension passed, and that the Internet was causing a “cultural reassessment of the meaning of copyright”.

Jonathan Zittrain – Stock markets and spam

Writing with Laura Frieder of Purdue University, Zittrain published Spam Works: Evidence from Stock Touts and Corresponding Market Activity in the Hastings Communications and Entertainment Law Journal in 2008, documenting the manipulation of stock prices via spam e-mail.

Jonathan Zittrain – Recent publications

Zittrain, Jonathan (April 14, 2008). The Future of the Internet and How to Stop It. Yale University Press. ISBN 0-300-12487-2. (online book)

Jonathan Zittrain – Recent publications

Deibert, Ronald J., John G. Palfrey, Rafal Rohozinski, Jonathan Zittrain (Eds.) (February 29, 2008). Access Denied: The Practice and Policy of Global Internet Filtering. MIT Press. ISBN 0-262-54196-3.

Jonathan Zittrain – Recent publications

Frieder, Laura and Zittrain, Jonathan (March 14, 2007). Spam Works: Evidence from Stock Touts and Corresponding Market Activity. Berkman Center Research Publication No. 2006-11. SSRN 920553.

Jonathan Zittrain – Recent publications

Zittrain, Jonathan (2006). “Searches and Seizures in a Networked World”. Harvard Law Review Forum (The Harvard Law Review Association) 83. Retrieved 2013-01-15.

Jonathan Zittrain – Recent publications

Zittrain, Jonathan (Spring 2006). “A History of Online Gatekeeping” (PDF). Harvard Journal of Law and Technology (Harvard Law School) 19 (2): 253. Retrieved 2008-04-20.

Jonathan Zittrain – Recent publications

Zittrain, Jonathan (Winter 2004). “Normative Principles for Evaluating Free and Proprietary Software”. University of Chicago Law Review (The University of Chicago Law School via SSRN) 71 (1). Retrieved 2008-04-20.

Types of business entity – Ukraine

DAT/ДАТ (Державне Акціонерне Товариство, Derzhavne Aktsionerne Tovaristvo): ≈ plc (UK), national.

Types of business entity – Ukraine

TOV/ТОВ (Товариство з Обмеженою Відповідальністю, Tovaristvo z Obmezhenoyu Vidpovidalnistyu): ≈ Ltd. (UK). Minimum capital = 1 minimum wage (UAH 960 as of 29.05.2011).

Types of business entity – Ukraine

PP/ПП (Приватне Підприємство, Privatne Pidpriemstvo): ≈ Ltd. (UK). No minimum capital.

Types of business entity – Ukraine

VAT/ВАТ (Відкрите Акціонерне Товариство, Vidkrite Aktsionerne Tovaristvo) or PAT/ПАТ (Публічне Акціонерне Товариство, Publichne Aktsionerne Tovaristvo) since 29.04.2009: ≈ plc (UK), public. Minimum capital UAH 630,000.

Types of business entity – Ukraine

ZAT/ЗАТ (Закрите Акціонерне Товариство, Zakrite Aktsionerne Tovaristvo) or PrAT/ПрАТ (Приватне Акціонерне Товариство, Privatne Aktsionerne Tovaristvo) since 29.04.2009: ≈ plc (UK), private.

Types of business entity – Ukraine

Company formation is regulated by the Ukrainian Civil Code and Commercial Code, the Law on Commercial Companies, the Law on Stock Companies, and other laws and regulations.

Linux Foundation – Training

The Linux Foundation Training Program features instructors and content straight from the leaders of the Linux developer community.

Linux Foundation – Training

Attendees receive Linux training that is vendor-neutral, technically advanced and created with the actual leaders of the Linux development community themselves. The Linux Foundation Linux training courses, both online and in-person, give attendees the broad, foundational knowledge and networking needed to thrive in their careers.

Nairobi – Trains

The new station has a train that ferries passengers from Syokimau to the city centre, cutting travel time by half.

Nairobi – Trains

After the completion of Syokimau Station, focus will shift to building nine other modern stations, including those at Jogoo Road, Imara Daima and Makadara Estate.

Relational database – Constraints

Since every attribute has an associated domain, there are constraints (domain constraints)

Data integrity – Types of integrity constraints

Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity and domain integrity:

Data integrity – Types of integrity constraints

Entity integrity concerns the concept of a primary key. Entity integrity is an integrity rule which states that every table must have a primary key and that the column or columns chosen to be the primary key should be unique and not null.

Data integrity – Types of integrity constraints

Referential integrity concerns the concept of a foreign key

Data integrity – Types of integrity constraints

Domain integrity specifies that all columns in a relational database must be declared upon a defined domain. The primary unit of data in the relational data model is the data item. Such data items are said to be non-decomposable or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which actual values appearing in the columns of a table are drawn.
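The three constraint types can be demonstrated with SQLite from the Python standard library. The table names and values below are hypothetical; a CHECK clause stands in for domain integrity here, and note that SQLite only enforces foreign keys when the pragma is enabled:

```python
# Entity integrity (PRIMARY KEY), referential integrity (FOREIGN KEY),
# and domain integrity (CHECK) in a minimal sqlite3 session.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
con.execute("""CREATE TABLE dept (
    id   INTEGER PRIMARY KEY,            -- entity integrity
    name TEXT NOT NULL)""")
con.execute("""CREATE TABLE emp (
    id      INTEGER PRIMARY KEY,
    dept_id INTEGER NOT NULL REFERENCES dept(id),  -- referential integrity
    salary  INTEGER CHECK (salary >= 0)            -- domain integrity
)""")
con.execute("INSERT INTO dept VALUES (1, 'Research')")
con.execute("INSERT INTO emp VALUES (1, 1, 50000)")  # satisfies all three

for bad in ("INSERT INTO emp VALUES (2, 99, 50000)",  # no dept 99: referential
            "INSERT INTO emp VALUES (3, 1, -5)"):     # negative salary: domain
    try:
        con.execute(bad)
    except sqlite3.IntegrityError as e:
        print("rejected:", e)
```

Both bad inserts raise IntegrityError, leaving only the valid row — the database, not the application, does the rejecting.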

Data integrity – Types of integrity constraints

If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity, while the database supports the consistency model for data storage and retrieval.

Data integrity – Types of integrity constraints

re-usability (all applications benefit from a single centralized data integrity system)

Data integrity – Types of integrity constraints

As of 2012, since all modern databases support these features (see Comparison of relational database management systems), it has become the de facto responsibility of the database to ensure data integrity.

Neuroinformatics – Society for Neuroscience Brain Information Group

On the foundation of all of these activities, Huda Akil, the 2003 President of the Society for Neuroscience (SfN) established the Brain Information Group (BIG) to evaluate the importance of neuroinformatics to neuroscience and specifically to the SfN. Following the report from BIG, SfN also established a neuroinformatics committee.

Neuroinformatics – Society for Neuroscience Brain Information Group

In 2004, SfN announced the Neuroscience Database Gateway (NDG) as a universal resource for neuroscientists, through which almost all neuroscience databases and tools may be reached.

Neuroinformatics – Mouse brain mapping and simulation

Between 1995 and 2005, Henry Markram mapped the types of neurons and their connections in a neocortical column.

Neuroinformatics – Mouse brain mapping and simulation

The Blue Brain Project’s initial goal, completed in December 2006, was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought), containing 10,000 neurons (and 10^8 synapses). In November 2007, the project reported the end of the first phase, delivering a data-driven process for creating, validating, and researching the neocortical column.

Neuroinformatics – Mouse brain mapping and simulation

An artificial neural network described as being “as big and as complex as half of a mouse brain” was run on an IBM Blue Gene supercomputer by a University of Nevada research team in 2007. A simulated time of one second took ten seconds of computer time. The researchers said they had seen “biologically consistent” nerve impulses flowing through the virtual cortex. However, the simulation lacked the structures seen in real mouse brains, and they intended to improve the accuracy of the neuron model.

Neuroinformatics – The Blue Brain Project

The Blue Brain Project was founded in May 2005, and uses an 8000 processor Blue Gene/L supercomputer developed by IBM. At the time, this was one of the fastest supercomputers in the world. The project involves:

Neuroinformatics – The Blue Brain Project

Databases: 3D reconstructed model neurons, synapses, synaptic pathways, microcircuit statistics, computer model neurons, virtual neurons.

Neuroinformatics – The Blue Brain Project

Visualization: a microcircuit builder and simulation-results visualizer; 2D, 3D and immersive visualization systems are being developed.

Neuroinformatics – The Blue Brain Project

Simulation: a simulation environment for large scale simulations of morphologically complex neurons on 8000 processors of IBM’s Blue Gene supercomputer.

Neuroinformatics – The Blue Brain Project

Simulations and experiments: iterations between large scale simulations of neocortical microcircuits and experiments in order to verify the computational model and explore predictions.

Neuroinformatics – The Blue Brain Project

These models will be deposited in an internet database from which Blue Brain software can extract and connect models together to build brain regions and begin the first whole brain simulations.

Solar sail – Constraints

In Earth orbit, solar pressure and drag pressure are typically equal at an altitude of about 800 km, which means that a sail craft would have to operate above that altitude. Sail craft must operate in orbits where their turn rates are compatible with the orbits, which is generally a concern only for spinning disk configurations.

Solar sail – Constraints

Sail operating temperatures are a function of solar distance, sail angle, reflectivity, and front and back emissivities. A sail can be used only where its temperature is kept within its material limits. Generally, a sail can be used rather close to the sun, around 0.25 AU, or even closer if carefully designed for those conditions.
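The dependence on solar distance, sail angle, reflectivity and emissivities can be sketched with a simple radiative-balance model. This is an idealized flat-sail approximation with hypothetical material values (absorptivity, front/back emissivities), not a real design calculation:

```python
# Idealized sail equilibrium temperature: absorbed flux = absorptivity *
# solar flux * cos(sail angle); the sail re-radiates from front and back
# faces with emissivities ef and eb. All material values are assumed.
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_1AU = 1361.0     # solar flux at 1 AU, W/m^2

def sail_temperature(dist_au, absorptivity=0.1, ef=0.05, eb=0.55, angle_deg=0.0):
    flux = S_1AU / dist_au**2
    absorbed = absorptivity * flux * math.cos(math.radians(angle_deg))
    return (absorbed / (SIGMA * (ef + eb))) ** 0.25

# Temperature scales as 1/sqrt(distance): moving from 1 AU in to
# 0.25 AU doubles the equilibrium temperature in this model.
print(round(sail_temperature(1.0)), round(sail_temperature(0.25)))
```

In this model the quarter-AU temperature is exactly twice the 1 AU value, which is why operation near 0.25 AU demands materials chosen for those conditions.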

Electronic engineering – Education and training

Electronics engineers typically possess an academic degree with a major in electronic engineering. The length of study for such a degree is usually three or four years and the completed degree may be designated as a Bachelor of Engineering, Bachelor of Science, Bachelor of Applied Science, or Bachelor of Technology depending upon the university. Many UK universities also offer Master of Engineering (MEng) degrees at undergraduate level.

Electronic engineering – Education and training

The degree generally includes units covering physics, chemistry, mathematics, project management and specific topics in electrical engineering. Initially such topics cover most, if not all, of the subfields of electronic engineering. Students then choose to specialize in one or more subfields towards the end of the degree.

Electronic engineering – Education and training

Some electronics engineers also choose to pursue a postgraduate degree such as a Master of Science (MSc), Doctor of Philosophy in Engineering (PhD), or an Engineering Doctorate (EngD).

Electronic engineering – Education and training

In most countries, a Bachelor’s degree in engineering represents the first step towards certification, and the degree program itself is certified by a professional body.

Electronic engineering – Education and training

Fundamental to the discipline are the sciences of physics and mathematics, as these help to obtain both a qualitative and a quantitative description of how such systems will work.

Health psychology – Training in Health Psychology

A Health Psychologist in training might be working within applied settings whilst working towards registration and chartered status.

Health psychology – Training in Health Psychology

professional skills (including implementing ethical and legal standards, communication and team work),

Health psychology – Training in Health Psychology

research skills (including designing, conducting and analysing psychological research in numerous areas),

Health psychology – Training in Health Psychology

consultancy skills (including planning and evaluation),

Health psychology – Training in Health Psychology

teaching and training skills (including knowledge of designing, delivering and evaluating large- and small-scale training programmes),

Health psychology – Training in Health Psychology

intervention skills (including delivery and evaluation of behaviour change interventions).

Health psychology – Training in Health Psychology

All qualified Health Psychologists must also engage in and record their continuing professional development (CPD) for psychology each year throughout their career.

Color vision – Color in the human brain

Color processing begins at a very early level in the visual system (even within the retina) through initial color opponent mechanisms

Color vision – Color in the human brain

Visual information is then sent to the brain from retinal ganglion cells via the optic nerve to the optic chiasma: a point where the two optic nerves meet and information from the temporal (contralateral) visual field crosses to the other side of the brain. After the optic chiasma the visual tracts are referred to as the optic tracts, which enter the thalamus to synapse at the lateral geniculate nucleus (LGN).

Color vision – Color in the human brain

The lateral geniculate nucleus (LGN) is divided into laminae (zones), of which there are three types: the M-laminae, consisting primarily of M-cells, the P-laminae, consisting primarily of P-cells, and the koniocellular laminae

Color vision – Color in the human brain

After synapsing at the LGN, the visual tract continues on back to the primary visual cortex (V1) located at the back of the brain within the occipital lobe. Within V1 there is a distinct band (striation). This is also referred to as “striate cortex”, with other cortical visual regions referred to collectively as “extrastriate cortex”. It is at this stage that color processing becomes much more complicated.

Color vision – Color in the human brain

In V1 the simple three-color segregation begins to break down

Color vision – Color in the human brain

This is the first part of the brain in which color is processed in terms of the full range of hues found in color space.

Color vision – Color in the human brain

Anatomical studies have shown that neurons in extended V4 provide input to the inferior temporal lobe.

Blue Brain Project

The Blue Brain Project is an attempt to create a synthetic brain by reverse-engineering the mammalian brain down to the molecular level. The aim of the project, founded in May 2005 by the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, is to study the brain’s architectural and functional principles.

Blue Brain Project

The project is headed by the founding director Henry Markram and co-directed by Felix Schürmann and Sean Hill. Using a Blue Gene supercomputer running Michael Hines’s NEURON software, the simulation does not consist simply of an artificial neural network, but involves a biologically realistic model of neurons. It is hoped that it will eventually shed light on the nature of consciousness.

Blue Brain Project

There are a number of sub-projects, including the Cajal Blue Brain, coordinated by the Supercomputing and Visualization Center of Madrid (CeSViMa), and others run by universities and independent laboratories.

Blue Brain Project – Neocortical column modelling

The initial goal of the project, completed in December 2006, was the simulation of a rat neocortical column, which is considered by some researchers to be the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought).

Blue Brain Project – Progress

By 2005 the first single cellular model was completed. The first artificial cellular neocortical column of 10,000 cells was built by 2008. By July 2011 a cellular mesocircuit of 100 neocortical columns with a million cells in total was built. A cellular rat brain is planned for 2014 with 100 mesocircuits totalling a hundred million cells. Finally a cellular human brain is predicted possible by 2023 equivalent to 1000 rat brains with a total of a hundred billion cells.

Blue Brain Project – Progress

Now that the column is finished, the project is publishing its initial results in the scientific literature, and pursuing two separate goals:

Blue Brain Project – Progress

construction of a simulation on the molecular level, which is desirable since it allows studying the effects of gene expression;

Blue Brain Project – Progress

simplification of the column simulation to allow for parallel simulation of large numbers of connected columns, with the ultimate goal of simulating a whole neocortex (which in humans consists of about 1 million cortical columns).

Blue Brain Project – Funding

The project is funded primarily by the Swiss government and the Future and Emerging Technologies (FET) Flagship grant from the European Commission, and secondarily by grants and some donations from private individuals. The EPFL bought the Blue Gene computer at a reduced cost because at that stage it was still a prototype and IBM was interested in exploring how different applications would perform on the machine. BBP was viewed as a validation of the Blue Gene supercomputer concept.

Blue Brain Project – Documentary

A 10-part documentary is being made by film director Noah Hutton, with each installment detailing a year’s workings of the project at the EPFL. Having started filming in 2009, the documentary is planned for release in 2020, after the years of filming and editing have finished. Regular contributions from Henry Markram and the rest of the team provide insight into the Blue Brain Project, while similar research efforts across the world are touched on.

Blue Brain Project – Cajal Blue Brain (Spain)

The Cajal Blue Brain is coordinated by the Technical University of Madrid and uses the facilities of the Supercomputing and Visualization Center of Madrid and its supercomputer Magerit. The Cajal Institute also participates in this collaboration. The main lines of research currently being pursued at Cajal Blue Brain include neurological experimentation and computer simulations. Nanotechnology, in the form of a newly designed brain microscope, plays an important role in its research plans.

Racial segregation – Bahrain

On 28 April 2007, the lower house of Bahraini Parliament passed a law banning unmarried migrant workers from living in residential areas. To justify the law MP Nasser Fadhala, a close ally of the government said “bachelors also use these houses to make alcohol, run prostitute rings or to rape children and housemaids”.

Racial segregation – Bahrain

Sadiq Rahma, technical committee head and a member of Al Wefaq, said: “The rules we are drawing up are designed to protect the rights of both the families and the Asian bachelors (...) these labourers often have habits which are difficult for families living nearby to tolerate (...) they come out of their homes half dressed, brew alcohol illegally in their homes, use prostitutes and make the neighbourhood dirty (...) these are poor people who often live in groups of 50 or more, crammed into one house or apartment.”

Racial segregation – Bahrain

Nabeel Rajab, then BCHR vice president, said: “It is appalling that Bahrain is willing to rest on the benefits of these people’s hard work, and often their suffering, but that they refuse to live with them in equality and dignity.”

Declarative programming – Constraint programming

In constraint programming, relations between variables are stated in the form of constraints, specifying the properties of a solution to be found. The set of constraints is then solved by giving a value to each variable so that the solution is consistent with the maximum number of constraints.

Declarative programming – Constraint programming

Constraint programming is often used as a complement to other paradigms: functional, logical or even imperative programming.
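
The paradigm can be illustrated with a tiny finite-domain solver. The solver below is a brute-force sketch written for this text (the variable names, domains, and constraints are illustrative, not the API of any real constraint-programming library): the caller declares what a solution must satisfy, and finding it is left entirely to the solver.

```python
from itertools import product

def solve(domains, constraints):
    """Brute-force finite-domain constraint solver.

    domains: dict mapping variable name -> iterable of candidate values.
    constraints: list of predicates over the assignment dict.
    Returns the first assignment satisfying all constraints, or None.
    """
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            return assignment
    return None

# Declare *what* a solution looks like, not *how* to find it:
solution = solve(
    {"x": range(10), "y": range(10)},
    [lambda a: a["x"] + a["y"] == 10,   # sum constraint
     lambda a: a["x"] < a["y"]],        # ordering constraint
)
print(solution)  # first satisfying assignment: {'x': 1, 'y': 9}
```

Real constraint-programming systems replace the brute-force enumeration with constraint propagation and intelligent backtracking, but the declarative shape of the problem statement is the same.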

Australopithecus afarensis – Craniodental features and brain size

Compared to the modern and extinct great apes, A. afarensis has reduced canines and molars, although they are still relatively larger than in modern humans. A. afarensis also has a relatively small brain size (~380–430 cm³) and a prognathic face (i.e. a face with forward-projecting jaws).

Australopithecus afarensis – Craniodental features and brain size

The image of a bipedal hominid with a small brain and primitive face was quite a revelation to the paleoanthropological world at the time. This was due to the earlier belief that an increase in brain size was the first major hominin adaptive shift.

Australopithecus afarensis – Craniodental features and brain size

Before the discoveries of A. afarensis in the 1970s, it was widely thought that an increase in brain size preceded the shift to bipedal locomotion. This was mainly because the oldest known hominins at the time had relatively large brains (e.g. KNM-ER 1470, Homo rudolfensis, which was found just a few years before Lucy and had a cranial capacity of ~800 cm³).

Legal psychology – Training and education

In fact, some argue that specialized legal training dilutes the psychological empiricism of the researcher.

Legal psychology – Training and education

A growing number of universities offer specialized training in legal psychology as either a standalone PhD program or a joint JD/PhD program. A list of American universities that offer graduate training in legal psychology can be found here on the website of the American Psychology-Law Society.

Industrial and organizational psychology – Training and training evaluation

Many organizations are using training and development as a way to attract and retain their most successful employees.

Industrial and organizational psychology – Training and training evaluation

Formative evaluations can be used to locate problems in training procedures and help I–O psychologists make corrective adjustments while the training is ongoing.

Industrial and organizational psychology – Training and training evaluation

Attitudes can be developed or changed through training programs.

Industrial and organizational psychology – Training and training evaluation

The needs analysis makes it possible to identify the training program’s objectives, which in turn, represents the information for both the trainer and trainee about what is to be learned for the benefit of the organization.

Industrial and organizational psychology – Training and training evaluation

Therefore, with any training program it is key to establish specific training objectives. Schultz & Schultz (2010) state that needs assessment is an analysis of corporate and individual goals undertaken before designing a training program. Needs assessment at the organizational, task, and work levels is conducted using job analysis, critical incidents, performance appraisal, and self-assessment techniques.(p164)

Industrial and organizational psychology – Training and training evaluation

But any training program faces challenges. Among those which I–O psychologists face are:(p185)

Industrial and organizational psychology – Training and training evaluation

To identify the abilities required to perform increasingly complex jobs.

Industrial and organizational psychology – Training and training evaluation

To assist supervisors in the management of an ethnically diverse workforce.

Industrial and organizational psychology – Training and training evaluation

To conduct the necessary research to determine the effectiveness of training programs.

Esoteric programming language – Brainfuck

Brainfuck is designed for extreme minimalism and leads to obfuscated code: programs are written using only 8 distinct command characters. Even a program that simply outputs “Hello World” is difficult to read.
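
As an illustration, here is one widely circulated Brainfuck “Hello World!” program, together with a minimal Python interpreter for the eight commands so the claim can be checked. The interpreter is a sketch written for this text, not part of any standard tool.

```python
def brainfuck(src, input_bytes=b""):
    """Minimal interpreter for Brainfuck's 8 commands: > < + - . , [ ]"""
    # Pre-match brackets into a jump table.
    jumps, stack = {}, []
    for i, c in enumerate(src):
        if c == "[":
            stack.append(i)
        elif c == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    tape, ptr, pc, out, inp = [0] * 30000, 0, 0, [], list(input_bytes)
    while pc < len(src):
        c = src[pc]
        if c == ">":
            ptr += 1
        elif c == "<":
            ptr -= 1
        elif c == "+":
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == "-":
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == ".":
            out.append(chr(tape[ptr]))
        elif c == ",":
            tape[ptr] = inp.pop(0) if inp else 0
        elif c == "[" and tape[ptr] == 0:
            pc = jumps[pc]          # skip loop body
        elif c == "]" and tape[ptr] != 0:
            pc = jumps[pc]          # repeat loop body
        pc += 1
    return "".join(out)

HELLO = ("++++++++++[>+++++++>++++++++++>+++>+<<<<-]"
         ">++.>+.+++++++..+++.>++.<<+++++++++++++++"
         ".>.+++.------.--------.>+.>.")
print(brainfuck(HELLO))  # Hello World!
```

The single loop sets four cells to roughly 70, 100, 30 and 10, which are then nudged to the required ASCII codes; that setup trick is why so few + signs follow it.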

Neurotechnology – How these help study the brain

Combinations of these methods can provide researchers with knowledge of both physiological and metabolic behaviors of loci in the brain and can be used to explain activation and deactivation of parts of the brain under specific conditions.

Neurotechnology – How these help study the brain

Some techniques combine TMS and another scanning method such as EEG to get additional information about brain activity such as cortical response.

Neurotechnology – How these help study the brain

While there are other types of research that utilize EEG, EEG has been fundamental in understanding the resting brain during sleep.

Neurotechnology – How these help study the brain

While deep brain stimulation is a method to study how the brain functions per se, it provides both surgeons and neurologists important information about how the brain works when certain small regions of the basal ganglia (nuclei) are stimulated by electrical currents.

Telecommunications in Bahrain – History

When Batelco was first founded in 1981, Bahrain already had 45,627 telephone lines in use. By 1982, the number reached 50,000. In 1985, the country’s first fibre optic cable was installed. Batelco enjoyed being a monopoly in the telecommunications sector for the next two decades. By 1999, the company had around 100,000 mobile contracts.

Telecommunications in Bahrain – History

In 2002, under pressure from international bodies, Bahrain implemented its telecommunications law which included the establishment of an independent Telecommunications Regulatory Authority (TRA). In 2003, Batelco’s monopoly over the sector broke when the TRA awarded a license to MTC Vodafone, which later re-branded itself as Zain. In January 2010, VIVA (a subsidiary of STC) started operations in Bahrain.

Telecommunications in Bahrain – Telephonic services

Telephones – main lines in use: 194,200 (2006)

Telecommunications in Bahrain – Telephonic services

country comparison to the world: 124

Telecommunications in Bahrain – Telephonic services

Telephones – mobile cellular: 1,116,000 (2007)

Telecommunications in Bahrain – Telephonic services

domestic: modern fiber-optic integrated services; digital network with rapidly growing use of mobile cellular telephones

Telecommunications in Bahrain – Telephonic services

international: country code – 973; landing point for the Fiber-Optic Link Around the Globe (FLAG) submarine cable network that provides links to Asia, the Middle East, Europe, and the US; tropospheric scatter to Qatar and the UAE; microwave radio relay to Saudi Arabia; satellite earth station – 1 (2007)

Telecommunications in Bahrain – Internet service

country comparison to the world: 135

Robotics – Education and training

Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics.

Robotics – Career training

Universities offer bachelors, masters, and doctoral degrees in the field of robotics. Vocational schools offer robotics training aimed at careers in robotics.

Mind control – The Korean War and the idea of Brainwashing

The Oxford English Dictionary records its earliest known English-language usage of brainwashing in an article by Edward Hunter in Miami News published on 7 October 1950. During the Korean War, Hunter, who worked at the time both as a journalist and as a U.S. intelligence agent, wrote a series of books and articles on the theme of Chinese brainwashing.

Mind control – The Korean War and the idea of Brainwashing

The Chinese term 洗脑 (xǐ nǎo, literally “wash brain”) was originally used to describe methodologies of coercive persuasion used under the Maoist government in China, which aimed to transform individuals with a reactionary imperialist mindset into “right-thinking” members of the new Chinese social system.

Mind control – The Korean War and the idea of Brainwashing

Ford and British army Colonel James Carne also claimed that the Chinese subjected them to brainwashing techniques during their war-era imprisonment.

Mind control – The Korean War and the idea of Brainwashing

More recent writers including Mikhail Heller have suggested that Lifton’s model of brainwashing may throw light on the use of mass propaganda in other communist states such as the former Soviet Union.

Mind control – The Korean War and the idea of Brainwashing

In a summary published in 1963, Edgar Schein gave a background history of the precursor origins of the brainwashing phenomenon:

Mind control – The Korean War and the idea of Brainwashing

Thought reform contains elements which are evident in Chinese culture (emphasis on interpersonal sensitivity, learning by rote and self-cultivation); in methods of extracting confessions well known in the Papal Inquisition (13th century) and elaborated through the centuries, especially by the Russian secret police; in methods of organizing corrective prisons, mental hospitals and other institutions for producing value change; in methods used by religious sects, fraternal orders, political elites or primitive societies for converting or initiating new members

Mind control – The Korean War and the idea of Brainwashing

He further asserted that for twenty years, starting in the early 1950s, the CIA and the Defense Department conducted secret research (notably including Project MKULTRA) in an attempt to develop practical brainwashing techniques, and that their attempt failed.

Mind control – The Korean War and the idea of Brainwashing

The U.S. military and government laid charges of “brainwashing” in an effort to undermine detailed confessions made by U.S. prisoners of war.

Mind control – Army report debunks brainwashing of American prisoners of war

In 1956 the U.S. Department of the Army published a report entitled Communist Interrogation, Indoctrination, and Exploitation of Prisoners of War, which called brainwashing a “popular misconception.” The report states that “exhaustive research of several government agencies failed to reveal even one conclusively documented case of ‘brainwashing’ of an American prisoner of war in Korea.”

Mind control – Army report debunks brainwashing of American prisoners of war

While U.S. POWs captured by North Korea were brutalized with starvation, beatings, forced death marches, exposure to extremes of temperature, binding in stress positions, and withholding of medical care, the abuse had no relation to indoctrination or to collecting intelligence information, “in which [North Korea was] not particularly interested.” In contrast, American POWs in the custody of the Chinese Communists did face a concerted interrogation and indoctrination program, but the Chinese did not employ deliberate physical abuse.

Mind control – Army report debunks brainwashing of American prisoners of war

The Chinese elicited information using tricks such as harmless-seeming written questionnaires, followed by interviews. The “most insidious” and effective Chinese technique according to the US Army Report was a convivial display of false friendship:

Mind control – Army report debunks brainwashing of American prisoners of war

“[w]hen an American soldier was captured by the Chinese, he was given a vigorous handshake and a pat on the back

Mind control – Army report debunks brainwashing of American prisoners of war

It was this surprising, disarmingly friendly treatment, that “was successful to some degree,” the report concludes, in undermining hatred of the communists among American soldiers, in persuading some to sign anti-American confessions, and even leading a few to reject repatriation and remain in Communist China.

Inheritance (object-oriented programming) – Design constraints

Singleness: using single inheritance, a subclass can inherit from only one superclass

Inheritance (object-oriented programming) – Design constraints

Static: the inheritance hierarchy of an object is fixed at instantiation, when the object’s type is selected, and does not change with time. For example, the inheritance graph does not allow a Student object to become an Employee object while retaining the state of its Person superclass. (This kind of behavior, however, can be achieved with the decorator pattern.) Some have criticized inheritance, contending that it locks developers into their original design standards.

Inheritance (object-oriented programming) – Design constraints

Visibility: whenever client code has access to an object, it generally has access to all the object’s superclass data

Inheritance (object-oriented programming) – Design constraints

The composite reuse principle is an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows a single class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes.
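
A minimal sketch of the composite reuse principle, using the Student/Employee example from above; the class and method names here are invented for illustration. The Person object keeps its state while its behavior object is swapped at run time, which a fixed inheritance hierarchy cannot do:

```python
class StudentRole:
    """A behavior class kept separate from the primary hierarchy."""
    def describe(self):
        return "studying"

class EmployeeRole:
    def describe(self):
        return "employed"

class Person:
    """Primary class composes a role instead of inheriting one."""
    def __init__(self, name, role):
        self.name = name
        self.role = role              # behavior is pluggable

    def describe(self):
        return f"{self.name} is {self.role.describe()}"

p = Person("Ada", StudentRole())
print(p.describe())                   # Ada is studying
p.role = EmployeeRole()               # swap behavior at run time
print(p.describe())                   # Ada is employed (Person state kept)
```

Under static inheritance the "role" would be baked into the object's type at instantiation; here it is just data, so the same object can take on new behaviors buffet-style.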

Head-mounted display – Training and simulation

A key application for HMDs is training and simulation, which makes it possible to place a trainee virtually in a situation that is too expensive or too dangerous to replicate in real life. Training with HMDs covers a wide range of applications, from driving, welding and spray painting to flight and vehicle simulators, dismounted soldier training and medical procedure training.

Qatar – Bahraini rule (1783–1868)

In 1821, as punishment for piracy, an East India Company vessel bombarded Doha, destroying the town and forcing hundreds of residents to flee. The residents of Doha had no idea why they were being attacked. As a result, Qatari rebel groups began to emerge in order to fight the Al-Khalifas and to seek independence from Bahrain. In 1825, the House of Thani was established with Sheikh Mohammed bin Thani as the first leader.

Qatar – Bahraini rule (1783–1868)

In addition to censuring Bahrain for its breach of agreement, the British Protectorate (per Colonel Lewis Pelly) asked to negotiate with a representative from Qatar.

Qatar – Bahraini rule (1783–1868)

The request carried with it a tacit recognition of Qatar’s status as distinct from Bahrain

Leadership studies – Leadership training courses

African Nutrition Leadership Programme, (South Africa).

Leadership studies – Leadership training courses

Programme de Leadership Africain en Nutrition, (Casablanca, Morocco).

Satellite Internet access – rain

Satellite communications on the Ka band (19/29 GHz) can use special techniques such as large rain margins, adaptive uplink power control and reduced bit rates during precipitation.

Satellite Internet access – rain

Rain margins are the extra communication link requirements needed to account for signal degradations due to moisture and precipitation, and are of acute importance on all systems operating at frequencies over 10 GHz.

Satellite Internet access – rain

In other words, increasing antenna gain through the use of a larger parabolic reflector is one way of increasing the overall channel gain and, consequently, the signal-to-noise (S/N) ratio, which allows for greater signal loss due to rain fade without the S/N ratio dropping below its minimum threshold for successful communication.
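
The effect of reflector size can be sketched with the standard parabolic-antenna gain formula G = η(πD/λ)². The frequency, efficiency and diameters below are illustrative values chosen for this sketch, not figures from the text:

```python
import math

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.65):
    """Gain of a parabolic reflector: G = eta * (pi * D / lambda)^2."""
    wavelength = 3.0e8 / freq_hz            # c / f, in metres
    g = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(g)               # convert ratio to dBi

# Illustrative Ka-band downlink at 19 GHz:
small = dish_gain_dbi(0.75, 19e9)   # consumer-sized dish, ~41.6 dBi
large = dish_gain_dbi(3.7, 19e9)    # commercial dish, ~55.5 dBi
print(round(small, 1), round(large, 1))
```

Going from the 0.75 m dish to the 3.7 m dish adds about 20·log10(3.7/0.75) ≈ 13.9 dB of link margin, which is what lets large earth stations ride out rain fades that would take a small terminal below threshold.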

Satellite Internet access – rain

Modern consumer-grade dish antennas tend to be fairly small, which reduces the rain margin or increases the required satellite downlink power and cost. However, it is often more economical to build a more expensive satellite and smaller, less expensive consumer antennas than to increase the consumer antenna size to reduce the satellite cost.

Satellite Internet access – rain

Large commercial dishes of 3.7 m to 13 m diameter are used to achieve large rain margins and also to reduce the cost per bit by requiring far less power from the satellite.

Satellite Internet access – rain

Modern download DVB-S2 carriers, with RCS feedback, are intended to allow the modulation method to be dynamically altered, in response to rain problems at a receive site. This allows the bit rates to be increased substantially during normal clear sky conditions, thus reducing overall costs per bit.
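
The adaptive behaviour described above can be sketched as a lookup over modulation-and-coding ("modcod") options ordered by required link quality. The threshold and efficiency numbers below are illustrative placeholders, not values taken from the DVB-S2 specification:

```python
# (modcod name, min Es/N0 in dB, relative spectral efficiency).
# Threshold values are illustrative, not the DVB-S2 spec tables.
MODCODS = [
    ("QPSK 1/2",    1.0, 1.0),
    ("QPSK 3/4",    4.0, 1.5),
    ("8PSK 3/4",    7.9, 2.2),
    ("16APSK 3/4", 10.2, 3.0),
]

def pick_modcod(es_n0_db):
    """Choose the highest-rate modcod the current link quality supports."""
    best = MODCODS[0]                 # most robust option is the fallback
    for mc in MODCODS:
        if es_n0_db >= mc[1]:
            best = mc
    return best

print(pick_modcod(12.0)[0])  # clear sky -> 16APSK 3/4
print(pick_modcod(2.5)[0])   # heavy rain fade -> QPSK 1/2
```

During a rain fade the receiver’s reported Es/N0 drops and the hub steps down to a more robust modcod, trading bit rate for link closure; in clear sky it steps back up, which is what raises the average throughput and lowers the cost per bit.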

Educational psychology – Education and training

A person may be considered an educational psychologist after completing a graduate degree in educational psychology or a closely related field. Universities establish educational psychology graduate programs in either psychology departments or, more commonly, faculties of education.

Educational psychology – Education and training

Educational psychologists work in a variety of settings. Some work in university settings where they carry out research on the cognitive and social processes of human development, learning and education. Educational psychologists may also work as consultants in designing and creating educational materials, classroom programs and online courses.

Educational psychology – Education and training

Educational psychologists who work in k–12 school settings (closely related are school psychologists in the US and Canada) are trained at the master’s and doctoral levels. In addition to conducting assessments, school psychologists provide services such as academic and behavioral intervention, counseling, teacher consultation, and crisis intervention. However, school psychologists are generally more individual-oriented towards students.

Educational psychology – Education and training

In the UK, status as a Chartered Educational Psychologist is gained by completing:

Educational psychology – Education and training

an undergraduate degree in psychology permitting registration with the British Psychological Society

Educational psychology – Education and training

two or three years experience working with children, young people and their families.

Educational psychology – Education and training

a three-year professional doctorate in educational psychology.

Educational psychology – Education and training

In New Zealand Registered Educational Psychologist status is gained by completing:

Educational psychology – Education and training

a two-year master’s-level training programme in psychology or educational psychology

Educational psychology – Education and training

Internship places are limited and applications exceed places in most years. From 1999 Massey University delivered the only educational psychology training programme; Victoria University of Wellington began offering its own programme in 2013.

Vactrain

Though the technology is currently being investigated for development of regional networks, advocates have suggested establishing vactrains for transcontinental routes to form a global network.

Vactrain

Vactrain tunnels could permit very rapid intercontinental travel. Vactrains could use gravity to assist their acceleration. If such trains went as fast as predicted, the trip between Beijing and New York would take less than 2 hours, supplanting aircraft as the world’s fastest mode of public transportation.
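
A quick back-of-envelope check of that claim; the great-circle distance used here is an approximation, not a figure from the text:

```python
# Rough Beijing-New York great-circle distance, in km (approximate).
distance_km = 11000
trip_hours = 2
avg_speed_kmh = distance_km / trip_hours
print(avg_speed_kmh)   # 5500.0 km/h required average speed
```

An average of about 5,500 km/h is several times the speed of sound, which is why the claim depends on evacuated tubes rather than open-air travel.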

Vactrain

Travel through evacuated tubes allows supersonic speed without the penalty of sonic boom found with supersonic aircraft. The trains could operate faster than Mach 1 without noise.

Vactrain

However, without major advances in tunnelling and other technology, vactrains would be prohibitively expensive. Alternatives such as elevated concrete tubes with partial vacuums have been proposed to reduce costs.

Vactrain

Researchers at Southwest Jiaotong University in China were, as of 2010, developing a vactrain intended to reach speeds of 1,000 km/h (620 mph); they said the technology could be put into operation within 10 years.

Vactrain – Early history

The modern concept of a vactrain, with evacuated tubes and maglev technology, was explored in the 1910s by American engineer Robert Goddard, who designed detailed prototypes with a university student. His train would have traveled from Boston to New York in 12 minutes, averaging 1,000 mph (1,600 km/h). The train designs were found only after Goddard’s death in 1945 and his wife filed for the patents.

Vactrain – Early history

Russian professor Boris Weinberg proposed a vactrain concept in 1914 in his book Motion Without Friction (Airless Electric Ways); he had already built a model of the proposed transport at Tomsk university in 1909.

Vactrain – 1970s to present

Vactrains made headlines during the 1970s when a leading advocate, Robert M. Salter of RAND, published a series of elaborate engineering articles in 1972 and again in 1978.

Vactrain – 1970s to present

This combination of modified (shallow) gravity train and atmospheric railway propulsion would consume little energy but limit the system to subsonic speeds, hence initial routes of tens or hundreds of miles or kilometers rather than transcontinental distances were proposed.

Vactrain – 1970s to present

Trains were to require no couplers, each car being directly welded, bolted, or otherwise firmly connected to the next, the route calling for no more bending than the flexibility of steel could easily handle. At the end of the line the train would be moved sideways into the end chamber of the return tube. The railway would have both an inner evacuated tube and an outer tunnel. At cruise depth, the space between would have enough water to float the vacuum tube, softening the ride.

Vactrain – 1970s to present

Commuter rail systems were mapped for the San Francisco and New York areas, the commuter version having longer, heavier trains, propelled less by air and more by gravity than the intercity version.

Vactrain – 1970s to present

Salter pointed out how such a system would help reduce the environmental damage being done to the atmosphere by aviation and surface transportation. He called underground Very High Speed Transportation (tube shuttles) his nation’s “logical next step”. The plans were never taken to the next stage.

Vactrain – 1970s to present

At the time these reports were published, national prestige was an issue, as Japan had been operating its showcase bullet train for several years and maglev train research was a hot technology.

Vactrain – 1970s to present

Starting in the late 1970s and early ’80s, the Swissmetro was proposed to leverage the invention of the experimental German Transrapid maglev train, and operate in large underground tunnels reduced to the pressure altitude of 68,000 feet (21,000 m) at which the Concorde SST was certified to fly.

Vactrain – 1970s to present

In the 1980s, Frank P. Davidson, a founder and chairman of the Channel Tunnel project, and Japanese engineer Yoshihiro Kyotani tackled the transoceanic problems with a proposal to float a tube above the ocean floor, anchored with cables. The transit tube would remain at least 1,000 feet (300 m) below the ocean surface to avoid water turbulence.

Vactrain – 1970s to present

James Powell, former co-inventor of superconducting maglev in the 1960s, has since 2001 led investigation of a concept for using a maglev vactrain for space launch, at a theoretical marginal cost two orders of magnitude below that of present rockets. In the StarTram proposal, vehicles would reach speeds of 8,900 mph (14,300 km/h) to 19,600 mph (31,500 km/h) within an acceleration tunnel made lengthy to limit g-forces; boring through the ice sheet in Antarctica was considered because the anticipated expense is lower than tunnelling through rock.
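
The reason the acceleration tunnel must be lengthy to limit g-forces follows from constant-acceleration kinematics, L = v²/2a. The 3 g limit below is a hypothetical choice for illustration; only the two exit speeds come from the text:

```python
def tunnel_length_km(exit_speed_mph, g_limit):
    """Length needed to reach exit_speed at a constant g_limit acceleration."""
    v = exit_speed_mph * 0.44704      # mph -> m/s
    a = g_limit * 9.81                # allowed acceleration, m/s^2
    return v * v / (2 * a) / 1000     # L = v^2 / (2a), converted to km

# Tunnel lengths for the two quoted speeds, at a hypothetical 3 g limit:
print(round(tunnel_length_km(8900, 3)))    # low end: hundreds of km
print(round(tunnel_length_km(19600, 3)))   # high end: over a thousand km
```

Even at 3 g, uncomfortable for untrained passengers, the low-end speed needs a tunnel on the order of 270 km, which is why such proposals accept very long acceleration sections.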

Vactrain – Popular culture

The Space: 1999 TV series featured a lunar vactrain.

Vactrain – Popular culture

A fictional train that matched a vactrain in description was mentioned in the 1982 song “I.G.Y. (What a Beautiful World)”, by the American singer and songwriter Donald Fagen. The song includes the lyrics: “On that train, all graphite and glitter / Undersea by rail / Ninety minutes from New York to Paris”.

Direct metal laser sintering – Constraints

The aspects of size, feature detail and surface finish, as well as print-through error in the Z axis, may be factors that should be considered prior to use of the technology. However, by planning the build in the machine so that most features are built in the X and Y axes as the material is laid down, feature tolerances can be managed well. Surfaces usually have to be polished to achieve mirror or extremely smooth finishes.

Direct metal laser sintering – Constraints

For production tooling, the material density of a finished part or insert should be addressed prior to use. For example, in injection molding inserts, any surface imperfections will cause imperfections in the plastic part, and the inserts will have to mate properly with the base of the mold, in temperature behaviour and surface finish, to prevent problems.

Direct metal laser sintering – Constraints

In this process, removal of the metallic support structure and post-processing of the generated part are time-consuming and require the use of EDM and/or grinding machines with the same level of accuracy as that provided by the RP machine.

Direct metal laser sintering – Constraints

When using rapid prototyping machines, .stl files, which contain nothing but raw binary mesh data (generated from SolidWorks, CATIA, or other major CAD programs), need further conversion to .cli and .sli files (the format required for non-stereolithography machines). Software converts the .stl file to .sli files; as with the rest of the process, there can be costs associated with this step.

Indium tin oxide – Constraints and trade-offs

The main concern about ITO is its cost.

Artificial brain

Artificial brain (or artificial mind) is a term commonly used in the media to describe research that aims to develop software and hardware with cognitive abilities similar to those of the animal or human brain. Research investigating “artificial brains” plays three important roles in science:

An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience.

A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, in theory, to create a machine that has all the capabilities of a human being.

A serious long-term project to create machines with strong AI, capable of general intelligent action (or artificial general intelligence), i.e. as intelligent as a human being.

An example of the first objective is the project reported by Aston University in Birmingham, England, where researchers are using biological cells to create “neurospheres” (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer’s disease, motor neurone disease and Parkinson’s disease.

The second objective is a reply to arguments such as John Searle’s Chinese room argument, Hubert Dreyfus’s critique of AI and Roger Penrose’s argument in The Emperor’s New Mind. These critics argued that there are aspects of human consciousness or expertise that cannot be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper “Computing Machinery and Intelligence”.

The third objective is generally called artificial general intelligence by researchers, though Kurzweil prefers the more memorable term strong AI. In his book The Singularity Is Near he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on grounds of computer power continuing an exponential growth trend) that this could be done by 2025. Henry Markram, director of the Blue Brain project (which is attempting brain emulation), made a similar claim (2020) at the Oxford TED conference in 2009.

Artificial brain – Approaches to brain simulation

Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non-linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm, which has strong similarities to the quantum-mechanical wave equation. EvBrain is a form of evolutionary software that can evolve “brain-like” neural networks, such as the network immediately behind the retina.

In November 2008, IBM received a $4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons “in the computer” by placing thirty million synapses in their proper three-dimensional positions. In March 2008, the Blue Brain project was progressing faster than expected: “Consciousness is just a massive amount of information being exchanged by trillions of brain cells.” Some proponents of strong AI speculate that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050.

There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic. In particular, brains (including the human brain) and cognition are not currently well understood, and the scale of computation required is unknown. In addition there seem to be power constraints: the brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, some 50,000 times more (note: the Landauer limit is 3.5×10^20 op/sec/watt at room temperature).

Artificial brain – Artificial brain thought experiment

Some critics of brain simulation believe that it is simpler to create general intelligent action directly without imitating nature. Some commentators have used the analogy that early attempts to construct flying machines modeled them after birds, but that modern aircraft do not look like birds. A computational argument has also been made: if we had a formal definition of general AI, the corresponding program could be found by enumerating all possible programs and then testing each of them to see whether it matches the definition. No appropriate definition currently exists.

Artificial brain – Approaches to brain simulation

Figure: estimates of how much processing power is needed to emulate a human brain at various levels of detail (from Ray Kurzweil, and from Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500 mapped by year.

Artificial brain – Approaches to brain simulation

Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) nonlinear phase coherence/decoherence principles. An analogy has been made to quantum processes through the core synaptic algorithm, which has strong similarities to the QM wave equation.

Artificial brain – Approaches to brain simulation

EvBrain is a form of evolutionary software that can evolve “brainlike” neural networks, such as the network immediately behind the retina.

Artificial brain – Approaches to brain simulation

In November 2008, IBM received a $4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons “in the computer” by placing thirty million synapses in their proper three-dimensional positions.

Artificial brain – Approaches to brain simulation

In March 2008, the Blue Brain project was progressing faster than expected: “Consciousness is just a massive amount of information being exchanged by trillions of brain cells.” Some proponents of strong AI speculate that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050.

Artificial brain – Approaches to brain simulation

The brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, some four to five orders of magnitude more (note: the Landauer limit is 3.5×10^20 operations per second per watt at room temperature).

Generic programming – Basic/Unconstrained genericity

The formal generic parameters are placeholders for arbitrary class names which will be supplied when a declaration of the generic class is made, as shown in the two generic derivations below, where ACCOUNT and DEPOSIT are other class names. ACCOUNT and DEPOSIT are considered actual generic parameters as they provide real class names to substitute for G in actual use.

Generic programming – Basic/Unconstrained genericity

Within the Eiffel type system, although class LIST [G] is considered a class, it is not considered a type. However, a generic derivation of LIST [G] such as LIST [ACCOUNT] is considered a type.
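Eiffel aside, the same idea can be loosely sketched with Python's typing module; `List_`, `Account`, and `Deposit` below are illustrative stand-ins for the Eiffel classes, not part of any real API:

```python
from typing import Generic, TypeVar

G = TypeVar("G")  # formal generic parameter: a placeholder class name

class List_(Generic[G]):
    """A minimal generic list; G stands for an arbitrary class."""
    def __init__(self) -> None:
        self.items: list[G] = []
    def extend_(self, item: G) -> None:
        self.items.append(item)

class Account: ...
class Deposit: ...

# Generic derivations: Account and Deposit act as actual generic
# parameters, substituting real class names for G.
accounts: List_[Account] = List_()
deposits: List_[Deposit] = List_()
accounts.extend_(Account())
print(len(accounts.items))  # → 1
```

As in Eiffel, the derivations `List_[Account]` and `List_[Deposit]` are the usable types; the parameterized class on its own is just a template.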

Generic programming – Constrained genericity

For the list class shown above, an actual generic parameter substituting for G can be any other available class. To constrain the set of classes from which valid actual generic parameters can be chosen, a generic constraint can be specified. In the declaration of class SORTED_LIST below, the generic constraint dictates that any valid actual generic parameter will be a class which inherits from class COMPARABLE. The generic constraint ensures that elements of a SORTED_LIST can in fact be sorted.
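A constrained generic parameter can be sketched in Python with a bound TypeVar; `Comparable`, `Rating`, and `SortedList` are illustrative stand-ins for the Eiffel classes, with the bound playing the role of Eiffel's COMPARABLE constraint:

```python
from typing import Generic, TypeVar

class Comparable:
    """Base class for anything that supports ordering."""
    def __lt__(self, other) -> bool:
        raise NotImplementedError

# The constraint: actual generic parameters must inherit from Comparable.
C = TypeVar("C", bound=Comparable)

class SortedList(Generic[C]):
    def __init__(self) -> None:
        self.items: list[C] = []
    def insert(self, item: C) -> None:
        # The constraint guarantees elements support "<", so sorting is valid.
        self.items.append(item)
        self.items.sort()

class Rating(Comparable):
    def __init__(self, value: int) -> None:
        self.value = value
    def __lt__(self, other: "Rating") -> bool:
        return self.value < other.value

s: SortedList[Rating] = SortedList()
for v in (3, 1, 2):
    s.insert(Rating(v))
print([r.value for r in s.items])  # → [1, 2, 3]
```

A class without `__lt__` would fail the `bound=Comparable` constraint under a type checker, just as a non-COMPARABLE class is rejected as an actual generic parameter in Eiffel.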

Tim O’Reilly – Global brain

O’Reilly believes that the Internet will develop into a global brain, an intelligent network of people and machines that will function as a nervous system for the planet Earth. This phenomenon will occur because humans will use technologies such as Social Media or the Internet of things more frequently and efficiently. O’Reilly has recently presented this topic in a number of lectures.

Sandstone – Framework grains

Framework grains are sand-sized (1/16 to 2 mm diameter) detrital fragments that make up the bulk of a sandstone. These grains can be classified into several different categories based on their mineral composition:

Sandstone – Framework grains

Quartz framework grains are the dominant mineral in most sedimentary rocks; this is because they have exceptional physical properties, such as hardness and chemical stability. These properties allow quartz grains to survive multiple recycling events, while also allowing the grains to display some degree of rounding. Quartz grains derive from plutonic rocks, which are felsic in origin, and also from older sandstones that have been recycled.

Sandstone – Framework grains

Feldspathic framework grains are commonly the second most abundant component of sandstones. Feldspar can be divided into two smaller subdivisions: alkali feldspars and plagioclase feldspars. The different types of feldspar can be distinguished under a petrographic microscope. Below is a description of the different types of feldspar.

Sandstone – Framework grains

Alkali feldspar is a group of minerals in which the chemical composition can range from KAlSi3O8 to NaAlSi3O8; this represents a complete solid solution.

Sandstone – Framework grains

Plagioclase feldspar is a complex group of solid solution minerals that range in composition from NaAlSi3O8 to CaAl2Si2O8.

Sandstone – Framework grains

Lithic framework grains are pieces of ancient source rock that have yet to weather away to individual mineral grains; they are called lithic fragments or clasts. Lithic fragments can be any fine-grained or coarse-grained igneous, metamorphic, or sedimentary rock, although the most common lithic fragments found in sedimentary rocks are clasts of volcanic rocks.

Sandstone – Framework grains

Many of these accessory grains are denser than the silicates that make up the bulk of the rock.

Parallel computing – Fine-grained, coarse-grained, and embarrassing parallelism

Applications are often classified according to how often their subtasks need to synchronize or communicate with each other. An application exhibits fine-grained parallelism if its subtasks must communicate many times per second; it exhibits coarse-grained parallelism if they do not communicate many times per second, and it is embarrassingly parallel if they rarely or never have to communicate. Embarrassingly parallel applications are considered the easiest to parallelize.
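An embarrassingly parallel workload can be sketched with Python's multiprocessing module: each subtask runs in isolation, so the work distributes across processes with no synchronization or communication (`simulate` is a stand-in for an expensive computation):

```python
from multiprocessing import Pool

def simulate(seed: int) -> int:
    """An independent subtask: needs no data from other subtasks."""
    return seed * seed  # stand-in for an expensive computation

if __name__ == "__main__":
    # Embarrassingly parallel: inputs are processed independently,
    # so Pool.map can scatter them across worker processes freely.
    with Pool(processes=4) as pool:
        results = pool.map(simulate, range(8))
    print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

A fine-grained problem, by contrast, would force the workers to exchange intermediate results many times per second, and the communication cost could easily dominate the computation.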

Brain-reading

Brain-reading uses the responses of multiple voxels in the brain, evoked by a stimulus and detected by fMRI, to decode the original stimulus. Brain-reading studies differ in the type of decoding employed (e.g. classification, identification, and reconstruction), the target (e.g. decoding visual patterns, auditory patterns, or cognitive states), and the decoding algorithms used (linear classification, nonlinear classification, direct reconstruction, Bayesian reconstruction, etc.).

Brain-reading – Classification

In classification, a pattern of activity across multiple voxels is used to determine the particular class from which the stimulus was drawn. Many studies have classified visual stimuli, but this approach has also been used to classify cognitive states.
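As a toy illustration of multivoxel classification (all data invented; real studies train classifiers such as linear SVMs rather than this simplistic rule), a new activity pattern can be assigned to the stimulus class whose mean training pattern it is closest to:

```python
# Toy sketch of multivoxel pattern classification (data invented): assign a
# new voxel-activity pattern to the class whose mean training pattern
# (centroid) is nearest in Euclidean distance.
from math import dist

def classify(pattern, centroids):
    """Return the class label whose centroid is nearest to `pattern`."""
    return min(centroids, key=lambda label: dist(pattern, centroids[label]))

# mean voxel responses per stimulus class (e.g. "face" vs "house"), invented
centroids = {"face": [0.9, 0.1, 0.4], "house": [0.2, 0.8, 0.5]}
print(classify([0.8, 0.2, 0.45], centroids))  # face
```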

Brain-reading – Reconstruction

In reconstruction brain reading the aim is to create a literal picture of the image that was presented. Early studies used voxels from early visual cortex areas (V1, V2, and V3) to reconstruct geometric stimuli made up of flickering checkerboard patterns.

Brain-reading – Natural images

This brain reading approach uses three components: a structural encoding model that characterizes responses in early visual areas; a semantic encoding model that characterizes responses in anterior visual areas; and a Bayesian prior that describes the distribution of structural and semantic scene statistics.

Brain-reading – Natural images

Experimentally, the procedure is for subjects to view 1750 black and white natural images that are correlated with voxel activation in their brains. Subjects then view another 120 novel target images, and information from the earlier scans is used to reconstruct them. Natural images used include pictures of a seaside cafe and harbor, performers on a stage, and dense foliage.

Brain-reading – Other types

It is possible to track which of two forms of rivalrous binocular illusions a person was subjectively experiencing from fMRI signals. The category of event which a person freely recalls can be identified from fMRI before they say what they remembered. Statistical analysis of EEG brainwaves has been claimed to allow the recognition of phonemes, and at a 60% to 75% level color and visual shape words. It has also been shown that brain-reading can be achieved in a complex virtual environment.

Brain-reading – Accuracy

Brain-reading accuracy is increasing steadily as the quality of the data and the complexity of the decoding algorithms improve. In one recent experiment it was possible to identify which single image was being seen from a set of 120. In another it was possible to correctly identify which of two categories the stimulus came from 90% of the time, and the specific semantic category (out of 23) of the target image 40% of the time.

Brain-reading – Limitations

“In practice, exact reconstructions are impossible to achieve by any reconstruction algorithm on the basis of brain activity signals acquired by fMRI

Representational state transfer – Constraints

The REST architectural style describes the following six constraints applied to the architecture, while leaving the design of the individual components free to vary:

Representational state transfer – Constraints

A uniform interface separates clients from servers. This separation of concerns means that, for example, clients are not concerned with data storage, which remains internal to each server, so that the portability of client code is improved. Servers are not concerned with the user interface or user state, so that servers can be simpler and more scalable. Servers and clients may also be replaced and developed independently, as long as the interface between them is not altered.

Representational state transfer – Constraints

The client–server communication is further constrained by no client context being stored on the server between requests. Each request from any client contains all of the information necessary to service the request, and session state is held in the client. Note that the session state can be transferred by the server to another service, such as a database, to maintain a persistent state for a period of time and to allow authentication.
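A minimal sketch of the statelessness constraint (the handler and field names are hypothetical): the server decides each request purely from the request's own contents, including credentials, and holds no per-client session between calls.

```python
# Sketch of a stateless request handler (names invented): every request
# carries everything the server needs, so no session lives on the server.
def handle_request(request: dict, valid_tokens: set) -> dict:
    """A stateless handler: it consults only the request itself."""
    if request.get("auth_token") not in valid_tokens:
        return {"status": 401, "body": "unauthorized"}
    return {"status": 200, "body": f"resource {request['path']}"}

tokens = {"abc123"}
print(handle_request({"path": "/orders/7", "auth_token": "abc123"}, tokens))
print(handle_request({"path": "/orders/7"}, tokens))  # missing token -> 401
```

Because nothing is remembered between calls, any identical server replica could answer the second request, which is what makes the constraint a scalability property.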

Representational state transfer – Constraints

As on the World Wide Web, clients can cache responses. Responses must therefore, implicitly or explicitly, define themselves as cacheable or not, to prevent clients from reusing stale or inappropriate data in response to further requests. Well-managed caching partially or completely eliminates some client–server interactions, further improving scalability and performance.
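The cacheability constraint can be sketched as a client-side cache that reuses a response until its declared freshness lifetime expires, in the spirit of HTTP's Cache-Control max-age (the class and helper names below are invented):

```python
# Sketch of client-side response caching (hypothetical names): the server
# marks each response with a freshness lifetime, and the client reuses the
# cached copy until it expires, eliminating repeat client-server interactions.
import time

class CachingClient:
    def __init__(self, fetch):
        self._fetch = fetch          # function doing the real request
        self._cache = {}             # url -> (body, max_age, stored_at)

    def get(self, url, now=None):
        now = time.time() if now is None else now
        if url in self._cache:
            body, max_age, stored_at = self._cache[url]
            if now - stored_at < max_age:
                return body          # fresh: no client-server interaction
        body, max_age = self._fetch(url)
        self._cache[url] = (body, max_age, now)
        return body

calls = []
def fetch(url):
    calls.append(url)                # record each real round trip
    return f"data for {url}", 60     # body plus a Cache-Control-style max-age

client = CachingClient(fetch)
client.get("/a", now=0.0)
client.get("/a", now=30.0)           # within max-age: served from cache
print(len(calls))  # 1
```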

Representational state transfer – Constraints

A client cannot ordinarily tell whether it is connected directly to the end server, or to an intermediary along the way. Intermediary servers may improve system scalability by enabling load-balancing and by providing shared caches. They may also enforce security policies.

Representational state transfer – Constraints

Servers can temporarily extend or customize the functionality of a client by the transfer of executable code. Examples of this may include compiled components such as Java applets and client-side scripts such as JavaScript. “Code on demand” is the only optional constraint of the REST architecture.

Representational state transfer – Constraints

The uniform interface between clients and servers, discussed below, simplifies and decouples the architecture, which enables each part to evolve independently. The four guiding principles of this interface are detailed below.

Representational state transfer – Constraints

One can characterise applications conforming to the REST constraints described in this section as “RESTful”. If a service violates any of the required constraints, it cannot be considered RESTful.

Representational state transfer – Constraints

Complying with these constraints, and thus conforming to the REST architectural-style, enables any kind of distributed hypermedia system to have desirable emergent properties, such as performance, scalability, simplicity, modifiability, visibility, portability, and reliability.

Flight attendant – Training

One of the most elaborate training facilities was Breech Academy which Trans World Airlines (TWA) opened in 1969 in Overland Park, Kansas

Flight attendant – Training

Safety training includes, but is not limited to: emergency passenger evacuation management, use of evacuation slides/life rafts, in-flight firefighting, survival in the jungle, sea, desert, ice, first aid, CPR, defibrillation, ditching/emergency landing procedures, decompression emergencies, Crew Resource Management and security.

Flight attendant – Training

Either or both of these may be earned depending upon the type of aircraft (propeller or turbofan) on which the holder has trained.

Flight attendant – Training

In some countries, such as France, a degree is required, together with the Certificat de Formation à la Sécurité (safety training certificate).

Digital Light Processing – The color wheel “rainbow effect”

Some people perceive these rainbow artifacts frequently, while others may never see them at all.

Digital Light Processing – The color wheel “rainbow effect”

This effect is caused by the way the eye follows a moving object on the projection

Digital Light Processing – The color wheel “rainbow effect”

“Three-chip DLP projectors have no color wheels, and thus do not manifest this [rainbow] artifact.”

Educational software – Software in corporate training and tertiary education

Earlier educational software for the important corporate and tertiary education markets was designed to run on a single desktop computer (or an equivalent user device)

Educational software – Software in corporate training and tertiary education

Virtual learning environment, LMS (learning management system)

Audi A4 – Powertrain

The B8 powertrain options comprise the following engines, transmissions and drivelines (all United Kingdom specification unless stated otherwise; South Africa, Australia and New Zealand specifications differ).

Audi A4 – Powertrain

Model | Engine code | Years | Displacement / type | Power@rpm | Torque@rpm
S4 quattro/3.0 TFSI | CAKA | 2009- | 2,995 cc (183 cu in) 24v V6 supercharged | 333 PS (245 kW; 328 hp) @5500-7000 | 440 N·m (325 lb·ft) @2900-5300
2.0 TDI e | CAGB | 2009- | 1,968 cc (120 cu in) 16v I4 turbo | 136 PS (100 kW; 134 hp) @4200 | 320 N·m (236 lb·ft) @1750-2500
2.0 TDI, 2.0 TDI quattro | CAGA | 2007- | 1,968 cc (120 cu in) 16v I4 variable geometry turbo | 143 PS (105 kW; 141 hp) @4200 | 320 N·m (236 lb·ft) @1750-2500
2.0 TDI, 2.0 TDI quattro | CAHA | 2008- | 1,968 cc (120 cu in) 16v I4 variable geometry turbo | 170 PS (125 kW; 168 hp) @4200 | 350 N·m (258 lb·ft) @1750-2500

Audi A4 – Powertrain

The quattro permanent four-wheel drive system uses the latest Torsen T-3 centre differential, with a default 40:60 front to rear asymmetric torque distribution ratio (used first on the B7 RS4) as standard. (Previous A4 quattro models split torque with a default front:rear 50:50). The additional torque bias applied to the rear wheels helps mimic the driving dynamics of rear wheel drive cars.
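As a simple arithmetic illustration of the default 40:60 front:rear split (the helper function is hypothetical; 440 N·m is the S4's quoted peak torque):

```python
# Illustrative sketch (helper name invented): divide a total engine torque
# according to the quattro system's nominal 40:60 front:rear default split.
def torque_split(total_nm: float, front_share: float = 0.40):
    """Return (front, rear) torque in N*m for the given front share."""
    front = total_nm * front_share
    return front, total_nm - front

front, rear = torque_split(440.0)   # S4's quoted 440 N*m peak torque
print(front, rear)
```

Under load, the Torsen differential shifts torque away from this default, so the split is a resting bias rather than a fixed allocation.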

Audi A4 – Powertrain

Audi was reported to be dropping the 3.2L V6 models for the 2010 model year, but still offered them as of August 2011 (Germany).

Audi A4 – Powertrain

All petrol engines use Fuel Stratified Injection (FSI), and all diesel engines use common rail fuel delivery (at a pressure of 1,600 bar (23,000 psi)) with the piezo injectors of their Turbocharged Direct Injection engines.

Mind uploading – Brain imaging

It may also be possible to create functional 3D maps of the brain activity, using advanced neuroimaging technology, such as functional MRI (fMRI, for mapping change in blood flow), Magnetoencephalography (MEG, for mapping of electrical currents), or combinations of multiple methods, to build a detailed three-dimensional model of the brain using non-invasive and non-destructive methods

Mind uploading – Rodent brain simulation

The initial goal of the project, completed in December 2006, was the simulation of a rat neocortical column, which can be considered the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought), containing 10,000 neurons (and 10^8 synapses)

Mind uploading – Rodent brain simulation

Ultimately the goal of this prize is to generate a whole brain map which may be used in support of separate efforts to upload and possibly ‘reboot’ a mind in virtual space.

Muslim Brotherhood – Bahrain

In Bahrain, the Muslim Brotherhood is represented by the Al Eslah Society and its political wing, the Al-Menbar Islamic Society

Muslim Brotherhood – Bahrain

In March 2009, the Shi’a group The Islamic Enlightenment Society held its annual conference with the announced aim of diffusing tension between Muslim branches. The society invited national Sunni and Shi’a scholars to participate. Bahraini independent Salafi religious scholars Sheikh Salah Al Jowder and Sheikh Rashid Al Muraikhi, and Shi’a clerics Sheikh Isa Qasim and Abdulla Al Ghoraifi spoke about the importance of sectarian cooperation. Additional seminars were held throughout the year.

Muslim Brotherhood – Bahrain

In 2010, the U.S. government sponsored the visit of Al-Jowder, described as a prominent Sunni cleric, to the United States for a three-week interfaith dialogue program in several cities.

Neuroprosthetics – Traumatic Brain Injury

More than 1.7 million people in the United States suffer a traumatic brain injury every year. Orthoses for TBI patients aim to control limb movement via devices that read neurons in the brain, calculate the limb trajectory, and stimulate the needed motor pools to produce movement (Anderson; Cole at NIH, “Computer software as an orthosis for Brain Injury”).

Sintering – Densification, vitrification and grain growth

Since densification of powders requires high temperatures, grain growth naturally occurs during sintering

Sintering – Densification, vitrification and grain growth

For densification to occur at a quick pace it is essential to have (1) a substantial amount of liquid phase, (2) near-complete solubility of the solid in the liquid, and (3) wetting of the solid by the liquid

Sintering – Densification, vitrification and grain growth

Densification requires constant capillary pressure; solution-precipitation material transfer alone would not produce densification. Further densification involves additional particle movement while the particles undergo grain growth and changes of grain shape. Shrinkage results when the liquid slips between particles and increases the pressure at points of contact, causing material to move away from the contact areas and forcing the particle centers to draw nearer to each other.

Sintering – Densification, vitrification and grain growth

The sintering of liquid-phase materials involves a fine-grained solid phase to create the needed capillary pressures proportional to its diameter, and the liquid concentration must also create the required capillary pressure within range, else the process ceases

Sintering – Grain growth

Abnormal growth is when a few grains grow much larger than the remaining majority.

Sintering – Grain boundary energy/tension

The atoms in the GB are normally in a higher energy state than their equivalents in the bulk material, due to their more stretched bonds, which gives rise to a GB tension. This extra energy that the atoms possess is called the grain boundary energy, γGB. The grain tends to minimize this extra energy by making the grain boundary area smaller, and this change requires energy.

Sintering – Grain boundary energy/tension

“Or, in other words, a force has to be applied, in the plane of the grain boundary and acting along a line in the grain-boundary area, in order to extend the grain-boundary area in the direction of the force. The force per unit length, i.e. tension/stress, along the line mentioned is γGB. On the basis of this reasoning it would follow:

dW = γGB dA

with dW as the work done by the force and dA as the increase of grain-boundary area per unit length along the line in the grain-boundary area considered.” [pg 478]

Sintering – Grain boundary energy/tension

The GB tension can also be thought of as the attractive forces between the atoms at the surface and the tension between these atoms is due to the fact that there is a larger interatomic distance between them at the surface compared to the bulk (i.e

Sintering – Grain boundary energy/tension

holds true. For solids, on the other hand, diffusion of atoms to the surface might not be sufficient and the surface tension can vary with an increase in surface area. For a solid, one can derive an expression for the change in Gibbs free energy, dG, upon the change of GB area, dA. dG is given by

dG = γGB dA + A dγGB

Sintering – Grain boundary energy/tension

The grain boundary energy γGB is normally expressed in units of J/m², while the grain boundary tension is normally expressed in units of N/m, since they are different physical properties.

Sintering – Reducing grain growth

This decrease in net chemical potential will decrease the grain boundary velocity and therefore grain growth.

Sintering – Reducing grain growth

Because it is energetically favorable for particles to reside in the grain boundaries, they exert a force in the opposite direction to grain boundary migration, which slows grain growth.

Sintering – Reducing grain growth

where r is the radius of the particle and γ the interfacial energy of the boundary. If there are N particles per unit volume, their volume fraction f is

f = (4/3)πr³N

Sintering – Reducing grain growth

assuming they are randomly distributed. A boundary of unit area will intersect all particles within a volume of 2r, which is 2Nr particles. So the number of particles n intersecting a unit area of grain boundary is:

n = 2Nr = 3f/(2πr²)

Sintering – Reducing grain growth

Now assuming that the grains grow only due to the influence of curvature, the driving force of growth is 2γ/R, where (for a homogeneous grain structure) R approximates to the mean diameter of the grains. Equating this driving force to the pinning pressure exerted by the particles gives the critical diameter that has to be reached before the grains cease to grow:

2γ/D_crit = 3fγ/(2r)

Sintering – Reducing grain growth

This can be reduced to D_crit = 4r/(3f), so the critical diameter of the grains is dependent on the size and volume fraction of the particles at the grain boundaries.
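As a numerical sketch of this Zener-type limit (the particle values below are invented examples), the critical grain diameter D_crit = 4r/(3f) and the boundary-intersection density n = 3f/(2πr²) can be evaluated directly:

```python
# Numerical sketch of Zener pinning (example values invented): for particle
# radius r and volume fraction f, grain growth stalls near D_crit = 4r/(3f).
from math import pi

def zener_critical_diameter(r: float, f: float) -> float:
    """Critical grain diameter (m) at which curvature driving force
    2*gamma/D balances the pinning pressure 3*f*gamma/(2*r)."""
    return 4 * r / (3 * f)

def particles_per_boundary_area(r: float, f: float) -> float:
    """n = 2*N*r with N = 3*f / (4*pi*r**3), i.e. n = 3*f / (2*pi*r**2)."""
    return 3 * f / (2 * pi * r ** 2)

r, f = 0.5e-6, 0.02            # 0.5 um particles at 2 % volume fraction
print(zener_critical_diameter(r, f))   # on the order of tens of micrometres
```

Note that the interfacial energy γ cancels in the critical-diameter expression, so the limit depends only on particle size and volume fraction, as the text states.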

Sintering – Reducing grain growth

It has also been shown that small bubbles or cavities can act as inclusions

Sintering – Reducing grain growth

More complicated interactions which slow grain boundary motion include interactions of the surface energies of the two grains and the inclusion and are discussed in detail by C.S. Smith.

Systems development life-cycle – Training and transition

Once a system has been stabilized through adequate testing, the SDLC ensures that proper training on the system is performed or documented before transitioning the system to its support staff and end users.

Systems development life-cycle – Training and transition

Training usually covers operational training for those people who will be responsible for supporting the system as well as training for those end users who will be using the system after its delivery to a production operating environment.

Systems development life-cycle – Training and transition

After training has been successfully completed, systems engineers and developers transition the system to its final production environment, where it is intended to be used by its end users and supported by its support and operations staff.

Occupational health psychology – Job strain and CVD

A review of 17 longitudinal studies having reasonably high internal validity found that 8 showed a significant relation between job strain and CVD and 3 more showed a nonsignificant relation

Isolated brain

An isolated brain, however, is more typically attached to an artificial perfusion device rather than to a biological body.

Isolated brain

The brains of many different organisms have been kept alive in-vitro for hours, or in some cases days. The central nervous system of invertebrate animals is often easily maintained, as they need less oxygen and to a larger extent get their oxygen from CSF; for this reason their brains are more easily maintained without perfusion. Mammalian brains, on the other hand, survive far less well without perfusion, and an artificial blood perfusate is usually used.

Isolated brain

For methodological reasons, most research on isolated mammalian brains has been done with guinea pigs. These animals have a significantly larger basilar artery (a major artery of the brain) compared to rats and mice, which makes cannulation (to supply CSF) much easier.

Isolated brain – History

1812 – Julien Jean César Le Gallois (a.k.a. Legallois) put forth the original idea for resuscitating severed heads through the use of blood transfusion.

Isolated brain – History

1818 – Mary Shelley published Frankenstein; or, the Modern Prometheus

Isolated brain – History

1836 – Astley Cooper showed in rabbits that compression of the carotid and vertebral arteries leads to death of an animal; such deaths can be prevented if the circulation of oxygenated blood to the brain is rapidly restored.

Isolated brain – History

1857 – Charles Brown-Sequard decapitated a dog, waited ten minutes, attached four rubber tubes to the arterial trunks of the head, and injected blood containing oxygen by means of a syringe. Two or three minutes later voluntary movements of the eyes and muscles of the muzzle resumed. After cessation of oxygenated blood transfusion movements stopped.

Isolated brain – History

1887 – Jean Baptiste Vincent Laborde made what appears to be first recorded attempt to revive the heads of executed criminals by connecting the carotid artery of the severed human head to the carotid artery of a large dog. According to Laborde’s account, in isolated experiments a partial restoration of brain function was attained.

Isolated brain – History

1912 – Corneille Heymans maintained life in an isolated dog’s head by connecting the carotid artery and jugular vein of the severed head to the carotid artery and jugular vein of another dog. Partial functioning in the severed head was maintained for a few hours.

Isolated brain – History

1928 – Sergey Bryukhonenko showed life could be maintained in the severed head of a dog by connecting the carotid artery and jugular vein to an artificial circulation machine.

Isolated brain – History

1963 – Robert J. White isolated the brain from one monkey and attached it to the circulatory system of another animal.

Isolated brain – In philosophy

A contemporary version of the argument originally given by Descartes in Meditations on First Philosophy (i.e., that he could not trust his perceptions on the grounds that an evil demon might, conceivably, be controlling his every experience), the brain in a vat is the idea that a brain can be fooled into anything when fed appropriate stimuli.

Isolated brain – In philosophy

According to such science fiction stories, the computer would then be simulating a Virtual Reality (including appropriate responses to the brain’s own output) and the person with the “disembodied” brain would continue to have perfectly normal conscious experiences without these being related to objects or events in the real world.

Isolated brain – Growing an isolated brain

Isolated biological “brains”, grown from cultured neurons which were originally separated, have been developed. These are not the same thing as the brains of organisms, but they have been used to control some simple robotic systems.

Isolated brain – Growing an isolated brain

In 2004 Thomas DeMarse and Karl Dockendorf made an adaptive flight control with living neuronal networks on microelectrode arrays.

Isolated brain – Growing an isolated brain

Teams at the Georgia Institute of Technology and the University of Reading have created neurological entities integrated with a robot body. The brain receives input from sensors on the robot body and the resultant output from the brain provides the robot’s only motor signals.

Isolated brain – Isolated brains in fiction

Many people in the Ghost in the Shell manga and anime franchise possess cyberbrains, which can sustain a modified human brain within a cybernetic body indefinitely.

Isolated brain – Isolated brains in fiction

In the Fallout series of games, isolated brains are used to control robots.

Isolated brain – Isolated brains in fiction

The Mi-go aliens in the Cthulhu Mythos of H. P. Lovecraft, first appearing in the story “The Whisperer in Darkness” (1931), can transport humans from Earth to Pluto (and beyond) and back again by removing the subject’s brain and placing it into a “brain cylinder”, which can be attached to external devices to allow it to see, hear, and speak.

Isolated brain – Isolated brains in fiction

In Alexander Beliaev’s novel Head of Prof. Dowell (1925), Professor Dowell discovers a way of keeping heads of dead people alive and even to give them new bodies. After his death Dowell himself becomes a subject of such an experiment

Isolated brain – Isolated brains in fiction

In Donovan’s Brain (see term), the 1942 science fiction novel by Curt Siodmak (filmed three times in different versions: 1942, 1953 and 1962), the brain of a ruthless millionaire is kept alive in a tank where it grows to monstrous proportions and powers.

Isolated brain – Isolated brains in fiction

Vagrant enjoyed a sentient dragon’s life for a few decades after that, until the body grew too senile, and on the threshold of the dragon’s death the brain was removed again to assume control over a starship.

Isolated brain – Isolated brains in fiction

Additionally, in the modern Doctor Who series (2005–present), the recurring antagonists known as the Cybermen are presented as human brains (in one instance, an entire human head) encased in mechanical exoskeletons, connected by an artificial nervous system; this is ostensibly done as an “upgrade” from the comparatively fragile human body to a far more durable and longer-lasting shell

Isolated brain – Isolated brains in fiction

In the Legends of Dune prequel trilogy to the novel Dune, Cymeks are disembodied brains that wear robotic bodies.

Isolated brain – Isolated brains in fiction

In Robocop 2, the brain, eyes, and much of the nervous system of the Detroit drug lord Cain is harvested by OCP officials to use in their plans for an upgraded “Robocop 2” cyborg. These systems are stored in a vat shortly after the surgery, where the disembodied Cain can still see the remains of his former body being discarded before being placed into the fitted robotic skeleton.

Isolated brain – Isolated brains in fiction

The B’omarr Monks, of the Star Wars Universe, would surgically remove their brains from their bodies and continue their existence as a brain in a jar. They believe that cutting themselves off from civilization and all corporeal distractions leads to enlightenment. These monks are easily identified in Return of the Jedi as the spider-like creature that walks past C-3PO as he enters Jabba’s Palace.

Isolated brain – Isolated brains in fiction

In the animated series Futurama, numerous technological advances have been made by the 31st century. The ability to keep heads alive in jars was invented by Ron Popeil (who has a guest cameo in “A Big Piece of Garbage”), and apparently Dick Clark of Dick Clark’s New Year’s Rockin’ Eve fame is still doing the countdown in the year 2999. This has resulted in many political figures and celebrities remaining active, which became the writers’ excuse to feature and poke fun at celebrities in the show.

Isolated brain – Isolated brains in fiction

For example, The Ship Who Sang (1969), a short story collection by science fiction author Anne McCaffrey, is about the brainship Helva.

Isolated brain – Isolated brains in fiction

The video game Cortex Command revolves around the idea of brains being separated from their physical bodies and used to control units on a battlefield.

Isolated brain – Isolated brains in fiction

The science fantasy television series LEXX includes a robot head containing human brain tissue. Also, whenever the current Divine Shadow’s body dies, his brain is removed and placed in a device that allows him to speak, and is kept with the rest of the Divine Predecessors.

Isolated brain – Isolated brains in fiction

In the movie Blood Diner, two cannibal brothers bring their uncle’s (isolated) brain back to life to help them in their quest to restore life to the five-million-year-old goddess Shitaar. Their uncle’s brain instructs them to collect the parts required for resurrecting Shitaar: virgins, assorted body parts from whores, and the ingredients for a “blood buffet”.

Isolated brain – Isolated brains in fiction

In the science fiction comedy film The Man with Two Brains, the protagonist, a pioneering neurosurgeon, falls in love with a disembodied brain that was able to communicate with him telepathically.

Isolated brain – Further reading

Fleming, Chet (February 1988). If We Can Keep a Severed Head Alive…Discorporation and U.S. Patent 4,666,425. Polinym Press. ISBN 0-942287-02-9.

Isolated brain – Further reading

Librizzi L, Janigro D, De Biasi S, de Curtis M. Blood–brain barrier preservation in the in vitro isolated guinea pig brain preparation. J Neurosci Res. 2001 Oct 15;66(2):289-97. PMID 11592126

Isolated brain – Further reading

Mazzetti S, Librizzi L, Frigerio S, de Curtis M, Vitellaro-Zuccarello L. Molecular anatomy of the cerebral microvessels in the isolated guinea-pig brain. Brain Res. 2004 Feb 27;999(1):81–90. PMID 14746924

Isolated brain – Further reading

Mühlethaler M, de Curtis M, Walton K, Llinás R. The isolated and perfused brain of the guinea-pig in vitro. Eur J Neurosci. 1993 Jul 1;5(7):915-26. PMID 8281302

Isolated brain – Further reading

Kerkut GA. Studying the isolated central nervous system; a report on 35 years: more inquisitive than acquisitive. Comp Biochem Physiol A. 1989;93(1):9–24. Review. PMID 2472918

Gender – Brain

Haier and colleagues at the universities of New Mexico and California (Irvine) found, using brain mapping, that men have more grey matter related to general intelligence than women, and women have more white matter related to intelligence than men – the ratio between grey and white matter is 4% higher for men than women.

Gender – Brain

Grey matter is used for information processing, while white matter consists of the connections between processing centers. Other differences are measurable but less pronounced. Most of these differences are produced by hormonal activity, ultimately derived from the Y chromosome and sexual differentiation. However, differences that arise directly from gene activity have also been observed.

Gender – Brain

A sexual dimorphism in levels of expression in brain tissue was observed by quantitative real-time PCR, with females presenting an up to 2-fold excess in the abundance of PCDH11X transcripts. We relate these findings to sexually dimorphic traits in the human brain. Interestingly, PCDH11X/Y gene pair is unique to Homo sapiens, since the X-linked gene was transposed to the Y chromosome after the human–chimpanzee lineages split.

Gender – Brain

It also appears that in several simplified cases this coding operates differently, but in some ways equivalently, in the brains of men and women

Gender – Brain

Two of the main fields that study brain structure, biological (and other) causes and behavioral (and other) results are brain neurology and biological psychology. Cognitive science is another important discipline in the field of brain research.

Bahrain

The planned Qatar Bahrain Causeway will link Bahrain and Qatar and become the world’s longest marine causeway

Bahrain

Formerly a state, Bahrain was declared a “Kingdom” in 2002

Bahrain

As of 2012, Bahrain had a high Human Development Index (ranked 48th in the world) and was recognised by the World Bank as a high income economy. The country is a member of the United Nations, the World Trade Organisation, the Arab League, the Non-Aligned Movement and the Organization of the Islamic Conference as well as a founding member of the Cooperation Council for the Arab States of the Gulf. Bahrain was designated a major non-NATO ally by the George W. Bush administration in 2001.

Bahrain

The Bahrain Formula One Grand Prix takes place at the Bahrain International Circuit.

Bahrain – Etymology

In Arabic, Bahrayn is the dual form of bahr (“sea”), so al-Bahrayn means “the Two Seas” although which two seas were originally intended remains in dispute. The term appears five times in the Qur’an, but does not refer to the modern island—originally known to the Arabs as Awal—but rather to the oases of al-Katif and Hadjar (modern al-Hasa). It is unclear when the term began to refer exclusively to the Awal islands, but it was probably after the 15th century.

Bahrain – Etymology

Today, al-Hasa belongs to Saudi Arabia and Bahrain’s “two seas” are instead generally taken to be the bay east and west of the island, the seas north and south of the island, or the salt and fresh water present above and below the ground. In addition to wells, there are areas of the sea north of Bahrain where fresh water bubbles up in the middle of the salt water as noted by visitors since antiquity.

Bahrain – Etymology

An alternate theory with regard to Bahrain’s toponymy is offered by the al-Ahsa region, which suggests that the two seas were the Great Green Ocean and a peaceful lake on the Arabian mainland. Another supposition, by al-Jawahari, suggests that the more formal name Bahri (lit. “belonging to the sea”) would have been misunderstood and so was not adopted.

Bahrain – Etymology

Until the late Middle Ages, “Bahrain” referred to the larger historical region of Bahrain that included Al-Ahsa, Al-Qatif (both now within the Eastern Province of Saudi Arabia) and the Awal Islands (now the Bahrain Islands). The region stretched from Basra in Iraq to the Strait of Hormuz in Oman. This was Iqlīm al-Bahrayn, the “Bahrayn Province”. The exact date at which the term “Bahrain” began to refer solely to the Awal archipelago is unknown.

Bahrain – Pre-Islamic period

Bahrain may have been associated with the Dilmun civilisation, an important Bronze Age trade centre linking Mesopotamia and the Indus Valley

Bahrain – Pre-Islamic period

At this time, Bahrain comprised the southern Sassanid province along with the Persian Gulf’s southern shore.

Bahrain – Pre-Islamic period

However, Bahrain was also a center of Nestorian Christianity, including two of its bishoprics.

Bahrain – Islam, Persian and Portuguese control

Traditional Islamic accounts state that Al-Alaa Al-Hadrami was sent as an envoy to the Bahrain region by the prophet Muhammad in 628 AD and that Munzir ibn-Sawa al-Tamimi, the local ruler, responded to his mission and converted the entire area.

Bahrain – Islam, Persian and Portuguese control

Thereafter, the Qarmatians demanded tribute from the caliph in Baghdad, and in 930 AD sacked Mecca and Medina, bringing the sacred Black Stone back to their base in Ahsa, in medieval Bahrain, for ransom

Bahrain – Islam, Persian and Portuguese control

In 1253, the Bedouin Usfurids brought down the Uyunid dynasty, thereby gaining control over eastern Arabia, including the islands of Bahrain

Bahrain – Islam, Persian and Portuguese control

In 1753, the Huwala clan of Nasr Al-Madhkur invaded Bahrain on behalf of the Iranian Zand leader Karim Khan Zand and restored direct Iranian rule.

Bahrain – Rise of the Bani Utbah

During that time, they started purchasing date palm gardens in Bahrain; a document shows that 81 years before arrival of the Al-Khalifa, one of the shaikhs of the Al Bin Ali tribe (an offshoot of the Bani Utbah) had bought a palm garden from Mariam bint Ahmed Al Sindi in Sitra island.

Bahrain – Rise of the Bani Utbah

Later, different Arab family clans and tribes from Qatar moved to Bahrain to settle after the fall of Nasr Al-Madhkur of Bushehr

Bahrain – Al Khalifa ascendancy

In 1820, the Al Khalifa tribe were recognised by Great Britain as the rulers (“Al-Hakim” in Arabic) of Bahrain after the signing of a treaty.

Bahrain – Al Khalifa ascendancy

In 1860, the Al Khalifas used the same tactic when the British tried to overpower Bahrain. Writing letters to both the Persians and the Ottomans, the Al Khalifas agreed in March to place Bahrain under Ottoman protection, the Ottomans having offered the better terms. Eventually the Government of British India overpowered Bahrain when the Persians refused to protect it. Colonel Pelly then signed a new treaty with the Al Khalifas, placing Bahrain under British rule and protection.

Bahrain – Al Khalifa ascendancy

Further agreements in 1880 and 1892 sealed Bahrain’s status as a British protectorate.

Bahrain – Al Khalifa ascendancy

Sir Arnold Wilson, Britain’s representative in the Persian Gulf and author of The Persian Gulf, arrived in Bahrain from Muscat at this time

Bahrain – Early 20th Century reforms

In 1911, a group of Bahraini merchants demanded restrictions on the British influence in the country

Bahrain – Early 20th Century reforms

Britain’s interest in Bahrain’s development was motivated by concerns over Saudi and Iranian ambitions in the region.

Bahrain – Discovery of petroleum and WWII

The Bahrain Petroleum Company (Bapco), a subsidiary of the Standard Oil Company of California (Socal), discovered oil in 1931 and production began the following year. This was to bring rapid modernisation to Bahrain. Relations with the United Kingdom became closer, as evidenced by the British Royal Navy moving its entire Middle Eastern command from Bushehr in Iran to Bahrain in 1935.

Bahrain – Discovery of petroleum and WWII

Bahrain participated in the Second World War on the Allied side, joining on 10 September 1939. On 19 October 1940, four Italian SM.82 bombers bombed Bahrain alongside the Dhahran oilfields in Saudi Arabia, targeting Allied-operated oil refineries. Although minimal damage was caused in both locations, the attack forced the Allies to upgrade Bahrain’s defences, an action which further stretched Allied military resources.

Bahrain – Discovery of petroleum and WWII

In 2008, Bahrain’s king appealed to former Bahraini Jews living in the US and UK to return to the country, offering compensation and citizenship.

Bahrain – Discovery of petroleum and WWII

In the 1950s, the National Union Committee, formed by reformists following sectarian clashes, demanded an elected popular assembly, removal of Belgrave and carried out a number of protests and general strikes. In 1965 a month-long uprising broke out after hundreds of workers at the Bahrain Petroleum Company were laid off.

Bahrain – Abandonment of Iranian claim

At this time, Britain set out to change the demographics of Bahrain

Bahrain – Abandonment of Iranian claim

Eventually, Iran and Britain agreed to put the matter of sovereignty over Bahrain to international judgment and requested that the United Nations Secretary-General take on this responsibility.

Bahrain – Abandonment of Iranian claim

Iran pressed hard for a referendum in Bahrain in the face of strong opposition from both the British and the Bahraini leaders. Their opposition was based on Al Khalifa’s view that such a move would negate 150 years of their clan’s rule in the country. In the end, as an alternative to the referendum, Iran and Britain agreed to request the United Nations conduct a survey in Bahrain that would determine the political future of the territory.

Bahrain – Abandonment of Iranian claim

Report no. 9772 was submitted to the UN Secretary-General, and on 11 May 1970 the United Nations Security Council endorsed Winspeare’s conclusion that an overwhelming majority of the people wished recognition of Bahrain’s identity as a fully independent and sovereign state, free to decide its own relations with other states. Both Britain and Iran accepted the report and brought their dispute to a close.

Bahrain – Independence

The country had already begun diversification of its economy and benefited further from the Lebanese Civil War in the 1970s and 1980s, when Bahrain replaced Beirut as the Middle East’s financial hub after Lebanon’s large banking sector was driven out of the country by the war.

Bahrain – Independence

Following the 1979 Islamic revolution in Iran, Bahraini Shi’a fundamentalists orchestrated a failed coup attempt in 1981 under the auspices of a front organisation, the Islamic Front for the Liberation of Bahrain.

Bahrain – Independence

As part of the adoption of the National Action Charter on 14 February 2002, Bahrain changed its formal name from the State (dawla) of Bahrain to the Kingdom of Bahrain.

Bahrain – Independence

Following the political liberalisation of the country, Bahrain negotiated a free trade agreement with the United States in 2004.

Bahrain – Bahraini uprising

Inspired by the regional Arab Spring, large protests started in Bahrain in early 2011. The government initially allowed protests to continue following a pre-dawn raid on protesters camped in Pearl Roundabout. A month later it requested security assistance from Saudi Arabia and other GCC countries and declared a three-month state of emergency. The government then launched a crackdown on the opposition that included thousands of arrests.

Bahrain – Geography

Bahrain is a generally flat and arid archipelago in the Persian Gulf, east of Saudi Arabia. It consists of a low desert plain rising gently to a low central escarpment with the highest point the 134 m (440 ft) Mountain of Smoke (Jabal ad Dukhan). Bahrain had a total area of 665 km2 (257 sq mi) but due to land reclamation, the area increased to 767 km2 (296 sq mi), which is slightly larger than the Isle of Man.

Bahrain – Geography

Bahrain has mild winters and very hot, humid summers

Bahrain – Geography

Four alternatives for managing groundwater quality are available to the water authorities in Bahrain; priority areas for each are proposed based on the type and extent of each salinisation source, as well as on groundwater use in the area.

Bahrain – Climate

The Zagros Mountains across the Persian Gulf in Iran cause low-level winds to be directed toward Bahrain. Dust storms from Iraq and Saudi Arabia, transported by northwesterly winds locally called the Shamal wind, cause reduced visibility in the months of June and July.

Bahrain – Climate

Due to the Persian Gulf area’s low moisture, summers are very hot and dry. The seas around Bahrain are very shallow, heating up quickly in the summer to produce high humidity, especially at night. Summer temperatures may reach up to 50 °C (122 °F) under the right conditions. Rainfall in Bahrain is minimal and irregular. Rainfalls mostly occur in winter, with a recorded maximum of 71.8 mm (2.83 in).

Bahrain – Climate

Source: World Meteorological Organisation (UN)

Bahrain – Biodiversity

In 2003, Bahrain banned the capture of sea cows, marine turtles and dolphins within its territorial waters.

Bahrain – Biodiversity

The Hawar Islands Protected Area provides valuable feeding and breeding grounds for a variety of migratory seabirds; it is an internationally recognised site for bird migration. The breeding colony of Socotra Cormorant on the Hawar Islands is the largest in the world, and the dugongs foraging around the archipelago form the second largest dugong aggregation after Australia’s.

Bahrain – Biodiversity

Bahrain has five designated protected areas, four of which are marine environments.

Bahrain – Politics

Bahrain under the Al-Khalifa regime claims to be a constitutional monarchy headed by the King, Shaikh Hamad bin Isa Al Khalifa; however, given its dictatorial oppression, lack of parliamentary power and lack of an independent judiciary, most observers assert that Bahrain is an absolute monarchy.

Bahrain – Politics

Bahrain has a bicameral National Assembly (al-Jam’iyyah al-Watani) consisting of the Shura Council (Majlis Al-Shura) with 40 seats and the Council of Representatives (Majlis Al-Nuwab) with 40 seats

Bahrain – Politics

In 1973, the country held its first parliamentary elections; however, two years later, the late emir dissolved the parliament and suspended the constitution after it rejected the State Security Law

Bahrain – Politics

The opening up of politics saw big gains for both Shi’a and Sunni Islamists in elections, which gave them a parliamentary platform to pursue their policies.

Bahrain – Politics

Analysts of democratisation in the Middle East cite the Islamists’ references to respect for human rights in their justification for these programmes as evidence that these groups can serve as a progressive force in the region

Bahrain – Human rights

The period between 1975 and 1999, known as the “State Security Law Era”, saw a wide range of human rights violations including arbitrary arrests, detention without trial, torture and forced exile. After Emir Hamad Al Khalifa (now king) succeeded his father Isa Al Khalifa in 1999, he introduced wide reforms and human rights improved significantly. These moves were described by Amnesty International as representing a “historic period of human rights”.

Bahrain – Human rights

Human rights conditions started to decline by 2007, when torture began to be employed again. In 2011, Human Rights Watch described the country’s human rights situation as “dismal”. Due to this, Bahrain lost some of the high international rankings it had gained before.

Bahrain – Human rights

In 2011, Bahrain was criticised for its crackdown on the Arab spring uprising. In September, a government appointed commission confirmed reports of grave human rights violations including systematic torture. The government promised to introduce reforms and avoid repeating the “painful events”. However, reports by human rights organisations Amnesty International and Human Rights Watch issued in April 2012 said the same violations were still happening.

Bahrain – Women’s rights

When Bahrain was elected to head the United Nations General Assembly in 2006, it appointed lawyer and women’s rights activist Haya bint Rashid Al Khalifa as President of the United Nations General Assembly, only the third woman in history to head the world body.

Bahrain – Women’s rights

In 2006, Lateefa Al Gaood became the first female MP after winning by default. The number rose to four after the 2011 by-elections. In 2008, Houda Nonoo was appointed ambassador to the United States making her the first Jewish ambassador of any Arab country. In 2011, Alice Samaan, a Christian woman was appointed ambassador to the UK.

Bahrain – Media

Bahraini journalists risk prosecution for offences which include “undermining” the government and religion. Self-censorship is widespread. Journalists were targeted by officials during anti-government protests in 2011. Three editors from the opposition daily Al-Wasat were sacked and later fined for publishing “false” news. Several foreign correspondents were expelled.

Bahrain – Media

Bahrain will host the Saudi-financed Alarab News Channel, expected to launch in December 2012

Bahrain – Media

By December 2011, Bahrain had 694,000 internet users. The platform “provides a welcome free space for journalists, although one that is increasingly monitored”, according to Reporters Without Borders. Rigorous filtering targets political, human rights, religious material and content deemed obscene. Bloggers and other netizens were among those detained during protests in 2011.

Bahrain – Military

The kingdom has a small but well equipped military called the Bahrain Defence Force (BDF), numbering around 13,000 personnel. The supreme commander of the Bahraini military is King Hamad bin Isa Al Khalifa and the deputy supreme commander is the Crown Prince, Salman bin Hamad bin Isa Al Khalifa.

Bahrain – Military

The Government of Bahrain has close relations with the United States, having signed a cooperative agreement with the United States military, and has provided the United States a base in Juffair since the early 1990s, although a US naval presence has existed there since 1948.

Bahrain – Foreign relations

Relations with Iran tend to be tense as a result of the failed coup of 1981, which Bahrain blames on Iran, and of occasional claims of Iranian sovereignty over Bahrain by ultra-conservative elements of the Iranian public.

Bahrain – Governorates

In 1960, Bahrain comprised four municipalities: Manama, Hidd, Al Muharraq, and Riffa.

Bahrain – Governorates

The first municipal elections to be held in Bahrain after independence in 1971 were in 2002; the most recent were held in 2010.

Bahrain – Governorates

After 3 July 2002, Bahrain was split into five administrative governorates, each of which has its own governor.

Bahrain – Economy

According to a January 2006 report by the United Nations Economic and Social Commission for Western Asia, Bahrain has the fastest growing economy in the Arab world. Bahrain also has the freest economy in the Middle East and is twelfth freest overall in the world based on the 2011 Index of Economic Freedom published by the Heritage Foundation/Wall Street Journal.

Bahrain – Economy

Petroleum production and processing is Bahrain’s largest export sector, accounting for 60% of export receipts, 70% of government revenues, and 11% of GDP.

Bahrain – Economy

In 2004, Bahrain signed the US-Bahrain Free Trade Agreement, which will reduce certain trade barriers between the two nations

Bahrain – Economy

Unemployment, especially among the young, and the depletion of both oil and underground water resources are major long-term economic problems. In 2008, the jobless figure was at 4%, with women overrepresented at 85% of the total. In 2007, Bahrain became the first Arab country to institute unemployment benefits as part of a series of labour reforms instigated under Minister of Labour Dr. Majeed Al Alawi.

Bahrain – Tourism

As a tourist destination, Bahrain received over eight million visitors in 2008, though the exact number varies yearly. Most of these are from the surrounding Arab states, although an increasing number hail from outside the region due to growing awareness of the kingdom’s heritage and its higher profile as a result of the Bahrain International F1 Circuit.

Bahrain – Tourism

Some of the popular historical tourist attractions in the kingdom are the Al Khamis Mosque, one of the oldest mosques in the region; the Arad Fort in Muharraq; the Barbar Temple, an ancient temple from the Dilmunite period of Bahrain; and the A’ali Burial Mounds and the Saar temple.

Bahrain – Tourism

Bird watching (primarily in the Hawar Islands), scuba diving and horse riding are popular tourist activities in Bahrain. Many tourists from nearby Saudi Arabia and across the region visit Manama primarily for its shopping malls, such as the Bahrain City Centre and Seef Mall in the Seef district. The Manama Souq and Gold Souq in the old district are also popular with tourists.

Bahrain – Tourism

Since 2005, Bahrain annually hosts a festival in March, titled Spring of Culture, which features internationally renowned musicians and artists performing in concerts. Manama was named the Arab Capital of Culture for 2012 and Capital of Arab Tourism for 2013 by the Arab League. The 2012 festival featured concerts starring Andrea Bocelli, Julio Iglesias and other musicians.

Bahrain – Infrastructure

Bahrain has one main international airport, the Bahrain International Airport (BIA), which is located on the island of Muharraq, in the north-east. The airport handled more than 100,000 flights and more than 8 million passengers in 2010. Bahrain’s national carrier, Gulf Air, is based at and operates from the BIA.

Bahrain – Infrastructure

Bahrain has a well-developed road network, particularly in Manama. The discovery of oil in the early 1930s accelerated the creation of multiple roads and highways in Bahrain, connecting several isolated villages, such as Budaiya, to Manama.

Bahrain – Infrastructure

To the east, a bridge had connected Manama to Muharraq since 1929; a new causeway built in 1941 replaced the old wooden bridge. Currently there are three modern bridges connecting the two locations. Transits between the two islands increased after the construction of the Bahrain International Airport in 1932. Ring roads and highways were later built to connect Manama to the villages of the Northern Governorate and towards towns in central and southern Bahrain.

Bahrain – Infrastructure

The King Fahd Causeway, measuring 24 km (15 mi), links Bahrain with the Saudi Arabian mainland via the island of Umm an-Nasan

Bahrain – Infrastructure

Bahrain’s port of Mina Salman is the main seaport of the country and consists of 15 berths. In 2001, Bahrain had a merchant fleet of eight ships of 1,000 GRT or over, totaling 270,784 GRT. Private vehicles and taxis are the primary means of transportation in the city.

Bahrain – Telecommunications

In 2004, Zain (a rebranded version of MTC Vodafone) started operations in Bahrain, and in 2010 VIVA (owned by STC Group) became the third company to provide mobile services.

Bahrain – Telecommunications

The number of Bahraini internet users has risen from 40,000 in 2000 to 250,000 in 2008, or from 5.95 to 33 percent of the population

Bahrain – Demographics

In 2010, Bahrain’s population grew to 1.2 million, of which 568,399 were Bahraini and 666,172 were non-nationals. It had risen from 1.05 million (517,368 non-nationals) in 2007, the year when Bahrain’s population crossed the one million mark. Though a majority of the population is ethnically Arab, a sizeable number of people from South Asia live in the country. In 2008, approximately 290,000 Indian nationals lived in Bahrain, making them the single largest expatriate community in the country.

Bahrain – Demographics

Bahrain is the fourth most densely populated sovereign state in the world with a population density of 1,646 people per km2 in 2010. The only sovereign states with larger population densities are city states. Much of this population is concentrated in the north of the country with the Southern Governorate being the least densely populated part. The north of the country is so urbanised that it is considered by some to be one large metropolitan area.

Bahrain – Demographics

Baha’is constitute approximately 1% of Bahrain’s total population.

Bahrain – Demographics

Bahrain is unwilling to accept Syrian refugees.

Bahrain – Languages

Among the non-Bahraini population, many people speak Persian, the official language of Iran, or Urdu, the official language of Pakistan

Bahrain – Education

Education is compulsory for children between the ages of 6 and 14. Education is free for Bahraini citizens in public schools, with the Bahraini Ministry of Education providing free textbooks. Coeducation is not used in public schools, with boys and girls segregated into separate schools.

Bahrain – Education

The year 1919 marked the beginning of the modern public school system in Bahrain, when the Al-Hidaya Al-Khalifia School for boys opened in Muharraq.

Bahrain – Education

In addition to British intermediate schools, the island is served by the Bahrain School (BS)

Bahrain – Education

The Arabian Gulf University, AMA International University and the College of Health Sciences are among the only medical schools in Bahrain.

Bahrain – Health

Private hospitals are also present throughout the country, such as the International Hospital of Bahrain.

Bahrain – Health

As a result, cases of malaria and TB have declined in recent decades, with infections amongst Bahraini nationals becoming rare.

Bahrain – Health

Sickle cell anaemia and thalassaemia are prevalent in the country, with a study concluding that 18% of Bahrainis are carriers of sickle cell anaemia while 24% are carriers of thalassaemia.

Bahrain – Culture

Culture of Bahrain

Bahrain – Culture

Bahrain is sometimes described as “Middle East lite” due to its combination of modern infrastructure with a Persian Gulf identity. While Islam is the main religion, Bahrainis are known for their tolerance towards the practice of other faiths.

Bahrain – Culture

Rules regarding female attire are generally relaxed compared to regional neighbours; the traditional attire of women usually includes the hijab or the abaya. Although the traditional male attire is the thobe, which also includes traditional headdresses such as the keffiyeh, ghutra and agal, Western clothing is common in the country.

Bahrain – Culture

Bahrain decriminalized homosexuality in 1976. Another facet of Bahrain’s openness is the country’s status as the most prolific book publisher in the Arab world, with 132 books published in 2005 for a population of 700,000. In comparison, the 2005 average for the entire Arab world was seven books published per one million people, according to the United Nations Development Programme.

Bahrain – Art

The architecture of Bahrain is similar to that of its Gulf neighbours

Bahrain – Literature

Literature retains a strong tradition in the country; most traditional writers and poets write in the classical Arabic style. In recent years, the number of younger poets influenced by western literature has risen; most write in free verse and often include political or personal content. Ali Al Shargawi, a decorated longtime poet, was described in 2011 by Al Shorfa as the literary icon of Bahrain.

Bahrain – Literature

In literature, Bahrain was the site of the ancient land of Dilmun mentioned in the Epic of Gilgamesh. Legend also states that it was the location of the Garden of Eden.

Bahrain – Music

Bahrain was also the site of the first recording studio amongst the Gulf states.

Bahrain – Sports

Association football is the most popular sport in Bahrain. Bahrain’s national football team has competed multiple times at the Asian Cup and Arab Nations Cup, and has played in FIFA World Cup qualifiers, though it has never qualified for the World Cup. Bahrain has its own top-tier domestic professional football league, the Bahraini Premier League. Basketball, rugby and horse riding are also widely popular in the country.

Bahrain – Sports

Bahrain has competed in every Summer Olympics since 1984 but has never competed in the Winter Olympics.

Bahrain – Sports

The latest edition of the Bahrain Grand Prix was the 2012 Bahrain Grand Prix, which took place despite concerns about the safety of the teams and the ongoing protests in the country.

Bahrain – Sports

In 2006, Bahrain also hosted its inaugural Australian V8 Supercar event, dubbed the “Desert 400”. The V8s returned every November to the Sakhir circuit until 2010, when it was the second event of the series. The series has not returned since. The Bahrain International Circuit also features a full-length drag strip, where the Bahrain Drag Racing Club has organised invitational events featuring some of Europe’s top drag racing teams to try to raise the profile of the sport in the Middle East.

Bahrain – Sports

In April 2013, two Zimbabwean expatriates based in Bahrain became the first men to officially circumnavigate the Bahraini mainland and the Hawar Islands unassisted in single-man kayaks, taking six days. Paul Curwen and Chris Bloodworth undertook their expedition to raise funds for locally based and Zimbabwean charities.

Bahrain – Holidays

On 1 September 2006, Bahrain changed its weekend from being Thursdays and Fridays to Fridays and Saturdays, in order to have a day of the weekend shared with the rest of the world. Notable holidays in the country are listed below:

EXPRESS (data modeling language) – Algorithmic constraints

Entities and defined data types may be further constrained with WHERE rules. The EXPRESS language can describe local and global rules. For example:

ENTITY area_unit
  SUBTYPE OF (named_unit);
WHERE
  WR1: (SELF\named_unit.dimensions.length_exponent = 2) AND
       (SELF\named_unit.dimensions.mass_exponent = 0) AND
       (SELF\named_unit.dimensions.time_exponent = 0) AND
       (SELF\named_unit.dimensions.electric_current_exponent = 0) AND
       (SELF\named_unit.dimensions.thermodynamic_temperature_exponent = 0) AND
       (SELF\named_unit.dimensions.amount_of_substance_exponent = 0) AND
       (SELF\named_unit.dimensions.luminous_intensity_exponent = 0);
END_ENTITY;

This example states that an area_unit entity must have the dimensions of a length squared: the attribute dimensions.length_exponent must be equal to 2, and all other exponents of the basic SI units must be 0.

A WHERE rule can constrain a defined data type in the same way. For example, a rule such as WR1: (1 <= SELF) AND (SELF <= 7); on an integer type representing a day of the week means that the week value cannot exceed 7.

In this way, rules can be attached to entities and types. More details on the given examples can be found in ISO 10303-41.
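
The WHERE-rule check for area_unit can also be mimicked outside EXPRESS. The following Python sketch is purely illustrative (the Dimensions class and is_area_unit function are hypothetical names, not part of any EXPRESS toolchain); it validates the same constraint on a record of SI dimension exponents.

```python
from dataclasses import dataclass

@dataclass
class Dimensions:
    """SI base-unit exponents of a derived unit (all default to 0)."""
    length_exponent: int = 0
    mass_exponent: int = 0
    time_exponent: int = 0
    electric_current_exponent: int = 0
    thermodynamic_temperature_exponent: int = 0
    amount_of_substance_exponent: int = 0
    luminous_intensity_exponent: int = 0

def is_area_unit(d: Dimensions) -> bool:
    # Mirrors WR1: length exponent must be 2, every other exponent must be 0.
    return (d.length_exponent == 2
            and d.mass_exponent == 0
            and d.time_exponent == 0
            and d.electric_current_exponent == 0
            and d.thermodynamic_temperature_exponent == 0
            and d.amount_of_substance_exponent == 0
            and d.luminous_intensity_exponent == 0)
```

For instance, is_area_unit(Dimensions(length_exponent=2)) holds, while a record with any non-zero mass or time exponent fails the check.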

Compressed air energy storage – Practical constraints in transportation

In order to use air storage in vehicles or aircraft for practical land or air transportation, the energy storage system must be compact and lightweight. Energy density is the engineering term that defines these desired qualities.
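
To make the energy-density consideration concrete, here is a rough sketch under an ideal-gas, isothermal assumption (the function name and figures are illustrative, not from the source): the maximum work recoverable per unit of storage volume at pressure p is approximately p·ln(p/p0), where p0 is ambient pressure.

```python
import math

def isothermal_energy_density(p_store: float, p_ambient: float = 1.0e5) -> float:
    """Ideal-gas isothermal work recoverable per m^3 of tank volume, in J/m^3."""
    return p_store * math.log(p_store / p_ambient)

# A 300-bar (3.0e7 Pa) tank stores on the order of 1.7e8 J (~47 kWh) per cubic
# metre of tank volume, before accounting for tank mass, thermal losses, or
# real-gas effects -- the kind of figure a vehicle designer would weigh against
# the mass and volume of the pressure vessel itself.
density_300_bar = isothermal_energy_density(3.0e7)
```

Such back-of-the-envelope estimates show why tank mass, not just stored energy, dominates the practicality of compressed-air vehicles.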

Brain implant

(Brain-computer interface research also includes technology such as EEG arrays that allow interface between mind and machine but do not require direct implantation of a device.)

Brain implant

Neural implants such as deep brain stimulation and vagus nerve stimulation are increasingly becoming routine for patients with Parkinson’s disease and clinical depression respectively, proving a boon for people with diseases previously regarded as incurable.

Brain implant – Purpose

Because of the complexity of neural processing and the lack of access to action potential related signals using neuroimaging techniques, the application of brain implants has been seriously limited until recent advances in neurophysiology and computer processing power.

Brain implant – Research

Research in sensory substitution has made progress in recent years. Especially in vision, due to the knowledge of the working of the visual system, eye implants (often involving some brain implants or monitoring) have been applied with demonstrated success. For hearing, cochlear implants are used to stimulate the auditory nerve directly. The vestibulocochlear nerve is part of the peripheral nervous system, but the interface is similar to that of true brain implants.

Brain implant – Research

Multiple projects have demonstrated success at recording from the brains of animals for long periods of time. As early as 1976, researchers at the NIH led by Edward Schmidt made action potential recordings of signals from Rhesus monkey motor cortexes using immovable “hatpin” electrodes, including recording from single neurons for over 30 days, and consistent recordings for greater than three years from the best electrodes.

Brain implant – Research

The “hatpin” electrodes were made of pure iridium and insulated with Parylene-c, materials that are currently used in the Cyberkinetics implementation of the Utah array. These same electrodes, or derivations thereof using the same biocompatible electrode materials, are currently used in visual prosthetics laboratories, laboratories studying the neural basis of learning, and motor prosthetics approaches other than the Cyberkinetics probes.

Brain implant – Research

A competing series of electrodes and projects is sold by Plexon including Plextrode Series of Electrodes. These are variously the “Michigan Probes”, the microwire arrays first used at MIT, and the FMAs from MicroProbe that emerged from the visual prosthetic project collaboration between Phil Troyk, David Bradley, and Martin Bak.

Brain implant – Research

Other laboratory groups produce their own implants to provide unique capabilities not available from the commercial products.

Brain implant – Research

Breakthroughs include studies of the process of functional brain re-wiring throughout the learning of a sensory discrimination, control of physical devices by rat brains, control of robotic arms by monkeys, remote control of mechanical devices by monkeys and humans, remote control over the movements of roaches, electronic-based neuron transistors for leeches, and the first reported use of the Utah Array in a human for bidirectional signalling.

Brain implant – Research

Much research is also being done on the surface chemistry of neural implants, in an effort to design products which minimise the negative effects that an active implant can have on the brain, and that the body can have on the function of the implant.

Brain implant – Research

Another type of neural implant being experimented on is the prosthetic neuronal memory silicon chip, which imitates the signal processing done by functioning neurons that allows people’s brains to create long-term memories.

Brain implant – Rehabilitation

Brain pacemakers have been in use since 1997 to ease the symptoms of such diseases as epilepsy, Parkinson’s Disease, dystonia and recently depression.

Brain implant – Rehabilitation

Current brain implants are made from a variety of materials such as tungsten, silicon, platinum-iridium, or even stainless steel. Future brain implants may make use of more exotic materials such as nanoscale carbon fibers (nanotubes), and polycarbonate urethane.

Brain implant – Historical research on brain implants

In 1870, Eduard Hitzig and Gustav Fritsch demonstrated that electrical stimulation of the brains of dogs could produce movements. Robert Bartholow showed the same to be true for humans in 1874. By the start of the 20th century, Fedor Krause began to systematically map human brain areas, using patients that had undergone brain surgery.

Brain implant – Historical research on brain implants

Prominent research was conducted in the 1950s. Robert G. Heath experimented with aggressive mental patients, aiming to influence his subjects’ moods through electrical stimulation.

Brain implant – Historical research on brain implants

Yale University physiologist Jose Delgado demonstrated limited control of animal and human subjects’ behaviours using electronic stimulation. He invented the stimoceiver or transdermal stimulator, a device implanted in the brain to transmit electrical impulses that modify basic behaviours such as aggression or sensations of pleasure.

Brain implant – Historical research on brain implants

Delgado was later to write a popular book on mind control, called Physical Control of the Mind, where he stated: “the feasibility of remote control of activities in several species of animals has been demonstrated […] The ultimate objective of this research is to provide an understanding of the mechanisms involved in the directional control of animals and to provide practical systems suitable for human application.”

Brain implant – Historical research on brain implants

He stated that his research was only scientifically motivated, aimed at understanding how the brain works.

Brain implant – Ethical considerations

Who are good candidates to receive neural implants? What are the good uses of neural implants, and what are the bad uses? Whilst deep brain stimulation is increasingly becoming routine for patients with Parkinson's disease, there may be some behavioural side effects.

Brain implant – Ethical considerations

Some transhumanists, such as Raymond Kurzweil and Kevin Warwick, see brain implants as part of the next step in human progress and evolution, whereas others, especially bioconservatives, view them as unnatural, fearing that humankind would lose essential human qualities.

Brain implant – Brain implants in fiction and philosophy

Brain implants are now part of modern culture, but relevant philosophical precedents date back as far as René Descartes.

Brain implant – Brain implants in fiction and philosophy

In his 1637 Discourse on the Method, a study on proving self-existence, Descartes wrote that a person could not know whether an evil demon had trapped his mind in a black box and was controlling all inputs and outputs. Philosopher Hilary Putnam provided a modern parallel of Descartes' argument in his 1981 discussion of a brain in a vat, where he argues that brains directly fed with input from a computer would not be able to tell the deception from reality.

Brain implant – Brain implants in fiction and philosophy

Popular science fiction discussing brain implants and mind control became widespread in the 20th century, often with a dystopian outlook. Literature in the 1970s delved into the topic, including The Terminal Man by Michael Crichton, in which a man suffering from brain damage receives an experimental surgical brain implant designed to prevent seizures, which he abuses by triggering it for pleasure.

Brain implant – Brain implants in fiction and philosophy

Fear that the technology will be misused by the government and military is an early theme. In the 1981 BBC serial The Nightmare Man the pilot of a high-tech mini submarine is linked to his craft via a brain implant but becomes a savage killer after ripping out the implant.

Brain implant – Brain implants in fiction and philosophy

William Gibson also explores possible entertainment applications of brain implants, such as the "simstim" (simulated stimulation), a device used to record and play back experiences.

Brain implant – Brain implants in fiction and philosophy

Gibson’s work led to an explosion in popular culture references to brain implants. Its influences are felt, for example, in the 1989 roleplaying game Shadowrun, which borrowed his term “datajack” to describe a brain-computer interface. The implants in Gibson’s novels and short stories formed the template for the 1995 film Johnny Mnemonic and later, The Matrix Trilogy.

Brain implant – Brain implants in fiction and philosophy

The Gap Cycle (The Gap into): In Stephen R. Donaldson’s series of novels, the use (and misuse) of “zone implant” technology is key to several plotlines.

Brain implant – Brain implants in fiction and philosophy

Pulp fiction with implants or brain implants includes the novel series Typers, the film Spider-Man 2, the TV series Earth: Final Conflict, and numerous video games.

Brain implant – Brain implants in fiction and philosophy

Ghost in the Shell anime and manga franchise: Cyberbrain neural augmentation technology is the focus. Implants of powerful computers provide vastly increased memory capacity, total recall, and the ability to view one's own memories on an external viewing device. Users can also initiate telepathic conversations with other cyberbrain users; the downsides are cyberbrain hacking, malicious memory alteration, and the deliberate distortion of subjective reality and experience.

Brain implant – Brain implants in fiction and philosophy

In the video games PlanetSide and Chrome, players can use implants to improve their aim, run faster, and see better, along with other enhancements.

Brain implant – Brain implants in fiction and philosophy

Examples include a helicopter pilot with implanted chips that help her pilot her aircraft and analyse flight paths, velocity and spatial awareness, and a hacker with a brain-computer interface that allows direct access to computer networks and also lets him act as a "human proxy", allowing an individual in a remote location to control his actions.

Brain implant – Film

Brainstorm (1983): The military tries to take control over a new technology that can record and transfer thoughts, feelings, and sensations.

Brain implant – Film

Johnny Mnemonic (1995): The main character acts as a “mnemonic courier” by way of a storage implant in his brain, allowing him to carry sensitive information undetected between parties.

Brain implant – Film

The Manchurian Candidate (2004): As a means of mind control, the presidential hopeful Raymond Shaw unknowingly has a chip implanted in his head by Manchurian Global, a fictional geopolitical organization aimed at making parts of the government sleeper cells, or puppets for their monetary advancement.

Brain implant – Film

The extreme box-office success of the Matrix films, combined with earlier science fiction references, has made brain implants ubiquitous in popular literature.

Brain implant – Television

Blake's 7: The character Olag Gan has a brain implant, fitted after he was convicted of killing an officer of the oppressive Federation, which is supposed to prevent future aggression.

Brain implant – Television

Dark Angel: The notorious Red Series use neuro-implants, pushed into the brain stem at the base of the skull, to amp them up and hyper-adrenalize them, making them almost unstoppable. Unfortunately, the implant burns out their systems within six months to a year, killing them.

Brain implant – Television

The X-Files (episode "Duane Barry", relevant to the overarching mytharc of the series): FBI Agent Dana Scully discovers an implant set under the skin at the back of her neck which can read her every thought and change her memory through electrical signals that alter brain chemistry.

Brain implant – Television

Star Trek franchise: Members of the Borg collective are equipped with brain implants which connect them to the Borg collective consciousness.

Brain implant – Television

Fringe: The Observers use a needle-like, self-guided implant which allows them to read the minds of others at the expense of emotion. The implant also allows for short-range teleportation and increases intelligence.

Brain implant – Further reading

Berger, Theodore W.; Glanzman, Dennis L., eds. (2005). Toward replacement parts for the brain: implantable biomimetic electronics as neural prostheses. Cambridge, Mass: MIT Press. ISBN 0-262-02577-9.

Brain implant – Further reading

Gross, Dominik (2009). "Blessing or Curse? Nonpharmacological Neurocognitive Enhancement by 'Brain Engineering'". Medicine Studies: International Journal for the History, Philosophy and Ethics of Medicine & Allied Sciences 1(4), pp. 379–391.

Brain implant – Further reading

Laryionava, Katsiaryna; Gross, Dominik (2011). "Public Understanding of Neural Prosthetics in Germany: Ethical, Social and Cultural Challenges". Cambridge Quarterly of Healthcare Ethics 20(3), pp. 434–439.

Speech recognition – Training air traffic controllers

Speech recognition and synthesis techniques offer the potential to eliminate the need for a person to act as a pseudo-pilot, thus reducing training and support personnel.

Speech recognition – Training air traffic controllers

The USAF, USMC, US Army, US Navy, and FAA as well as a number of international ATC training organizations such as the Royal Australian Air Force and Civil Aviation Authorities in Italy, Brazil, and Canada are currently using ATC simulators with speech recognition from a number of different vendors.

Data center services – Technical training services

Within the umbrella of data center services, technical training services can provide skills relevant to any of the hardware, software or processes related to managing a data center, or fixing, updating, integrating or managing any of the equipment within a data center.

Constraint logic programming

In this clause, X+Y>0 is a constraint; A(X,Y), B(X), and C(Y) are literals, as in regular logic programming.

Constraint logic programming

In practice, satisfiability of the constraint store may be checked using an incomplete algorithm, which does not always detect inconsistency.

Constraint logic programming – Overview

Formally, constraint logic programs are like regular logic programs, but the body of clauses can contain constraints, in addition to the regular logic programming literals. As an example, X>0 is a constraint, and is included in the last clause of the following constraint logic program.

Constraint logic programming – Overview

As in regular logic programming, evaluating a goal such as A(X,1) requires evaluating the body of the last clause with Y=1. As in regular logic programming, this in turn requires proving the goal B(X,1). Contrary to regular logic programming, it also requires a constraint to be satisfied: X>0, the constraint in the body of the last clause.

Constraint logic programming – Overview

Rather than proceeding in the evaluation of B(X,1) and then checking whether the resulting value of X is positive afterwards, the interpreter stores the constraint X>0 and then proceeds in the evaluation of B(X,1); this way, the interpreter can detect violation of the constraint X>0 during the evaluation of B(X,1), and backtrack immediately if this is the case, rather than waiting for the evaluation of B(X,1) to conclude.
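
This early-pruning behaviour can be sketched in a few lines of Python. This is an illustrative sketch only; the names `evaluate_b` and `store` are mine, not from any CLP system:

```python
# Illustrative sketch: the constraint X > 0 sits in the store *before* B(X)
# is evaluated, so each candidate binding for X is checked against the store
# first, and violations trigger immediate backtracking.

def evaluate_b(candidates, store):
    """Yield candidate values for X that satisfy every constraint in the store."""
    for x in candidates:
        if not all(constraint(x) for constraint in store):
            continue  # store violated: backtrack at once, never run B's body
        # ... only here would the possibly expensive body of B(X) be evaluated ...
        yield x

store = [lambda x: x > 0]  # constraint recorded before evaluating B(X)
print(list(evaluate_b([-2, 1, 3], store)))  # prints [1, 3]; -2 is pruned immediately
```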

Constraint logic programming – Overview

Since the constraint store is satisfiable and no other literal is left to prove, the interpreter stops with the solution X=1, Y=1.

Constraint logic programming – Semantics

The semantics of constraint logic programs can be defined in terms of a virtual interpreter that maintains a pair during execution. The first element of this pair is called current goal; the second element is called constraint store. The current goal contains the literals the interpreter is trying to prove and may also contain some constraints it is trying to satisfy; the constraint store contains all constraints the interpreter has assumed satisfiable so far.

Constraint logic programming – Semantics

A successful termination is generated when the current goal is empty and the constraint store is satisfiable.

Constraint logic programming – Semantics

If it is a constraint, it is added to the constraint store.

Constraint logic programming – Semantics

These methods can sometimes but not always prove unsatisfiability of an unsatisfiable constraint store.

Constraint logic programming – Semantics

The interpreter has proved the goal when the current goal is empty and the constraint store is not detected unsatisfiable. The result of execution is the current set of (simplified) constraints. This set may include constraints such as X=2 that force a variable to a specific value, but may also include constraints like X>0 that only bound variables without giving them a specific value.

Constraint logic programming – Semantics

Formally, the semantics of constraint logic programming is defined in terms of derivations. A transition is a pair of goal/store pairs, noted ⟨G,S⟩ → ⟨G',S'⟩. Such a pair states the possibility of going from state ⟨G,S⟩ to state ⟨G',S'⟩. Such a transition is possible in three possible cases:

Constraint logic programming – Semantics

an element of G is a literal L(t1,…,tn); there exists a clause of the program that, rewritten using new variables, is L(t1',…,tn'):-B; G' is G with L(t1,…,tn) replaced by t1=t1',…,tn=tn',B; and S'=S; in other words, a literal can be replaced by the body of a fresh variant of a clause having the same predicate in the head, adding the body of the fresh variant and the above equalities of terms to the goal

Constraint logic programming – Semantics

an element of G is a constraint C; G' is G without C, and S' is S with C added; in other words, a constraint can be moved from the goal to the constraint store

Constraint logic programming – Semantics

G'=G, and S and S' are equivalent according to the specific constraint semantics

Constraint logic programming – Semantics

A goal G can be proved if there exists a derivation from ⟨G,∅⟩ to ⟨∅,S⟩ for some satisfiable constraint store S.

Constraint logic programming – Semantics

Actual interpreters process the goal elements in a LIFO order: elements are added in the front and processed from the front. They also choose the clause of the second rule according to the order in which they are written, and rewrite the constraint store when it is modified.

Constraint logic programming – Semantics

If the constraint store is unsatisfiable, this simplification may sometimes, but not always, detect the unsatisfiability.

Constraint logic programming – Semantics

The result of evaluating a goal against a constraint logic program is defined if the goal is proved. In this case, there exists a derivation from the initial pair to a pair where the goal is empty. The constraint store of this second pair is considered the result of the evaluation. This is because the constraint store contains all constraints assumed satisfiable to prove the goal. In other words, the goal is proved for all variable evaluations that satisfy these constraints.

Constraint logic programming – Semantics

The pairwise equality of terms of two literals L(t1,…,tn) and L(t1',…,tn') is often compactly denoted by L(t1,…,tn)=L(t1',…,tn'): this is a shorthand for the constraints t1=t1',…,tn=tn'. A common variant of the semantics for constraint logic programming adds these equalities directly to the constraint store rather than to the goal.

Constraint logic programming – Terms and constraints

Different definitions of terms are used, generating different kinds of constraint logic programming: over trees, reals, or finite domains. A kind of constraint that is always present is the equality of terms. Such constraints are necessary because the interpreter adds t1=t2 to the goal whenever a literal P(…t1…) is replaced with the body of a clause fresh variant whose head is P(…t2…).

Constraint logic programming – Tree terms

Constraint logic programming with tree terms emulates regular logic programming by storing substitutions as constraints in the constraint store. Terms are variables, constants, and function symbols applied to other terms. The only constraints considered are equalities and disequalities between terms. Equality is particularly important, as constraints like t1=t2 are often generated by the interpreter. Equality constraints on terms can be simplified, that is, solved, via unification:

Constraint logic programming – Tree terms

A constraint t1=t2 can be simplified if both terms are function symbols applied to other terms. If the two function symbols are the same and the number of subterms is also the same, this constraint can be replaced with the pairwise equality of the subterms. If the terms are composed of different function symbols, or of the same function symbol applied to different numbers of subterms, the constraint is unsatisfiable.

Constraint logic programming – Tree terms

If one of the two terms is a variable, the only allowed value the variable can take is the other term. As a result, the other term can replace the variable in the current goal and constraint store, thus practically removing the variable from consideration. In the particular case of equality of a variable with itself, the constraint can be removed as always satisfied.
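
The simplification rules above amount to standard unification. A minimal sketch in Python follows; the term representation (tuples for compound terms, capitalised strings for variables) is an assumption of this sketch:

```python
# Minimal unification sketch for tree terms, following the rules above.
# Terms: strings starting with an uppercase letter are variables; tuples
# (functor, arg1, ..., argn) are compound terms; anything else is a constant.
# Like most Prolog systems, the sketch omits the occurs check.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst):
    """Return an extended substitution, or None if t1 = t2 is unsatisfiable."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:                       # includes X = X: always satisfied
        return subst
    if is_var(t1):
        return {**subst, t1: t2}       # the other term replaces the variable
    if is_var(t2):
        return {**subst, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):   # pairwise equality of subterms
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                        # different functors or arities: fail

print(unify(("f", "X", "b"), ("f", "a", "Y"), {}))  # {'X': 'a', 'Y': 'b'}
print(unify(("f", "X"), ("g", "X"), {}))            # None
```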

Constraint logic programming – Tree terms

In this form of constraint satisfaction, variable values are terms.

Constraint logic programming – Reals

Constraint logic programming with real numbers uses real expressions as terms. When no function symbols are used, terms are expressions over reals, possibly including variables. In this case, each variable can only take a real number as a value.

Constraint logic programming – Reals

As an example, if the first literal of the current goal is A(X+1) and the interpreter has chosen a clause that, after rewriting its variables, is A(Y-1):-Y=1, the constraints added to the current goal are X+1=Y-1 and Y=1.

Constraint logic programming – Reals

Reals and function symbols can be combined, leading to terms that are expressions over reals and function symbols applied to other terms.

Constraint logic programming – Reals

Equality of two terms can be simplified using the rules for tree terms if neither of the two terms is a real expression. For example, if the two terms have the same function symbol and number of subterms, their equality constraint can be replaced with the equality of their subterms.

Constraint logic programming – Finite domains

The third class of constraints used in constraint logic programming is that of finite domains

Constraint logic programming – Finite domains

If the domain of a variable becomes empty, the constraint store is inconsistent, and the algorithm backtracks

Constraint logic programming – Finite domains

As for domains of reals, functors can be used with domains of integers. In this case, a term can be an expression over integers, a constant, or the application of a functor over other terms. A variable can take an arbitrary term as a value, if its domain has not been specified to be a set of integers or constants.
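
As a sketch of how an emptied domain signals inconsistency, the following Python fragment prunes finite domains for a single X < Y constraint. The representation and the function name are illustrative, not from any real solver:

```python
# Illustrative finite-domain pruning: each variable maps to a set of allowed
# integers; posting X < Y removes unsupported values from both domains, and
# an empty domain means the constraint store is inconsistent (backtrack).

def prune_less_than(domains, x, y):
    """Enforce x < y; return False if some domain becomes empty."""
    domains[x] = {a for a in domains[x] if any(a < b for b in domains[y])}
    domains[y] = {b for b in domains[y] if any(a < b for a in domains[x])}
    return all(domains.values())  # an empty set is falsy: inconsistency

domains = {"X": {1, 2, 3}, "Y": {1, 2}}
print(prune_less_than(domains, "X", "Y"), domains)  # True {'X': {1}, 'Y': {2}}
```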

Constraint logic programming – The constraint store

The constraint store contains the constraints that are currently assumed satisfiable. It can be considered what the current substitution is for regular logic programming. When only tree terms are allowed, the constraint store contains constraints in the form t1=t2; these constraints are simplified by unification, resulting in constraints of the form variable=term; such constraints are equivalent to a substitution.

Constraint logic programming – The constraint store

While the result of a successful evaluation of a regular logic program is the final substitution, the result for a constraint logic program is the final constraint store, which may contain constraints of the form variable=value but in general may contain arbitrary constraints.

Constraint logic programming – The constraint store

The constraint store is unsatisfiable if a variable is bound to take both a value of the specific domain and a functor applied to terms.

Constraint logic programming – The constraint store

After a constraint is added to the constraint store, some operations are performed on the constraint store. Which operations are performed depends on the considered domain and constraints. For example, unification is used for finite tree equalities, variable elimination for polynomial equations over reals, and constraint propagation to enforce a form of local consistency for finite domains. These operations are aimed at making the constraint store simpler to check for satisfiability and to solve.

Constraint logic programming – The constraint store

This second method is called semantic backtracking, because the semantics of the change is saved rather than the old version of the constraints only.

Constraint logic programming – Labeling

Whenever the interpreter evaluates a labeling literal, it performs a search over the domains of the variables of its list to find an assignment that satisfies all relevant constraints.

Constraint logic programming – Labeling

As a result, labeling all variables mentioned in the constraint store results in checking the satisfiability of the store.

Constraint logic programming – Labeling

Without the labeling literal, variables are assigned values only when the constraint store contains a constraint of the form X=value and when local consistency reduces the domain of a variable to a single value.

Constraint logic programming – Labeling

Typically, constraint logic programs are written in such a way that labeling literals are evaluated only after as many constraints as possible have been accumulated in the constraint store. This is because labeling literals enforce search, and search is more efficient if there are more constraints to be satisfied. A constraint satisfaction problem is typically solved by a constraint logic program having the following structure:

Constraint logic programming – Labeling

Since the constraint store contains exactly the constraints of the original constraint satisfaction problem, this operation searches for a solution of the original problem.
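
The structure just described, accumulating constraints first and then searching, can be sketched as a naive labeling procedure in Python. This is an illustrative toy, not an optimized solver:

```python
# Naive labeling sketch: enumerate assignments from the variables' finite
# domains and keep those that satisfy every constraint in the store.

def label(variables, domains, constraints, assignment=None):
    assignment = dict(assignment or {})
    if len(assignment) == len(variables):
        if all(c(assignment) for c in constraints):
            yield assignment  # a solution of the original problem
        return
    var = variables[len(assignment)]
    for value in sorted(domains[var]):  # try each value in the domain
        yield from label(variables, domains, constraints,
                         {**assignment, var: value})

store = [lambda a: a["X"] + a["Y"] == 3,  # constraints gathered beforehand
         lambda a: a["X"] < a["Y"]]
print(list(label(["X", "Y"], {"X": {1, 2}, "Y": {1, 2}}, store)))
# [{'X': 1, 'Y': 2}]
```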

Constraint logic programming – Program reformulations

As a result, search only returns solutions that are consistent with it, taking advantage of the fact that additional constraints reduce the search space.

Constraint logic programming – Program reformulations

Since the constraint store after the addition of X>0 turns out to be inconsistent, the recursive evaluation of B(X) is not performed at all.

Constraint logic programming – Program reformulations

On the other hand, if the above clause is replaced by A(X,Y):-X>0,A(X),B(X), the interpreter backtracks as soon as the constraint X>0 is added to the constraint store, which happens before the evaluation of B(X) even starts.

Constraint logic programming – Constraint handling rules

In a constraint logic programming language supporting constraint handling rules, a programmer can use these rules to specify possible rewritings of the constraint store and possible additions of constraints to it.

Constraint logic programming – Constraint handling rules

The first rule says that, if B(X) is entailed by the store, the constraint A(X) can be rewritten as C(X). As an example, N*X>0 can be rewritten as X>0 if the store implies that N>0. The symbol <=> resembles equivalence in logic, and says that the first constraint is equivalent to the latter. In practice, this implies that the first constraint can be replaced with the latter.

Constraint logic programming – Constraint handling rules

The second rule instead specifies that the latter constraint is a consequence of the first, if the constraint in the middle is entailed by the constraint store. As a result, if A(X) is in the constraint store and B(X) is entailed by the constraint store, then C(X) can be added to the store. Unlike the case of equivalence, this is an addition and not a replacement: the new constraint is added but the old one remains.

Constraint logic programming – Constraint handling rules

Equivalence allows for simplifying the constraint store by replacing some constraints with simpler ones; in particular, if the third constraint in an equivalence rule is true, and the second constraint is entailed, the first constraint is removed from the constraint store. Inference allows for the addition of new constraints, which may lead to proving inconsistency of the constraint store, and may generally reduce the amount of search needed to establish its satisfiability.
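
The two rule kinds can be sketched over a toy constraint store of strings. The representation and function names below are assumptions of this sketch, not actual CHR syntax:

```python
# Toy sketch of the two CHR rule kinds over a store of string constraints.

def apply_equivalence(store, old, guard, new):
    """old <=> guard | new : replace old with new when the guard is entailed."""
    if old in store and guard(store):
        return (store - {old}) | {new}
    return store

def apply_inference(store, trigger, guard, new):
    """trigger ==> guard | new : add new, keeping trigger, when guard holds."""
    if trigger in store and guard(store):
        return store | {new}
    return store

entails_n_pos = lambda s: "N>0" in s  # stand-in for a real entailment check
store = {"N*X>0", "N>0"}
print(sorted(apply_equivalence(store, "N*X>0", entails_n_pos, "X>0")))
# ['N>0', 'X>0']  -- N*X>0 rewritten as X>0, as in the example above
print(sorted(apply_inference(store, "N*X>0", entails_n_pos, "X>0")))
# ['N*X>0', 'N>0', 'X>0']  -- an addition: the old constraint remains
```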

Constraint logic programming – Constraint handling rules

In this example, the choice of value for a variable is implemented using clauses of logic programming; however, it can be encoded in constraint handling rules using an extension called disjunctive constraint handling rules, or CHR∨.

Constraint logic programming – Bottom-up evaluation

The standard strategy of evaluation of logic programs is top-down and depth-first: from the goal, a number of clauses are identified as being possibly able to prove the goal, and recursion over the literals of their bodies is performed.

Constraint logic programming – Bottom-up evaluation

The bottom-up evaluation strategy maintains the set of facts proved so far during evaluation. This set is initially empty. With each step, new facts are derived by applying a program clause to the existing facts, and are added to the set. For example, the bottom-up evaluation of the following program requires two steps:

Constraint logic programming – Bottom-up evaluation

The set of consequences is initially empty. At the first step, A(q) is the only clause whose body can be proved (because it is empty), and A(q) is therefore added to the current set of consequences. At the second step, since A(q) is proved, the second clause can be used and B(q) is added to the consequences. Since no other consequence can be proved from {A(q),B(q)}, execution terminates.
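
The two-step derivation above can be sketched as a fixpoint computation in Python. The clause representation here is an assumption of this sketch:

```python
# Fixpoint sketch of bottom-up evaluation: a clause is a (head, body) pair
# over ground atoms; facts have an empty body. Re-deriving a known fact adds
# nothing to the set, so cyclic derivations cannot loop forever.

def bottom_up(clauses):
    facts = set()
    while True:
        new = {head for head, body in clauses if body <= facts} - facts
        if not new:          # nothing new derivable: fixpoint reached
            return facts
        facts |= new

program = [("A(q)", set()),        # A(q).         -- provable at step one
           ("B(q)", {"A(q)"})]    # B(q) :- A(q). -- provable at step two
print(sorted(bottom_up(program)))  # ['A(q)', 'B(q)']
```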

Constraint logic programming – Bottom-up evaluation

The advantage of the bottom-up evaluation over the top-down one is that cycles of derivations do not produce an infinite loop. This is because adding a consequence to the current set of consequences that already contains it has no effect. As an example, adding a third clause to the above program generates a cycle of derivations in the top-down evaluation:

Constraint logic programming – Bottom-up evaluation

For example, while evaluating all answers to the goal A(X), the top-down strategy would produce the following derivations:

Constraint logic programming – Bottom-up evaluation

In other words, the only consequence A(q) is produced first, but then the algorithm cycles over derivations that do not produce any other answer. More generally, the top-down evaluation strategy may cycle over possible derivations even when other answers exist.

Constraint logic programming – Bottom-up evaluation

The bottom-up strategy does not have the same drawback, as re-deriving consequences that are already in the set has no effect. On the above program, the bottom-up strategy starts by adding A(q) to the set of consequences; in the second step, B(X):-A(X) is used to derive B(q); in the third step, the only facts that can be derived from the current consequences are A(q) and B(q), which are, however, already in the set of consequences. As a result, the algorithm stops.

Constraint logic programming – Bottom-up evaluation

In general, every clause that only contains constraints in the body is considered a fact

Constraint logic programming – Bottom-up evaluation

As described, the bottom-up approach has the advantage of not considering consequences that have already been derived. However, it still may derive consequences that are entailed by those already derived while not being equal to any of them. As an example, the bottom-up evaluation of the following program is infinite:

Constraint logic programming – Bottom-up evaluation

The bottom-up evaluation algorithm first derives that A(X) is true for X=0 and X>0.

Constraint logic programming – Concurrent constraint logic programming

Concurrent constraint logic programming

Constraint logic programming – Concurrent constraint logic programming

The concurrent versions of constraint logic programming are aimed at programming concurrent processes rather than solving constraint satisfaction problems. Goals in concurrent constraint logic programming are evaluated concurrently; a concurrent process is therefore programmed as the evaluation of a goal by the interpreter.

Constraint logic programming – Concurrent constraint logic programming

Most notably, this difference affects how the interpreter behaves when more than one clause is applicable: non-concurrent constraint logic programming recursively tries all clauses; concurrent constraint logic programming chooses only one

Constraint logic programming – Applications

Constraint logic programming has been applied to a number of fields, such as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, finance, and others.

Constraint logic programming – History

Constraint logic programming was introduced by Jaffar and Lassez in 1987. They generalized the observation that the term equations and disequations of Prolog II were a specific form of constraints, and generalized this idea to arbitrary constraint languages. The first implementations of this concept were Prolog III, CLP(R), and CHIP.

Language – The brain and language

The brain is the coordinating center of all linguistic activity; it controls both the production of linguistic cognition and of meaning and the mechanics of speech production. Nonetheless, our knowledge of the neurological bases for language is quite limited, though it has advanced considerably with the use of modern imaging techniques. The discipline of linguistics dedicated to studying the neurological aspects of language is called neurolinguistics.

Language – The brain and language

People with a lesion in this area of the brain (Wernicke's area) develop receptive aphasia, a condition in which there is a major impairment of language comprehension, while speech retains a natural-sounding rhythm and a relatively normal sentence structure.

Language – The brain and language

With technological advances in the late 20th century, neurolinguists have also adopted non-invasive techniques such as functional magnetic resonance imaging (fMRI) and electrophysiology to study language processing in individuals without impairments.

Holography – Rainbow holograms

The rainbow holography recording process uses a horizontal slit to eliminate vertical parallax in the output image.

Holography – Rainbow holograms

The holograms found on credit cards are examples of rainbow holograms. These are technically transmission holograms mounted onto a reflective surface like a metalized polyethylene terephthalate substrate commonly known as PET.

Brain transplant

A brain transplant or whole-body transplant is a procedure in which the brain of one organism is transplanted into the body of another. It is a procedure distinct from head transplantation, which involves transferring the entire head to a new body, as opposed to the brain only. Theoretically, a person with advanced organ failure could be given a new and functional body while keeping their own personality and memories.

Brain transplant

Brain transplants and similar concepts have been explored in various forms of fiction.

Brain transplant – Existing challenges

One of the most significant barriers to the procedure is the inability of nerve tissue to heal properly; scarred nerve tissue does not transmit signals well (this is why a spinal cord injury is so devastating). However, recent research at the Wistar Institute of the University of Pennsylvania involving tissue-regenerating mice (known as MRL mice) may provide pointers for further research as to how to regenerate nerves without scarring.

Brain transplant – Existing challenges

Alternatively, a brain–computer interface can be used to connect the subject to their own body. A study using a monkey as a subject shows that it is possible to directly use commands from the brain, bypass the spinal cord and enable hand function. An advantage is that this interface can be adjusted after the surgical interventions are done, whereas nerves cannot be reconnected without surgery.

Brain transplant – Existing challenges

Also, for the procedure to be practical, the age of the donated body must be sufficient: an adult brain cannot fit into a skull that has not reached its full growth, which occurs at age 9–12 years.

Brain transplant – Existing challenges

There is an advantage, however, with respect to the immune response. The brain is an immunologically privileged organ, so rejection would not be a problem. (When other organs are transplanted, aggressive rejection can occur; this is a major difficulty with kidney and liver transplants.)

Brain transplant – Partial brain transplant

In 1982 Dr. Dorothy T. Krieger, chief of endocrinology at Mount Sinai Medical Center in New York City, achieved success with a partial brain transplant in mice.

Brain transplant – Partial brain transplant

In 1998, a team of surgeons from the University of Pittsburgh Medical Center attempted to transplant a group of brain cells to Alma Cerasini, who had suffered a severe stroke that caused the loss of mobility in her right limbs as well as limited speech. The team’s hope was that the cells would repair the damage described.

Brain transplant – Similar concepts

The whole-body transplant is just one of several means of putting a consciousness into a new body that have been explored by both scientists and writers.

Brain transplant – Similar concepts

Since there is no movement of the brain(s), however, this is not quite the same as a whole-body transplant.

Brain transplant – Similar concepts

In the horror film The Skeleton Key, the protagonist, Caroline, discovers that the old couple she is looking after are poor Voodoo witch doctors who stole the bodies of two young, privileged children in their care using a ritual which allows a soul to swap bodies. Unfortunately the evil old couple also trick Caroline and their lawyer into the same procedure, and both end up stuck in old dying bodies unable to speak while the witch doctors walk off with their young bodies.

Brain transplant – Similar concepts

In Anne Rice’s The Tale of the Body Thief, the vampire Lestat discovers a man, Raglan James, who can will himself into another person’s body. Lestat demands that the procedure be used on him to allow him to be human once again, but soon finds that he has made an error and is forced to recapture James in his vampiric form so he can take his body back.

Brain transplant – Similar concepts

In the Star Wars Expanded Universe, Emperor Palpatine is able to transfer his consciousness into clone bodies. In a sense, this allows him to return to life after the Battle of Endor, as well as after other events where his current body dies. The clone bodies aren’t quite as good as his original body and waste away quickly due to the decaying power of the Dark Side of the Force. Upon realizing this, he tries to take over the body of Anakin Solo, but is unsuccessful and eventually meets his final end.

Brain transplant – Similar concepts

While the ultimate goal of transplanting is transfer of the brain to a new body optimized for it by genetics, proteomics, and/or other medical procedures, in uploading the brain itself moves nowhere and may even be physically destroyed or discarded; the goal is rather to duplicate the information patterns contained within the brain.

Brain transplant – Similar concepts

In the Star Trek episode “Spock’s Brain”, Spock’s brain is stolen and installed in a large computer-like structure; and in “I, Mudd”, Uhura is offered immortality in an android body.

Educational technology – Teacher Training

This has become a significant barrier to effective training because the traditional methods of teaching have clashed with what is now expected in the present workplace

Educational technology – Teacher Training

The ways in which teachers are taught to use technology is also outdated because the primary focus of training is on computer literacy, rather than the deeper, more essential understanding and mastery of technology for information processing, communication, and problem solving

Educational technology – Teacher Training

Teacher training faces another drawback when it comes to one’s mindset on the integration of technology into the curriculum

Constraint programming

Constraints are usually embedded within a programming language or provided via separate software libraries.

Constraint programming

Constraint programming can be expressed in the form of constraint logic programming, which embeds constraints into a logic program. This variant of logic programming is due to Jaffar and Lassez, who in 1987 extended a specific class of constraints introduced in Prolog II. The first implementations of constraint logic programming were Prolog III, CLP(R), and CHIP.

Constraint programming

Instead of logic programming, constraints can be mixed with functional programming, term rewriting, and imperative languages. Programming languages with built-in support for constraints include Oz (functional programming) and Kaleidoscope (imperative programming). Mostly, constraints are implemented in imperative languages via constraint solving toolkits, which are separate libraries for an existing imperative language.

Constraint programming – Constraint logic programming

Constraint programming is an embedding of constraints in a host language. The first host languages used were logic programming languages, so the field was initially called constraint logic programming. The two paradigms share many important features, like logical variables and backtracking. Today most Prolog implementations include one or more libraries for constraint logic programming.

Constraint programming – Constraint logic programming

The difference between the two is largely in their styles and approaches to modeling the world. Some problems are more natural (and thus, simpler) to write as logic programs, while some are more natural to write as constraint programs.

Constraint programming – Constraint logic programming

The constraint programming approach is to search for a state of the world in which a large number of constraints are satisfied at the same time. A problem is typically stated as a state of the world containing a number of unknown variables. The constraint program searches for values for all the variables.
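The search process described above can be sketched as a minimal backtracking solver. This is an illustrative Python sketch, not code from the original text; the variable names and the toy constraints are invented for the example:

```python
def solve(variables, domains, constraints, assignment=None):
    """Backtracking search: extend a partial assignment until every
    unknown variable has a value consistent with all constraints."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        # Constraints are written to accept partial assignments.
        if all(check(assignment) for check in constraints):
            result = solve(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Toy problem: X, Y in 1..3 with X < Y and X + Y = 4.
doms = {"X": [1, 2, 3], "Y": [1, 2, 3]}
cons = [
    lambda a: "X" not in a or "Y" not in a or a["X"] < a["Y"],
    lambda a: "X" not in a or "Y" not in a or a["X"] + a["Y"] == 4,
]
solution = solve(["X", "Y"], doms, cons)  # {'X': 1, 'Y': 3}
```

Real finite-domain solvers improve on this naive loop with propagation and smarter variable ordering, but the skeleton is the same: assign, check, recurse, backtrack.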

Constraint programming – Constraint logic programming

Temporal concurrent constraint programming (TCC) and non-deterministic temporal concurrent constraint programming (NTCC) are variants of constraint programming that can deal with time. Recently, NTCC has proved to be a useful framework for describing and modelling biological systems.

Constraint programming – Domains

The constraints used in constraint programming are typically over some specific domains. Some popular domains for constraint programming are:

Constraint programming – Domains

boolean domains, where only true/false constraints apply (SAT problem)

Constraint programming – Domains

linear domains, where only linear functions are described and analyzed (although approaches to non-linear problems do exist)

Constraint programming – Domains

Finite domains are one of the most successful domains of constraint programming. In some areas (such as operations research) constraint programming is often identified with constraint programming over finite domains.

Constraint programming – Domains

All of the above examples are commonly solved by satisfiability modulo theories (SMT) solvers.

Constraint programming – Domains

Finite domain solvers are useful for solving constraint satisfaction problems, and are often based on arc consistency or one of its approximations.
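Arc consistency itself can be illustrated with a compact AC-3-style pruning loop. The sketch below is in Python rather than a constraint language, and the arc/predicate representation is an assumption made for the example:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3-style propagation: remove values that have no supporting
    value in a neighbouring domain. `constraints` maps a directed arc
    (x, y) to a predicate ok(value_of_x, value_of_y)."""
    queue = deque(constraints)
    while queue:
        x, y = queue.popleft()
        ok = constraints[(x, y)]
        supported = [vx for vx in domains[x]
                     if any(ok(vx, vy) for vy in domains[y])]
        if len(supported) < len(domains[x]):
            domains[x] = supported
            if not domains[x]:
                return False  # an empty domain proves unsatisfiability
            # A domain changed, so arcs pointing at x must be re-checked.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# X < Y with both domains {0, 1, 2}: pruning removes X=2 and Y=0.
doms = {"X": [0, 1, 2], "Y": [0, 1, 2]}
cons = {("X", "Y"): lambda a, b: a < b,
        ("Y", "X"): lambda a, b: a > b}
ac3(doms, cons)  # doms becomes {'X': [0, 1], 'Y': [1, 2]}
```

Note that, as the surrounding text says, propagation alone need not decide the problem: here it shrinks the domains but a search step is still needed to pick concrete values.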

Constraint programming – Domains

The syntax for expressing constraints over finite domains depends on the host language. The following is a Prolog program that solves the classical alphametic puzzle SEND+MORE=MONEY in constraint logic programming:

% This code works in both YAP and SWI-Prolog using the environment-supplied
% CLPFD constraint solver library. It may require minor modifications to work
% in other Prolog environments or using other constraint solvers.

:- use_module(library(clpfd)).

sendmore(Digits) :-
   Digits = [S,E,N,D,M,O,R,Y],   % Create variables
   Digits ins 0..9,              % Associate domains to variables
   S #\= 0,                      % Constraint: S must be different from 0
   M #\= 0,
   all_different(Digits),        % All the elements must take different values
   1000*S + 100*E + 10*N + D     % Other constraints
   + 1000*M + 100*O + 10*R + E
   #= 10000*M + 1000*O + 100*N + 10*E + Y,
   label(Digits).                % Start the search

Constraint programming – Domains

Constraint propagation may solve the problem by reducing all domains to a single value; it may prove that the problem has no solution by reducing a domain to the empty set; but it may also terminate without proving satisfiability or unsatisfiability.
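For comparison with the constraint program above, the same puzzle can be cross-checked by exhaustive search. The following Python sketch (illustrative, not part of the original Prolog listing) enumerates digit assignments directly instead of propagating constraints:

```python
from itertools import permutations

def send_more_money():
    """Brute-force SEND + MORE = MONEY over distinct decimal digits."""
    for s, e, n, d, m, o, r, y in permutations(range(10), 8):
        if s == 0 or m == 0:  # leading digits may not be zero
            continue
        send = 1000*s + 100*e + 10*n + d
        more = 1000*m + 100*o + 10*r + e
        money = 10000*m + 1000*o + 100*n + 10*e + y
        if send + more == money:
            return send, more, money
    return None

send_more_money()  # (9567, 1085, 10652)
```

The brute-force version examines up to about 1.8 million digit tuples, which is exactly the work that constraint propagation over finite domains avoids.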

Constraint programming – Logic programming based constraint logic languages

CHIP V5 (Prolog-based, also includes C++ and C libraries, proprietary)

Constraint programming – Logic programming based constraint logic languages

ECLiPSe (Prolog-based, open source)

Constraint programming – Logic programming based constraint logic languages

SICStus (Prolog-based, proprietary)

Constraint programming – Constraint programming libraries for imperative programming languages

Constraint programming is often realized in imperative programming via a separate library. Some popular libraries for constraint programming are:

Constraint programming – Constraint programming libraries for imperative programming languages

Artelys Kalis (C++ library, Xpress-Mosel module, proprietary)

Constraint programming – Constraint programming libraries for imperative programming languages

Comet (C style language for constraint programming, constraint-based local search and mathematical programming, free binaries available for academic use)

Constraint programming – Constraint programming libraries for imperative programming languages

IBM ILOG CP (C++ library, proprietary) and CP Optimizer (C++, Java, .NET libraries, proprietary), successor of ILOG Solver, which was considered the market leader in commercial constraint programming software as of 2006.

Constraint programming – Constraint programming libraries for imperative programming languages

JaCoP (Java library, open source)

Constraint programming – Constraint programming libraries for imperative programming languages

Turtle++ (C++ library – inspired by the Turtle Language, free software)

Constraint programming – Some languages that support constraint programming

AIMMS, an algebraic modeling language with support for constraint programming.

Constraint programming – Some languages that support constraint programming

Alma-0, a small, strongly typed constraint language with a limited number of features, inspired by logic programming and supporting imperative programming.

Constraint programming – Some languages that support constraint programming

AMPL, an algebraic modeling language with support for constraint programming.

Constraint programming – Some languages that support constraint programming

Bertrand, a language for building constraint programming systems.

Constraint programming – Some languages that support constraint programming

Common Lisp via Screamer (a free software library which provides backtracking and CLP(R), CHiP features).

Constraint programming – Some languages that support constraint programming

Kaleidoscope, an object-oriented imperative constraint programming language.

Constraint programming – Some languages that support constraint programming

Curry (Haskell based, with free implementations)

Constraint programming – Some languages that support constraint programming

SystemVerilog, a computer hardware simulation language, has a built-in constraint solver.

Ground effect train

A ground effect train is an alternative to a magnetic levitation (maglev) train. In both cases the object is to prevent the vehicle from making contact with the ground. Whereas a maglev train accomplishes this through the use of magnetism, a ground effect train uses an air cushion; either in the manner of a hovercraft (as in hovertrains) or using the “wing-in-ground” design.

Ground effect train

Whereas the magnetic levitation train can be built to operate in a vacuum to minimise air resistance, the ground effect train must operate in an atmosphere in order for the air cushion to exist.

Clinical psychology – Training and certification to practice

Clinical psychologists study a generalist program in psychology plus postgraduate training and/or clinical placement and supervision. The length of training differs across the world, ranging from four years plus post-Bachelors supervised practice to a doctorate of three to six years which combines clinical placement.

Clinical psychology – Training and certification to practice

About half of all clinical psychology graduate students are trained in Ph.D. programs.

Clinical psychology – Training and certification to practice

It is not unusual for applicants to apply several times before being accepted onto a training course as only about one-fifth of applicants are accepted each year

Clinical psychology – Training and certification to practice

The practice of clinical psychology requires a license in the United States, Canada, the United Kingdom, and many other countries. Although each of the US states is somewhat different in terms of requirements and licenses, there are three common elements:

Clinical psychology – Training and certification to practice

Graduation from an accredited school with the appropriate degree

Clinical psychology – Training and certification to practice

Completion of supervised clinical experience or internship

Clinical psychology – Training and certification to practice

All US state and Canadian province licensing boards are members of the Association of State and Provincial Psychology Boards (ASPPB) which created and maintains the Examination for Professional Practice in Psychology (EPPP)

Clinical psychology – Training and certification to practice

In the UK, registration as a clinical psychologist with the Health Professions Council (HPC) is necessary. The HPC is the statutory regulator for practitioner psychologists in the UK. In the UK the following titles are restricted by law: “registered psychologist” and “practitioner psychologist”; in addition, the specialist title “clinical psychologist” is also restricted by law.

Age of Enlightenment – Ukraine

Ukrainian philosophy includes an early stage of the Enlightenment, which arose in the era of the emergence of capitalism (the first quarter of the 18th century).

Maglev – Comparison with conventional trains

Speeds: Maglev allows higher top speeds than conventional rail, but, at least experimentally, wheel-based high-speed trains have been able to demonstrate similar speeds.

Maglev – Comparison with conventional trains

Maintenance Requirements of Electronic Versus Mechanical Systems: Maglev trains currently in operation have demonstrated the need for only minimal guideway maintenance.

Maglev – Comparison with conventional trains

All-Weather Operations: While maglev trains currently in operation are not stopped, slowed, or otherwise affected in schedule by snow, ice, severe cold, rain or high winds, they have not been operated in the wide range of conditions that traditional friction-based rail systems have. Because they are non-contact systems, maglev vehicles accelerate and decelerate faster than mechanical systems regardless of the slickness of the guideway or the slope of the grade.

Maglev – Comparison with conventional trains

By contrast conventional high speed trains such as the TGV are able to run at reduced speeds on existing rail infrastructure, thus reducing expenditure where new infrastructure would be particularly expensive (such as the final approaches to city terminals), or on extensions where traffic does not justify new infrastructure

Maglev – Comparison with conventional trains

Efficiency: Due to the lack of physical contact between the track and the vehicle, maglev trains experience no rolling resistance, leaving only air resistance and electromagnetic drag, potentially improving power efficiency.

Maglev – Comparison with conventional trains

Weight: To the uninitiated, the weight of the electromagnets in many EMS and EDS designs seems like a major design issue.

Maglev – Comparison with conventional trains

Noise: Because the major source of noise of a maglev train comes from displaced air, maglev trains produce less noise than a conventional train at equivalent speeds. However, the psychoacoustic profile of the maglev may reduce this benefit: a study concluded that maglev noise should be rated like road traffic while conventional trains have a 5–10 dB “bonus” as they are found less annoying at the same loudness level.

Maglev – Comparison with conventional trains

Design Comparisons: Braking and overhead wire wear have caused problems for the Fastech 360 railed Shinkansen. Maglev would eliminate these issues. Magnet reliability at higher temperatures is a countervailing comparative disadvantage (see suspension types), but new alloys and manufacturing techniques have resulted in magnets that maintain their levitational force at higher temperatures.

Maglev – Comparison with conventional trains

There is no need for train whistles or horns, either.

Maglev – Shanghai Maglev Train

This Shanghai Maglev Train demonstration line, or Initial Operating Segment (IOS), has been in commercial operation since April 2004 and now runs 115 daily trips (up from 110 in 2010) that traverse the 30 km (19 mi) between the two stations in just 7 minutes, achieving a top speed of 431 km/h (268 mph) and averaging 266 km/h (165 mph).

Maglev – Shanghai Maglev Train

Plans to extend the line to Shanghai South Railway Station and Hongqiao Airport on the western edge of Shanghai have been put on hold. After the Shanghai–Hangzhou Passenger Railway became operational in late 2010, the maglev extension became somewhat redundant and may be cancelled.

Perception management – Training

The New York Times states that, in theory, these new standards for concussions are great for preventing further brain damage and significantly reducing the risk of missing symptoms that can appear in the following 24 hours; but with athletes now hiding possible concussions from athletic trainers and physicians, these standards may actually have a negative effect on concussion management.

Copy editing – Traits, skills, and training

Besides an excellent command of language, copy editors need broad general knowledge for spotting factual errors; good critical thinking skills in order to recognize inconsistencies or vagueness; interpersonal skills for dealing with writers, other editors and designers; attention to detail; and a sense of style. Also, they must establish priorities and balance a desire for perfection with the necessity to follow deadlines.

Copy editing – Traits, skills, and training

Many copy editors have a college degree, often in journalism, the language the text is written in, or communications. In the United States, copy editing is often taught as a college journalism course, though its name varies. The courses often include news design and pagination.

Copy editing – Traits, skills, and training

In the United States, The Dow Jones Newspaper Fund sponsors internships that include two weeks of training. Also, the American Press Institute, the Poynter Institute, the University of North Carolina at Chapel Hill, UC San Diego Extension and conferences of the American Copy Editors Society offer mid-career training for newspaper copy editors and news editors (news copy desk supervisors).

Copy editing – Traits, skills, and training

Most U.S. newspapers and publishers give copy-editing job candidates an editing test or a tryout. These vary widely and can include general items such as acronyms, current events, math, punctuation, and skills such as the use of Associated Press style, headline writing, infographics editing, and journalism ethics.

Copy editing – Traits, skills, and training

In both the U.S. and the U.K., there are no official bodies offering a single recognized qualification.

Copy editing – Traits, skills, and training

In the U.K., several companies provide a range of courses unofficially recognised within the industry. Training may be on the job or through publishing courses, privately run seminars, or correspondence courses of the Society for Editors and Proofreaders. The National Council for the Training of Journalists also has a qualification for subeditors.

Logic programming – Constraint logic programming

Constraint logic programming is an extension of normal Logic Programming that allows some predicates, declared as constraint predicates, to occur as literals in the body of clauses. These literals are not solved by goal-reduction using program clauses, but are added to a store of constraints, which is required to be consistent with some built-in semantics of the constraint predicates.

Logic programming – Constraint logic programming

Problem solving is achieved by reducing the initial problem to a satisfiable set of constraints. Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and finance. It is closely related to abductive logic programming.

PRINCE2 – Exams, accreditation and training

PRINCE2 certification requires passing the requisite examinations or assessment. The lower level Foundation exam is a one-hour, multiple choice exam which tests a candidate’s knowledge of the method. The exam consists of 75 questions, 5 of which are trial questions which do not carry a mark. Of the remaining 70 questions which do carry a mark, the candidate needs to score 50% or more (i.e. 35 or more) to pass.

PRINCE2 – Exams, accreditation and training

The higher level Practitioner exam lasts for 2.5 hours, and is a more complex multiple choice exam which tests a candidate’s ability to apply the method to a simple project scenario. The paper consists of 8 topics, with 10 questions per topic making a total of 80 marks. The pass mark is 55%, which is 44 marks or more. Passing the Foundation exam is a pre-requisite for sitting the Practitioner exam.

PRINCE2 – Exams, accreditation and training

Candidates who have passed the PRINCE2 Practitioner exam may call themselves a Registered PRINCE2 Practitioner for 5 years, after which they must pass a Re-registration examination every 5 years to maintain their Registered Practitioner status. The Re-registration exam is a one-hour exam with 3 topics, each containing 10 questions. The pass mark is 55%, which means candidates must score 17 marks or more to pass.
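The pass marks quoted above are simply the total marks multiplied by the pass percentage, rounded up to a whole mark. A quick check (an illustrative sketch using integer ceiling division to avoid float rounding surprises):

```python
def pass_mark(total_marks, percent):
    """Smallest whole-number score at or above percent% of total_marks."""
    return -(-total_marks * percent // 100)  # integer ceiling division

foundation = pass_mark(70, 50)       # 35 of 70 scored questions
practitioner = pass_mark(80, 55)     # 44 of 80 marks
reregistration = pass_mark(30, 55)   # ceil(16.5) = 17 of 30 marks
```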

PRINCE2 – Exams, accreditation and training

In 2012, the accreditation body, the APM Group, introduced a higher level qualification known as the PRINCE2 Professional qualification which is a 2.5 day residential assessment involving group exercises and activities. The assessment criteria involve more general capabilities such as team working, which is not a specific PRINCE2 capability. Passing the Practitioner exam is a pre-requisite for sitting the Professional assessment.

PRINCE2 – Exams, accreditation and training

Examinations can be sat by candidates who attend an accredited training course, or by those who purchase an accredited elearning course. Candidates who self-study may also purchase an exam via the APM Group’s web site and can then sit the exam at a public exam centre, or at a British Council office.

PRINCE2 – Exams, accreditation and training

The APM Group publishes a successful candidate register which can be searched on the web. The register records the details of candidates who have sat PRINCE2 examinations.

PRINCE2 – Exams, accreditation and training

Trainers must be re-accredited every 3 years and undergo a surveillance check (either in the form of a visit by an assessor to a training course or a telephone interview which assesses their professional knowledge and training capability) every 12 months.

PRINCE2 – Exams, accreditation and training

Qualified PRINCE2 Practitioners who go on to study for the APMP qualification are exempt from certain topics of the syllabus that are covered in the PRINCE2 Practitioner qualification.

Bachelor’s degree – Russia, Ukraine, Armenia

The specialist’s degree (Russian: специалист; Ukrainian: спеціаліст) was the first academic distinction in the Soviet Union, awarded to students upon completion of five-year studies at the university level.

Software Engineering Institute – Education and training

SEI courses help bring state-of-the-art technologies and practices from research and development into widespread use. SEI courses are currently offered at the SEI’s locations in the United States and Europe. In addition, using licensed course materials, SEI Partners train thousands of individuals annually.


Take PROM To The Next Level


PROM

Real-time marketing Unrealized promise

The term “real-time marketing” has the potential weakness of self-limiting the underlying decisioning-server capability to cross-selling and up-selling, despite the observation that this particular function is generally the most compelling aspect of the application class. Vendors therefore found themselves re-branding real-time marketing products to suggest a more holistic appreciation of enterprise interaction decision management.

Real-time marketing Unrealized promise

In some respects, these early real-time marketing customer implementations were ahead of their time despite acknowledged revenue realization within the early adopters.

Real-time marketing Unrealized promise

Hosted real-time marketing solutions are an obvious and increasingly prevalent means of meeting organisational demand for this critical enterprise capability.

Real-time marketing Unrealized promise

Gartner’s predictions for the Gartner Top 10 Technologies for 2011 suggest that whatever the nomenclature, real-time marketing will continue to evolve, crucially to embrace mobile platforms underpinned by an awareness of customer context, location and social networking (collective intelligence) implications.

Command-line interface Command prompt

A command prompt (or just prompt) is a sequence of (one or more) characters used in a command-line interface to indicate readiness to accept commands. Its intent is to literally prompt the user to take action. A prompt usually ends with one of the characters $, %, #, :, > and often includes other information, such as the path of the current working directory.

Command-line interface Command prompt

On many Unix system and derivative systems, it is common for the prompt to end in a $ or % character if the user is a normal user, but in a # character if the user is a superuser (“root” in Unix terminology).

Command-line interface Command prompt

On some systems, special tokens in the definition of the prompt can be used to cause external programs to be called by the command-line interpreter while displaying the prompt.

Command-line interface Command prompt

The default prompt of older DOS systems, C>, is obtained by issuing just “prompt”, although on some systems this produces the newer C:\> style, unless used on floppy drives A: or B:; on those systems “prompt $N$G” can be used to override the automatic default and explicitly switch to the older style.

Command-line interface Command prompt

On many Unix systems, the $PS1 variable can be used, although other variables also may have an impact on the prompt (depending on what shell is being used). In the bash shell, a prompt of the desired form can be set by assigning a suitable string to this variable.
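The concrete prompt string and command of the original passage appear to have been lost in extraction. As a hedged illustration (this exact string is an assumption, not the article’s original example), a typical bash prompt showing user, host and working directory is set like this:

```shell
# \u = user name, \h = host name, \w = working directory,
# \$ = '#' for root, '$' otherwise (standard bash PS1 escape sequences)
export PS1='\u@\h:\w\$ '
```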

Command-line interface Command prompt

In zsh the $RPROMPT variable controls an optional “prompt” on the right hand side of the display. It is not a real prompt in that the location of text entry does not change. It is used to display information on the same line as the prompt, but right justified.

Command-line interface Command prompt

In RISC OS, the command prompt is a ‘*’ symbol, and thus (OS)CLI commands are often referred to as “star commands”. It is also possible to access the same commands from other command lines (such as the BBC BASIC command line), by preceding the command with a ‘*’.

Apache Cassandra Prominent users

Apixio uses Cassandra to store its Patient Object Model and extracted features about patients and patient populations

Apache Cassandra Prominent users

AppScale uses Cassandra as a back-end for Google App Engine applications

Apache Cassandra Prominent users

Cisco’s WebEx uses Cassandra to store user feed and activity in near real time.

Apache Cassandra Prominent users

The CERN ATLAS experiment uses Cassandra to archive its online DAQ system’s monitoring information

Apache Cassandra Prominent users

Cloudkick uses Cassandra to store the server metrics of their users.

Apache Cassandra Prominent users

Constant Contact uses Cassandra in their Social Media marketing application.

Apache Cassandra Prominent users

Digg, a large social news website, announced on Sep 9th, 2009 that it was rolling out its use of Cassandra and confirmed this on March 8, 2010. TechCrunch has since linked Cassandra to Digg v4 reliability criticisms and recent company struggles. Lead engineers at Digg later rebuked these criticisms as a red herring and blamed a lack of load testing.

Apache Cassandra Prominent users

Facebook used Cassandra to power Inbox Search, with over 200 nodes deployed. This was abandoned in late 2010 when Facebook built its Messaging platform on HBase.

Apache Cassandra Prominent users

IBM has done research in building a scalable email system based on Cassandra.

Apache Cassandra Prominent users

InWorldz has researched and developed a scalable, high-performance storage system for user inventory items using Cassandra.

Apache Cassandra Prominent users

Netflix uses Cassandra as their back-end database for their streaming services

Apache Cassandra Prominent users

Formspring uses Cassandra to count responses, as well as store Social Graph data (followers, following, blockers, blocking) for 26 Million accounts with 10 million responses a day

Apache Cassandra Prominent users

Mahalo.com uses Cassandra to record user activity logs and topics for their Q&A website

Apache Cassandra Prominent users

Ooyala built a scalable, flexible, real-time analytics engine using Cassandra.

Apache Cassandra Prominent users

At Openwave, Cassandra acts as a distributed database and serves as a distributed storage mechanism for Openwave’s next generation messaging platform

Apache Cassandra Prominent users

OpenX is running over 130 nodes on Cassandra for their OpenX Enterprise product to store and replicate advertisements and targeting data for ad delivery

Apache Cassandra Prominent users

Plaxo has “reviewed 3 billion contacts in [their] database, compared them with publicly available data sources, and identified approximately 600 million unique people with contact info.”

Apache Cassandra Prominent users

PostRank uses Cassandra as their backend database

Apache Cassandra Prominent users

Rackspace is known to use Cassandra internally.

Apache Cassandra Prominent users

Reddit switched to Cassandra from memcacheDB on March 12, 2010 and experienced some problems in May due to insufficient nodes in their cluster.

Apache Cassandra Prominent users

RockYou uses Cassandra to record every single click for 50 million Monthly Active Users in real-time for their online games

Apache Cassandra Prominent users

SoundCloud uses Cassandra to store the dashboard of their users

Apache Cassandra Prominent users

Talentica Software uses Cassandra as a back-end for an analytics application, with a Cassandra cluster of 30 nodes inserting around 200 GB of data on a daily basis.

Apache Cassandra Prominent users

Twitter announced it is planning to use Cassandra because it can be run on large server clusters and is capable of taking in very large amounts of data at a time. Twitter continues to use it but not for Tweets themselves.

Urban Airship uses Cassandra in its mobile-service hosting, covering over 160 million application installs across 80 million unique devices

@WalmartLabs (previously Kosmix) uses Cassandra with SSD

Yakaz uses Cassandra on a five-node cluster to store millions of images as well as its social data.

ZangBeZang uses Cassandra as the datastore for its carrier grade recommendation and marketing platform.

Zoho uses Cassandra to generate the inbox preview in its Zoho Mail service

Ironically, Facebook moved off its pre-Apache Cassandra deployment in late 2010 when they replaced Inbox Search with the Facebook Messaging platform. In 2012, Facebook began using Apache Cassandra in its Instagram unit.

Cassandra is the most popular wide column store.

Advertising Sales promotions

Sales promotions are another way to advertise. They serve a double purpose: they gather information about what types of customers you draw in and where they are, and they jump-start sales. Sales promotions include contests and games, sweepstakes, product giveaways, samples, coupons, loyalty programs, and discounts. The ultimate goal of a sales promotion is to stimulate potential customers to action.

Net Promoter

Net Promoter is a management tool that can be used to gauge the loyalty of a firm’s customer relationships. It serves as an alternative to traditional customer satisfaction research.

Net Promoter Overview

“Net Promoter Score” is a customer loyalty metric developed by (and a registered trademark of) Fred Reichheld, Bain & Company, and Satmetrix. It was introduced by Reichheld in his 2003 Harvard Business Review article “One Number You Need to Grow”. NPS can be as low as −100 (everybody is a detractor) or as high as +100 (everybody is a promoter). An NPS that is positive (i.e., higher than zero) is felt to be good, and an NPS of +50 is excellent.

Net Promoter Score (NPS) measures the loyalty that exists between a provider and a consumer. The provider can be a company, employer, or any other entity; it is the entity asking the questions on the NPS survey. The consumer is the customer, employee, or respondent to an NPS survey.

NPS is based on a direct question: How likely are you to recommend our company/product/service to your friends and colleagues? The scoring for this answer is most often based on a 0 to 10 scale.
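The arithmetic implied by the 0–10 scale and the −100 to +100 range can be sketched in a few lines. This is a minimal illustration, not an official implementation; the 9–10 promoter and 0–6 detractor cut-offs are the conventional NPS ones, and the function name is invented here:

```python
def net_promoter_score(ratings):
    """Compute NPS from 0-10 'likelihood to recommend' ratings.

    Promoters score 9-10, detractors 0-6, passives 7-8.  NPS is the
    percentage of promoters minus the percentage of detractors, so it
    ranges from -100 (all detractors) to +100 (all promoters).
    """
    if not ratings:
        raise ValueError("no ratings given")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# 5 promoters, 3 passives, 2 detractors out of 10 respondents -> NPS = 30
print(net_promoter_score([10, 9, 9, 10, 9, 7, 8, 7, 3, 6]))  # 30.0
```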

In the most advanced systems promoters are given the opportunity to promote immediately using Social Media connectors.

The primary purpose of the NPS methodology is to evaluate customer loyalty to a brand or company, not to evaluate their satisfaction with a particular product or transaction.

When a provider factors in the customer acquisition cost to the overall profitability of a consumer account, the longer a consumer stays active and resists defection the more profitable the relationship can be for both parties. Measuring the value of the relationship after costs gives the provider a clear view of how to attract and retain the most profitable consumers and how to most effectively invest in and develop those relationships.

Net Promoter methodology also includes a process to close the loop. Closing the loop is a process by which the provider actively intervenes to change a negative perception and convert a detractor into a promoter. The Net Promoter survey will identify a detractor and should automatically alert the provider to contact the consumer and manage the followup and actions from that point.

Discussed at length in The Ultimate Question: Driving Good Profits and True Growth by Fred Reichheld and in “Answering the Ultimate Question” by Satmetrix executives Richard Owen and Laura Brooks, the Net Promoter approach has been adopted by several companies, including E.ON, Philips, GE, Apple Retail, American Express, and Intuit.

Customers can leave comments in the surveys sent to them. This allows a company to use the voice of the customer (VOC) to ensure it is meeting expectations.

The same methodology can be used to measure and evaluate employee satisfaction with their employer. Tracking and managing the internal score is a way for companies to keep a focus on their culture. This measures more than just an employee’s satisfaction with common KPI points in the company; it expands to include the importance of various factors rather than just focusing on a list of workplace issues to improve.

Net Promoter Criticism of NPS

Research by Keiningham, Cooil, Andreassen, and Aksoy disputes that the Net Promoter metric is the best predictor of company growth.

Environmental factors may exert an influence on customers’ response to the “recommend” question—making comparisons across business units or industries difficult in certain cases

Daniel Schneider, Jon Krosnick, et al. found that out of four scales tested, the 11-point scale advocated by Reichheld had the lowest predictive validity of the scales tested.

Others have taken issue with the calculation methodology, claiming that by collapsing an 11-point scale to three components (e.g., Promoters, Passives, Detractors), significant information is lost and statistical variability of the result increases. The validity of NPS scale cut-off points across industries and cultures has also been questioned.
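The information-loss point can be made concrete: quite different rating distributions collapse to the same score once the 11-point scale is reduced to three buckets. A hypothetical illustration, again using the conventional 9–10 promoter and 0–6 detractor cut-offs:

```python
def nps(ratings):
    # Net Promoter Score: % promoters (9-10) minus % detractors (0-6)
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

uniform   = [9, 9, 9, 9, 10, 10, 7, 8, 8, 7]  # no detractors at all
polarized = [10] * 8 + [0, 3]                 # 20% strong detractors
print(nps(uniform), nps(polarized))  # both 60.0
```

Any statistic computed from the three buckets alone cannot distinguish these two samples, which is the criticism about lost information and increased variability in a nutshell.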

Proponents of the Net Promoter approach point out that the statistical analyses presented prove only that the “recommend” question is similar in predictive power to other metrics, but fail to address the practical benefits of the approach, which are at the heart of the argument Reichheld put forth.

General Electric Promotion and training

Thousands of people from every level of the company are trained at the Jack F. Welch Leadership Center.

Business marketing – Promotion

Promotion techniques rely heavily on marketing communications strategies (see below).

Napster – Promotional power

According to Richard Menta of MP3 Newswire, the effect of Napster in this instance was isolated from other elements that could be credited for driving sales, and the album’s unexpected success suggested that Napster was a good promotional tool for music.

The band members were avid supporters of Napster, promoting it at their shows, playing a Napster show around the time of the Congressional hearings, and attending the hearings themselves

Although some underground musicians and independent labels have expressed support for Napster and the p2p model it popularized, others have criticized the unregulated and extra-legal nature of these networks, and some seek to implement models of Internet promotion in which they can control the distribution of their own music, such as providing free tracks for download or streaming from their official websites, or co-operating with pay services such as Insound, Rhapsody and Apple’s iTunes Store.

Brand ambassador – Promotional model

Booth babes as promotional models at trade show exhibits and conventions have attracted much criticism

Hashtag – Promotion

The hashtag phenomenon has also been harvested for advertisement, promotion and contingency coordination. Most larger organizations will only focus on one or a small number of hashtags. However some individuals and organizations use a large number of hashtags to emphasise the broad range of concepts in which they are interested. The decision on whether to specialise in particular hashtags or promote a range depends on the marketing strategy of those involved.

Hashtag – Event promotion

Organized real-world events have also made use of hashtags and ad hoc lists for discussion and promotion among participants. Hashtags are used as beacons by event participants in order to find each other on both Twitter and, in many cases, in real life during events.

Companies and advocacy organizations have taken advantage of hashtag-based discussions for promotion of their products, services or campaigns.

Political protests and campaigns in the early 2010s, such as #OccupyWallStreet and #LibyaFeb17, have been organized around hashtags or have made extensive usage of hashtags for the promotion of discussion.

Social Media and television – Promotion

With the hashtag #TrumpRoast displayed at the bottom of the screen, Twitter called it “the single deepest integration of a Twitter hashtag on air ever.” The promotion worked, as it generated the channel’s most-watched Tuesday in history; the hashtag #trumproast was used over 27,000 times on Twitter during the show’s initial broadcast.

MariaDB – Prominent users

Red Hat Enterprise Linux (from RHEL 7)

Wikimedia Foundation

Microsoft Open Specification Promise

The Microsoft Open Specification Promise (or OSP) is a promise by Microsoft, published in September 2006, not to assert legal rights over certain Microsoft patents on implementations of an included list of technologies.

The OSP is a covenant not to sue and an example of fair, reasonable, and non-discriminatory terms for the patents in question.

The OSP licensing covers any use and any implementations of an appended list of covered specifications. It is limited for implementations to the extent that they conform to those specifications. This allows for conformance to be partial and does not require the conformance to be perfect.

Microsoft Open Specification Promise – Compatibility with open source licensing

The OSP is effectively a patent sublicense to everyone limited to use with certain formats and required technology to implement OSP licensed formats.

Open-source licenses, in general, deal with the licensing of contributors' copyrights in the software. GPLv2 is an example of such copyright licensing; it does not grant third-party (patent) rights.

Open-source software (OSS) licensing deals with copyrights on the source code created by the contributors. Source code based on an OSP-licensed format specification has its own copyrights and is therefore sublicensable by the contributors themselves. The OSP is only about patent rights; it grants implementers and users rights in addition to those of the OSS licensing.

Because Microsoft, through the OSP, grants patent rights to anybody who implements or uses technology required for OOXML, there is no need for sublicensing of patent rights through the GPL. OSS users and implementers get the same rights automatically.

An OSS implementer who uses GPL software implementing an OSP-licensed format is granted certain copyrights on the software through the GPL license, as granted by the prior software contributors. In addition, the implementer is allowed to use Microsoft patents for required format-related technology through the OSP license.

Several standards and OSS licensing experts have expressed support of the OSP in 2006. An article in Cover Pages quotes Lawrence Rosen, an attorney and lecturer at Stanford Law School, as saying,

“I’m pleased that this OSP is compatible with free and open source licenses.”

In 2006 Mark Webbink, a lawyer, board member of the Software Freedom Law Center, and former employee of Linux vendor Red Hat, said,

“Red Hat believes that the text of the OSP gives sufficient flexibility to implement the listed specifications in software licensed under free and open source licenses. We commend Microsoft’s efforts to reach out to representatives from the open source community and solicit their feedback on this text, and Microsoft’s willingness to make modifications in response to our comments.”

Standards lawyer Andy Updegrove said in 2006 the Open Specification Promise was

“what I consider to be a highly desirable tool for facilitating the implementation of open standards, in particular where those standards are of interest to the open source community.”

Microsoft Open Specification Promise – Scope limitation

The Software Freedom Law Center, which provides services to protect and advance free software and open source software, has warned of problems with the Open Specification Promise for use in free software / open source software projects. In a published analysis of the promise it states that

“…it permits implementation under free software licenses so long as the resulting code isn’t used freely.”

The limitations of a one-sided patent promise only applying to covered specifications is also present in the IBM Interoperability Specifications Pledge (ISP) and Sun Microsystems’ OpenDocument Patent Statement.

This means, for example, that use of the required Sun patented StarOffice-related technology for OpenDocument should be protected by the Sun Covenant, but reuse of the code with the patented technology for non-OpenDocument implementations is no longer protected by the related Sun covenant.

The OSP can similarly be used to freely implement any of the covered specifications in OSS, but its scope is limited to those specifications: it cannot be used to transfer Microsoft patent rights to implementations of non-covered specifications, for instance by reusing the technology in code under a patent-transferring software license.

“The OSP cannot be relied upon by GPL developers for their implementations not because its provisions conflict with GPL, but because it does not provide the freedom that the GPL requires.”

The SFLC specifically points out:

new versions of listed specifications could be issued at any time by Microsoft, and be excluded from the OSP.

any code resulting from an implementation of one of the covered specifications could not safely be used outside the very limited field of use defined by Microsoft in the OSP.

“we can’t give anyone a legal opinion about how our language relates to the GPL or other OSS licenses”

In another, it mentions only the “developers, distributors, and users of Covered Implementations”, thereby excluding downstream developers, distributors, and users of code later derived from those “Covered Implementations”. It also does not specify which version of the GPL is addressed, leading some commentators to conclude that the current GPL 3 may be excluded.

Q: I am a developer/distributor/user of software that is licensed under the GPL, does the Open Specification Promise apply to me?

A: Absolutely, yes

Microsoft Open Specification Promise – Web

Web Slice Format Specification introduced with Internet Explorer 8

XML Search Suggestions Format Specification

Microsoft Open Specification Promise – Virtualization Specifications

Virtual Hard Disk (VHD) Image Format Specification

Microsoft Application Virtualization File Format Specification v1

Hyper-V Functional Specification

Microsoft Open Specification Promise – Security

RFC 4408 – Sender Policy Framework: Authorizing Use of Domains in “Mail From”

RFC 4407 – Purported Responsible Address in E-Mail Messages

RFC 4405 – SMTP Service Extension for Indicating the Responsible Submitter of an E-Mail Message

U-Prove Cryptographic Specification V1.0

U-Prove Technology Integration into the Identity Metasystem V1.0

Microsoft Open Specification Promise – XML file formats

OpenDocument Format for Office Applications v1.0 (OASIS)

Microsoft Open Specification Promise – Structure specifications

[MS-DOC]: Word Binary File Format (.doc) Structure Specification

[MS-PPT]: PowerPoint Binary File Format (.ppt) Structure Specification

[MS-XLS]: Excel Binary File Format (.xls) Structure Specification

[MS-XLSB]: Excel Binary File Format (.xlsb) Structure Specification

[MS-ODRAW]: Office Drawing Binary File Format Structure Specification

[MS-CTDOC]: Word Custom Toolbar Binary File Format Structure Specification

[MS-CTXLS]: Excel Custom Toolbar Binary File Format Structure Specification

[MS-OFORMS]: Office Forms Binary File Format Structure Specification

[MS-OGRAPH]: Office Graph Binary File Format Structure Specification

[MS-OSHARED]: Office Common Data Types and Objects Structure Specification

[MS-OVBA]: Office VBA File Format Structure Specification

[MS-OFFCRYPTO]: Office Document Cryptography Structure Specification

Microsoft Open Specification Promise – Windows compound formats

[MS-CFB] Windows Compound Binary File Format Specification

Microsoft Open Specification Promise – Microsoft computer languages

[MS-WPFXV]: WPF XAML Vocabulary Specification 2006 (Draft v0.1)

[MS-SLXV]: Silverlight XAML Vocabulary Specification 2008 (Draft v0.9)

Microsoft Open Specification Promise – Windows Rally Technologies

Windows Connect Now – UFD and Windows Vista

Microsoft Open Specification Promise – Published protocols

Microsoft claims the Open Specification Promise applies to a long list of communication and internet protocols including the following. Most of these are in fact open standards which Microsoft may have implemented in one or more pieces of software rather than intellectual property belonging to Microsoft:

[MC-BUP]: Background Intelligent Transfer Service (BITS) Upload Protocol Specification

[MC-CCFG]: Server Cluster: Configuration (ClusCfg) Protocol Specification

[MC-COMQC]: Component Object Model Plus (COM+) Queued Components Protocol Specification

[MC-SMP]: Session Multiplex Protocol Specification

[MC-SQLR]: SQL Server Resolution Protocol Specification

1394 Serial Bus Protocol 2

IBM NetBIOS Extended User Interface (NetBEUI) v 3.0

Infrared Data Association (IrDA) Published Standards

RFC 1112, RFC 2236, and RFC 3376 – Internet Group Management Protocol (IGMP) v1, v2, and v3

RFC 1256 – ICMP Router Discovery Messages

RFC 1334 – Password Authentication Protocol (PAP)

RFC 1483, RFC 1755, and RFC 2225 – Internet Protocol over Asynchronous Transfer Mode (IP over ATM)

RFC 1510 and RFC 1964 – Kerberos Network Authentication Service (v5)

RFC 1994 – MD5 Challenge Handshake Authentication Protocol (MD5-CHAP)

RFC 2205, RFC 2209, and RFC 2210 – Resource Reservation Setup (RSVP)

RFC 2222 – Simple Authentication and Security Layer (SASL)

Sun Microsystems Remote Procedure Call (SunRPC)

Universal Serial Bus (USB) Revision 2.0

Digital rights management – Digital content as promotion for traditional products

Many artists use the Internet to give away music, creating awareness of and liking for an upcoming album.

Free software movement – Should principles be compromised?

Some, such as Eric Raymond, criticise the speed at which the free-software movement is progressing, suggesting that temporary compromises should be made for long-term gains. Raymond argues that this could raise awareness of the software and thus increase the free-software movement’s influence on relevant standards and legislation.

Others, such as Richard Stallman, see the current level of compromise to be the bigger worry.

Product placement – Product prominence

People were more likely to think that repeated, prominent product placements were distracting and made the movie feel less real

Product placement – Self promotion

20th Century Fox, a subsidiary of News Corporation, has promoted its parent company’s own Sky News channel through including it as a plot device when characters are viewing news broadcasts of breaking events. The newscaster or reporter in the scene will usually state that the audience is viewing Sky News, and reports from other channels are not shown. One notable example is the film Independence Day (1996).

Columbia Pictures uses or mentions products of parent company Sony, such as VAIO computers or BRAVIA televisions, in its movies; when it was owned by The Coca-Cola Company, Coca-Cola products were often featured.

Punched tape – Data transfer for ROM and EPROM programming

Encoding formats commonly used were primarily driven by those formats that EPROM programming devices supported and included various ASCII hex variants as well as a number of computer-proprietary formats.

A much more primitive, and much longer, high-level encoding scheme was also used: BNPF (Begin-Negative-Positive-Finish).
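Based on the expansion of the name, a BNPF word brackets each byte's bits with B and F, writing each bit as P (positive, one) or N (negative, zero). The following is an illustrative sketch, not a reference implementation; the exact framing and whitespace conventions varied between EPROM programmers, so treat those details as assumptions:

```python
def byte_to_bnpf(value):
    """Encode one byte as a BNPF word: 'B', then the eight bits
    MSB-first written as P (1) or N (0), then 'F'."""
    if not 0 <= value <= 0xFF:
        raise ValueError("value must fit in one byte")
    bits = "".join("P" if value & (1 << i) else "N" for i in range(7, -1, -1))
    return "B" + bits + "F"

def encode_bnpf(data):
    """Encode a byte string, one whitespace-separated BNPF word per byte."""
    return " ".join(byte_to_bnpf(b) for b in data)

print(byte_to_bnpf(0xA5))  # 0xA5 = 10100101 -> BPNPNNPNPF
```

At ten characters per byte, versus two hex digits, the verbosity the text mentions is plain to see.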

Promotional merchandise – History

The first known promotional products in the United States are commemorative buttons dating back to the election of George Washington in 1789. During the early 19th century, there were some advertising calendars, rulers, and wooden specialties, but there wasn’t an organized industry for the creation and distribution of promotional items until later in the 19th century.

Jasper Meeks, a printer in Coshocton, Ohio, is considered by many to be the originator of the industry when he convinced a local shoe store to supply book bags imprinted with the store name to local schools. Henry Beach, another Coshocton printer and a competitor of Meeks, picked up on the idea, and soon the two men were selling and printing bags for marbles, buggy whips, card cases, fans, calendars, cloth caps, aprons, and even hats for horses.

In 1904, 12 manufacturers of promotional items got together to found the first trade association for the industry. That organization is now known as the Promotional Products Association International or PPAI, which currently has more than 7,500 global members. PPAI represents the promotional products industry of more than 22,000 distributors and approximately 4,800 manufacturers.

In the early years the range of products available was limited; however, in the early 1980s demand grew from distributors for a generic promotional product catalogue they could brand as their own and then leave with their corporate customers.

In later years these catalogues could be over-branded to reflect a distributor’s corporate image and distributors could then give them to their end user customers as their own. In the early years promotional merchandise catalogues were very much sales tools and customers would buy the products offered on the pages.

The nineties also saw the creation of ‘Catalogue Groups’, which offered a unique catalogue to a limited geographical group of promotional merchandise distributor companies.

In the early 21st century the role of a promotional merchandise catalogue started to change, as it could no longer fully represent the vast range of products in the marketplace.

This service is purely for vetted trade promotional merchandise distributor companies and is not available to corporate end-user companies.

By 2008 almost every distributor had a website showcasing a range of available promotional products. Very few offer the ability to order products online, mainly due to the complexities of the processes required to brand the promotional products.

Promotional merchandise – Sourcing

Promotional merchandise is, in the main, purchased by corporate companies in the USA, Canada, the UK, and Ireland through promotional merchandise distributor companies. In the United States and Canada, these distributors are called “promotional consultants” or “promotional product distributors.”

Distributors have the ability to source and supply tens of thousands of products from across the globe. Even with the advent and growth of the Internet, this supply chain has not changed, for a few reasons:

Promotional products by definition are custom printed with a logo, company name, or message, usually in specific PMS colors.

Many distributors operate on the internet and/or in person. Many suppliers prefer not to invest in the staffing needed to service end users directly; that is the role of merchandise distributor companies.

Promotional merchandise – Products and uses

Promotional merchandise is used globally to promote brands, products, and corporate identity. It is also used for giveaways at events, such as exhibitions and product launches. Non-profit organizations can use promotional products to promote their cause, as well as to promote events they hold, such as walks or other fund-raisers.

Almost anything can be branded with a company’s name or logo and used for promotion. Common items include t-shirts, caps, keychains, posters, bumper stickers, pens, mugs, or mouse pads. The largest product category for promotional products is wearable items, which make up more than 30% of the total. Eco-friendly promotional products such as those created from recycled materials and bamboo, a renewable resource, are also experiencing a significant surge in popularity.

Companies that provide expensive gifts for celebrity attendees often ask that the celebrities allow a photo to be taken of them with the gift item, which can be used by the company for promotional purposes.

Other objectives that marketers use promotional items to facilitate include employee relations and events, tradeshow traffic-building, public relations, new customer generation, dealer and distributor programs, new product introductions, employee service awards, not-for-profit programs, internal incentive programs, safety education, customer referrals, and marketing research (2008 Distributor Sales Report).

Promotional items are also used in politics to promote candidates and causes. As a tool for non-commercial organizations, such as schools and charities, they are often part of fund-raising and awareness-raising campaigns. A prominent example was the Livestrong wristband, used to promote cancer awareness and raise funds to support cancer survivorship programs and research.

Collecting certain types of promotional items is also a popular hobby. In particular, branded antique point-of-sale items that convey a sense of nostalgia are popular with collectors and are a substantial component of the antiques industry (The Importance of Branded Point of Sale Items).

The giving of corporate gifts varies across international borders and cultures, with the type of product given often differing from country to country.

In addition, promotional merchandise distributors provide full support in processing orders, artwork, proofing, progress chasing, and delivery of promotional products from multiple manufacturing sources.

Promotional merchandise – Trade associations

In the UK, the industry has two main trade bodies, Promota (Promotional Merchandise Trade Association), founded in 1958, and the BPMA (British Promotional Merchandise Association), established in 1965. These trade associations represent the industry and provide services to both manufacturers and distributors of promotional merchandise.

In the United States, PPAI (the Promotional Products Association International) is a not-for-profit association, offering the industry’s largest tradeshow (The PPAI Expo), as well as training, online member resources, and legal advocacy. Another organization, the Advertising Specialty Institute, promotes itself as the largest media and marketing organization serving the advertising specialty industry.

Promotional merchandise – Top companies in the United States

The Advertising Specialty Institute’s Counselor Magazine Awards ranked the top 40 promotional product distributors of 2010.

Promotional merchandise – UK market statistics

In July 2009, published research identified the top ten promotional merchandise products as pens, bags, clothing, plastic items, USB memory sticks, mugs, leather items, polyurethane conference folders, and umbrellas.

Promotional merchandise – Top 10 Promotional Products Stores

TopTenREVIEWS, a review aggregator, published a list of the best promotional product stores of 2013. Each store is evaluated, individually and against the others, on the following features: graphic design services, item selection, website features, shipping services, and help and customer support.

History of software engineering – 1990 to 1999: Prominence of the Internet

The rise of the Internet led to very rapid growth in the demand for international e-mail and information-display systems on the World Wide Web. Programmers were required to handle illustrations, maps, photographs, and other images, plus simple animation, at a rate never before seen, with few well-known methods to optimize image display and storage (such as the use of thumbnail images).
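The thumbnail optimization mentioned above can be illustrated with a minimal sketch; the function and its nearest-neighbour approach are illustrative assumptions, not taken from the text:

```python
def thumbnail(pixels, max_w, max_h):
    """Downscale a 2D grid of pixel values by nearest-neighbour
    sampling, preserving aspect ratio (hypothetical helper)."""
    h, w = len(pixels), len(pixels[0])
    scale = min(max_w / w, max_h / h, 1.0)   # never upscale
    new_w = max(1, int(w * scale))
    new_h = max(1, int(h * scale))
    # Each output pixel samples the nearest source pixel.
    return [[pixels[y * h // new_h][x * w // new_w]
             for x in range(new_w)]
            for y in range(new_h)]
```

Serving such a reduced grid instead of the full image is the point of the technique: the small version is transferred and displayed first, and the full image only on demand.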

History of software engineering – 1990 to 1999: Prominence of the Internet

The growth of browser usage and of HTML changed the way in which information display and retrieval were organized.

History of software engineering – Prominent Figures in the History of Software Engineering

Charles Bachman (born 1924) is particularly known for his work in the area of databases.

History of software engineering – Prominent Figures in the History of Software Engineering

David Parnas (born 1941) developed the concept of information hiding in modular programming.
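Information hiding can be shown with a minimal sketch (the Stack class below is an illustrative assumption, not an example from Parnas’s papers): callers depend only on the interface, so the hidden representation can change without affecting them.

```python
class Stack:
    """A module in Parnas's sense: it hides one design decision,
    namely the storage layout of the stack."""

    def __init__(self):
        self._items = []   # hidden representation; not part of the interface

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def is_empty(self):
        return not self._items
```

Swapping the internal list for, say, a linked list would change only the module’s internals; no caller needs to be rewritten, which is precisely the benefit of hiding the design decision.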

History of software engineering – Prominent Figures in the History of Software Engineering

Michael A. Jackson (born 1936) is a software engineering methodologist responsible for the JSP method of program design, the JSD method of system development (with John Cameron), and the Problem Frames approach for analysing and structuring software development problems.
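The core idea of JSP, that a program’s control structure should mirror the structure of the data it consumes, can be sketched as follows; the record-summarizing task and all names are hypothetical, not drawn from Jackson’s work.

```python
def summarize(records):
    """Input structure: a file is an iteration of groups; a group is an
    iteration of records sharing a key. The nested loops mirror that
    structure, JSP-style. `records` is a key-sorted list of
    (key, value) pairs."""
    totals = []
    i = 0
    while i < len(records):                               # iterate over groups
        key = records[i][0]
        total = 0
        while i < len(records) and records[i][0] == key:  # iterate within a group
            total += records[i][1]
            i += 1
        totals.append((key, total))
    return totals
```

Because the loop nesting matches the data’s group/record nesting, each component of the input structure has exactly one place in the program where it is processed.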

GLONASS – Promoting commercial use

To improve the situation, the Russian government has been actively promoting GLONASS for civilian use.

GLONASS – Promoting commercial use

To improve development of the user segment, on August 11, 2010, Sergei Ivanov announced a plan to introduce a 25% import duty on all GPS-capable devices, including mobile phones, unless they are compatible with GLONASS. The government is also planning to require all car manufacturers in Russia to equip new cars with GLONASS starting in 2011. This will affect all car makers, including foreign brands such as Ford and Toyota that have assembly facilities in Russia.

GLONASS – Promoting commercial use

GPS and phone baseband chips from major vendors ST-Ericsson, Broadcom and Qualcomm all support GLONASS in combination with GPS.

GLONASS – Promoting commercial use

In April 2011, Sweden’s Swepos, a national network of satellite reference stations providing data for real-time positioning with metre accuracy, became the first known foreign organisation to use GLONASS.

GLONASS – Promoting commercial use

Smartphones and tablets also gained GLONASS support in 2011, with devices from Xiaomi (the Xiaomi Phone 2), Sony Ericsson, Samsung (the Google Nexus 10 in late 2012), Asus, Apple (the iPhone 4S, and the iPad Mini in late 2012) and HTC adding support for the system, allowing increased accuracy and faster lock-on in difficult conditions.

Daily Kos – Prominent contributors

Numerous political figures use Daily Kos to publish frequent or occasional content, including consultants, candidates, and sitting members of Congress. Prominent posters include:

Global marketing – Promotion

After product research, development and creation, promotion (specifically advertising) is generally the largest line item in a global company’s marketing budget.

Global marketing – Promotion

Effective global advertising techniques do exist.

Music video – 1960–1973: Promotional clips and others

In the late 1950s the Scopitone, a visual jukebox, was invented in France, and short films were produced by many French artists, such as Serge Gainsbourg, Françoise Hardy, Jacques Brel, and Jacques Dutronc, to accompany their songs.

Music video – 1960–1973: Promotional clips and others

The colour promotional clips for “Strawberry Fields Forever” and “Penny Lane”, made in early 1967 and directed by Peter Goldman, took the promotional film format to a new level.

Music video – 1960–1973: Promotional clips and others

The promo film for “Call Me Lightning” (1968) tells the story of how drummer Keith Moon came to join the group: the other three band members are having tea inside what looks like an abandoned hangar when a “bleeding box” suddenly arrives, out of which jumps a fast-running, time-lapsed Moon, whom the other members then try to catch in a sped-up slapstick chase sequence to wind him down.

Music video – 1960–1973: Promotional clips and others

The group also filmed a colour promo clip for the song “2000 Light Years From Home” (from their album Their Satanic Majesties Request), directed by Michael Lindsay-Hogg.

Music video – 1960–1973: Promotional clips and others

Mick Rock directed and edited four clips to promote four consecutive David Bowie singles, including “John, I’m Only Dancing” (May 1972) and “The Jean Genie” (November 1972).

Music video – 1960–1973: Promotional clips and others

Promotional videos of country music songs, however, continued to be produced.

Applied behavior analysis – Prompting

The goal of teaching with prompts is to fade them toward independence, so that the individual eventually performs the desired behavior without any prompt.

Applied behavior analysis – Prompting

Vocal prompts: Utilizing a vocalization to indicate the desired response.

Applied behavior analysis – Prompting

Visual prompts: A visual cue or picture.

Applied behavior analysis – Prompting

Gestural prompts: Utilizing a physical gesture to indicate the desired response.

Applied behavior analysis – Prompting

Positional prompt: The target item is placed closer to the individual.

Applied behavior analysis – Prompting

Modeling: Modeling the desired response for the student. This type of prompt is best suited for individuals who learn through imitation and can attend to a model.

Applied behavior analysis – Prompting

Physical prompts: Physically guiding the individual to produce the desired response. Physical prompts vary in degree, from the most intrusive (hand-over-hand guidance) to the least intrusive (a slight tap to initiate movement).

Applied behavior analysis – Prompting

This is not an exhaustive list of all possible prompts. When using prompts to systematically teach a skill, not all prompts need to be used in the hierarchy; prompts are chosen based on which ones are most effective for a particular individual.

Leapfrogging – Promotion by international initiatives

Japan’s Low-Carbon Society 2050 Initiative has the objective to cooperate with and offer support to Asian developing countries to leapfrog towards a low-carbon energy future.

Promotion (marketing)

Fundamentally, however, there are three basic objectives of promotion:

Promotion (marketing)

To present information to consumers as well as others.

Promotion (marketing)

To differentiate a product.

Promotion (marketing)

There are different ways to promote a product across different media. Promoters use internet advertising, special events, endorsements, and newspapers to advertise their products. Purchases often come with an incentive such as a discount, a free item, or a contest entry, in order to increase sales of a given product.

Promotion (marketing)

The term “promotion” is usually an insiders’ expression used internally by the marketing company, not normally with the public or the market, where phrases like “special offer” are more common. Examples of fully integrated, long-term, large-scale promotions are My Coke Rewards and Pepsi Stuff. The UK version of My Coke Rewards is Coke Zone.

Promotion (marketing) – Notes

Rajagopal (2007). Marketing Dynamics: Theory and Practice. New Delhi, India: New Age International. Retrieved April 5, 2010, from NJIT EBook Library: www.njit.eblib.com.libdb.njit.edu:8888/patron/FullRecord.aspx?p=437711

Advertising Standards Authority (United Kingdom) – Sales promotions

The Institute of Sales Promotion (ISP), which works to the same Code as the ASA, can refer complaints to the ASA when it believes that there has been a breach of the rules on sales promotions. There is no clear definition of what a sales promotion is for the purposes of the Code, but examples include:

Advertising Standards Authority (United Kingdom) – Sales promotions

Discounted purchase offers

Advertising Standards Authority (United Kingdom) – Sales promotions

Loyalty reward schemes, such as Air Miles

Advertising Standards Authority (United Kingdom) – Sales promotions

Not all offers that give the consumer something free with a particular purchase may be considered sales promotion. For example, a mobile phone deal that offers a free Bluetooth headset may be considered as part of a package deal rather than a sales promotion.

Community psychology – Prevention and health promotion

Community psychology emphasizes principles and strategies for preventing social, emotional, and behavioral problems and for promoting wellness and health at the individual and community levels, borrowed from public health and preventive medicine, rather than a passive, “waiting-mode”, treatment-based medical model.

Fantastic Four: Rise of the Silver Surfer – Promotion

The teaser trailer was initially exclusively attached to Night at the Museum.

Fantastic Four: Rise of the Silver Surfer – Promotion

When the U.S. Mint became aware of the promotion, it notified the studio and the Franklin Mint that they were breaking the law by turning government-issued currency into private advertising.

Mad Men – Online promotion

Promotion for Seasons 3 and 4 included “Mad Men Yourself”, an interactive game in which the user can choose clothing and accessories for an avatar similar in appearance to the Mad Men characters, drawn in the sixties-inspired style of illustrator Dyna Moe.

Election promise

An election promise is a promise made to the public by a politician who is trying to win an election. They have long been a central element of elections and remain so today. Election promises are also notable for often being broken once a politician is in office.

Election promise

Election promises are part of an election platform, but platforms also contain vague ideals and generalities as well as specific promises. They are an essential element in getting people to vote for a candidate; for example, a promise to cut taxes or to introduce new social programs may appeal to voters.

Election promise – Broken promises

Popular cynicism and 24-hour media have increased the public’s perception of “lies” and broken promises since 1945, even though the actual share of promises broken has remained roughly level, at less than 20%, over that time.

Election promise – Broken promises

In the 2003 provincial election in Ontario, Canada, the Liberal Party also made all three promises, and raised taxes once it found itself in government with an unbalanced budget.

Election promise – Broken promises

Promises are usually based on the rosiest of possible futures: a strong economy and cooperative leaders of legislatures and sub-national entities. Actual government planning done by bureaucrats generally assumes the worst possible future, but any politician who planned in this manner would have a platform far less attractive than that of their opponents.

Election promise – Broken promises

Adding caveats to promises based on economic performance would hurt the politician, and is also difficult to do in ten-second news sound bites or thirty-second commercials.

Election promise – Broken promises

There is some latitude for breaking promises. George W. Bush’s pledge to not involve the U.S. military in nation building was discarded after the September 11th attacks, a change in policy widely viewed as justifiable among his supporters. Franklin Roosevelt’s 1940 pledge to keep the United States out of World War II was similarly abandoned after the Pearl Harbor attack, prompting a voter backlash in the 1942 midterm elections.

Election promise – Broken promises

For instance in the United States a presidential candidate can freely make promises of an impractically large tax cut in the firm confidence that the Senate will reduce it to a manageable level.

Election promise – Broken promises

The constant stream of broken promises has annoyed many voters and politicians have responded with techniques to make their promises more believable. This includes making far more specific promises with numbers attached. The 1993 Canadian Liberal Red Book was an example of this. Also popular is setting a more specific time for when promises will be implemented, with politicians listing what they will do in their first week or first hundred days in office.

Election promise – Broken promises

When promises are to be broken, all politicians know it is best to do so at the start of a term. Thus, the first budget is the one most likely to see unexpected tax hikes or slashed spending. The hope is that by the time the next election occurs in a few years’ time, the anger of the electorate will have faded.

Election promise – Broken promises

Similarly, politicians often save popular but relatively unimportant promises for the end of their term, to be implemented just before they are up for reelection, while the electors still remember them.

Election promise – Case study: Richard Nixon’s Election promises

Nixon never used the phrase “secret plan”, which originated with a reporter looking for a lead to a story summarizing the Republican candidate’s (hazy) promise to end the war without losing.

Election promise – Case study: Richard Nixon’s Election promises

According to one historian, “it became obvious in 1969 that Nixon’s ‘secret plan’ to end the war was a campaign gimmick…”

Election promise – Case study: Richard Nixon’s Election promises

Another historian wrote: “Nixon never had a plan to end the war, but he did have a general strategy–to increase pressure on the communists [and] issue them a November 1, 1969 deadline to be conciliatory or else…The North Vietnamese did not respond to Nixon’s ultimatum…and his aides began planning Operation Duck Hook.”

Election promise – Case study: Richard Nixon’s Election promises

Nixon told Michigan Republican congressman Donald Riegle that the war would be over within six months of his assumption of office.

Election promise – Case study: Richard Nixon’s Election promises

As this six month deadline approached, in May 1969, Henry Kissinger asked a group of Quakers to give the administration six more months. “Give us six months, and if we haven’t ended the war by then, you can come back and tear down the White House fence.”

Election promise – Case study: Richard Nixon’s Election promises

The election promises of the Nixon administration had positive results for the White House.

Election promise – Case study: Richard Nixon’s Election promises

The executive producer of the ABC evening news, Av Westin, wrote a memo in March 1969 that stated:

Election promise – Case study: Richard Nixon’s Election promises

“I have asked our Vietnam staff to alter the focus of their coverage from combat pieces to interpretive ones, pegged to the eventual pull-out of the American forces. This point should be stressed for all hands.”

Election promise – Case study: Richard Nixon’s Election promises

And Westin telexed the ABC network’s Saigon bureau:

Election promise – Case study: Richard Nixon’s Election promises

“I think the time has come to shift some of our focus from the battlefield, or more specifically American military involvement with the enemy, to themes and stories under the general heading ‘We Are on Our Way Out of Vietnam.'”

Election promise – Case study: Richard Nixon’s Election promises

American combat deaths for the first half of 1969 increased rather than decreased during the time in which the plan was allegedly being implemented.

Election promise – Case study: Richard Nixon’s Election promises

In 1972, Nixon also promised that “peace is at hand”. On January 27, 1973, at the beginning of Nixon’s second term, representatives of the US, North Vietnam, South Vietnam and the Viet Cong signed the Paris Peace Accords, which formally ended US involvement in the war.

Election promise – Case study: Richard Nixon’s Election promises

The Nixon administration’s six-month promise is similar to the Republicans’ pledge, during the Philippine–American War, that the fighting in the Philippines would end within sixty days of McKinley’s 1900 re-election. It, however, took far longer.

Election promise – Lists of broken promises (not exhaustive)

The British Liberal Party’s pledge to cut military spending, before embarking on the Dreadnought arms race with Germany

Election promise – Lists of broken promises (not exhaustive)

The British Labour Party’s 1945 pledge to set up a new ministry of housing

Election promise – Lists of broken promises (not exhaustive)

Australian Prime Minister Bob Hawke, in 1987, said that “by 1990 no Australian child will be living in poverty”.

Election promise – Lists of broken promises (not exhaustive)

George H. W. Bush promised not to raise taxes while president during his 1988 campaign. This was best remembered in a speech at the Republican National Convention when he said “Congress will push and push…and I’ll say Read my lips: no new taxes”. After a recession began during his term and the deficit widened, Bush agreed to proposals to increase taxes. Although not the only broken promise concerning taxes, it was by far the most famous.

Election promise – Lists of broken promises (not exhaustive)

In 1994, upon entering Italian politics, media tycoon Silvio Berlusconi promised to sell his assets in Fininvest (later Mediaset) because of the conflict of interest they would generate, a promise he repeated a number of times in later years. After 12 years and three terms as prime minister, however, he still retained ownership of the company, which controls virtually all of Italy’s private TV stations and a large number of magazines and publishing houses and has been used extensively in favour of his political party.

Election promise – Lists of broken promises (not exhaustive)

Australian Prime Minister John Howard’s statement in 1995 that the GST would “never ever” be part of Liberal policy (the tax package was not implemented that term, but was put to the Australian people at the next election, in 1998, which re-elected Howard)

Election promise – Lists of broken promises (not exhaustive)

In Ireland, Fianna Fáil’s 2002 election promises to “permanently end all hospital waiting lists” by 2004 and to “create a world class health service” through reform and expanded healthcare coverage with “200,000 extra medical cards”. The number of people with medical cards instead dropped by over 100,000, and waiting lists remain a major issue.

Election promise – Lists of broken promises (not exhaustive)

The Liberal Democrats’ pledge not to increase tuition fees, whereupon it formed a coalition with the Conservative Party and soon after voted for an increase in tuition fees.

Election promise – Lists of broken promises (not exhaustive)

When asked about the issue of carbon taxation, Prime Minister Julia Gillard responded: “There will be no carbon tax under a government I lead, but let’s be absolutely clear: I am determined to price carbon.” In February 2011, Gillard announced a carbon pricing mechanism in order to secure a minority government. Some construe this as a broken promise, with debate centering on whether a fixed price leading into a trading scheme can be called a “tax”.

Election promise – Notes

P. 116: “Nixon didn’t invent the phrase, which originated with a reporter looking for a lead to a story summarizing the Republican candidate’s (hazy) promise to end the war without losing”

Election promise – Notes

Morin, Relman (March 14, 1968). “Nixon Plans to Unfold Peace Plan When He Campaigns Against LBJ”. Press Telegram (Long Beach, Cal.). p. 10.

Election promise – Notes

Small, Melvin (April 1988). Johnson, Nixon, and the Doves. Rutgers University Press. ISBN 0-8135-1288-3. p. 174; Zaroulis, Nancy and Gerald Sullivan (1984). Who Spoke Up? American Protest Against the War in Vietnam, 1963-1975. Doubleday. ISBN 0-03-005603-9. p. 217

Election promise – Notes

Strauss, Robert S. (Summer, 1984). “What’s Right with U. S. Campaigns”. Foreign Policy 55: 15.

Election promise – Notes

See U.S. presidential election, 1900 Misleading Philippine War claims by the Republicans

Election promise – Notes

Small, p. 166; Riegle, Don (1972). O Congress. Doubleday. p. 20; Kalb, Marvin and Bernard (1974). Kissinger. Hutchison. p. 120; Hersh, Seymour M. (1983). The Price of Power: Kissinger in the Nixon White House. Summit Books. ISBN 0-671-44760-2. p. 119

Election promise – Notes

Solomon, Norman (December 22, 2005). “A New Phase of Bright Spinning Lies About Iraq”. CommonDreams.org.

Radio-frequency identification – Promotion tracking

To prevent retailers from diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which products have been sold through the supply chain at fully discounted prices.

Bluetooth Special Interest Group – Promoter members

These members are the most active in the SIG and have considerable influence over both the strategic and technological directions of Bluetooth as a whole. The current promoter members are:

Bluetooth Special Interest Group – Promoter members

Nokia (founder member)

Bluetooth Special Interest Group – Promoter members

Toshiba (founder member)

Bluetooth Special Interest Group – Promoter members

Each Promoter member has one seat (and one vote) on the Board of Directors and the Qualification Review Board (the body responsible for developing and maintaining the qualification process). They each may have multiple staff in the various working groups and committees that comprise the work of the SIG.

Bluetooth Special Interest Group – Promoter members

The SIG’s website carries a full list of members.

Prometric

Prometric’s corporate headquarters are located in Canton (Baltimore, Maryland) in the United States.

Prometric – History

Prometric is currently a wholly owned, independently operated subsidiary of ETS, allowing ETS to maintain non-profit status.

Prometric – Business

For example, although Prometric test centers exist worldwide, some exams are offered only in the country where the client program exists.

Prometric – Business

In 2009, the company was involved in a controversy over widespread technical problems on one of India’s MBA entrance exams, the Common Admission Test. Prometric claimed that the problems were due to common computer viruses, but this claim was disputed, since the tests were not internet-based and were instead delivered over local area networks within India, where the viruses were pre-existing. As a result of the controversy, Prometric allowed 8,000 students to retake the examination.

Prometric – International

In the Republic of Ireland, Prometric’s local subsidiary is responsible for administering the Driver Theory Test.

Payment Card Industry Data Security Standard – Compliance and compromises

Much of this confusion is a result of the 2008 Heartland Payment Systems breach, wherein more than one hundred million card numbers were compromised.

Payment Card Industry Data Security Standard – Compliance and compromises

These frequently cited breaches are often used to criticize the standard itself as flawed. Critics note, for example, that Hannaford Brothers received its PCI DSS compliance validation one day after it had been made aware of a two-month-long compromise of its internal systems. Blame in such cases, however, more accurately lies with the breakdown in merchant and service-provider compliance with the written standard, albeit one not identified by the assessor.

Payment Card Industry Data Security Standard – Compliance and compromises

At the same time, 80% of payment card compromises since 2005 have affected Level 4 merchants.

Think Different – Promotional posters

Promotional posters from the campaign were produced in small numbers in 24 x 36 inch sizes. They featured the portrait of one historic figure, with a small Apple logo and the words “Think Different” in one corner. The posters were produced between 1997 and 1998.

Think Different – Promotional posters

14th Dalai Lama (never officially released, due to licensing issues and its politically sensitive nature)

Think Different – Promotional posters

Bob Dylan (never officially released due to licensing issues)

Think Different – Promotional posters

In addition, around the year 2000, Apple produced a set of ten 11×17 posters, often referred to as “The Educators Set”, which was distributed through its education channels. Apple sent out boxes (the cover of which reproduced the original “Crazy Ones” Think Different poster), each containing three plastic-sealed packs of 10 miniature Think Different posters.

Think Different – Promotional posters

During a special event held on October 14, 1998 at the Flint Center in Cupertino, California, a limited-edition 11″ × 14″ softbound book was given to employees and affiliates of Apple Computer, Inc. to commemorate the first year of the ad campaign. The 50-page book contained a foreword by Steve Jobs, the text of the original Think Different ad, and illustrations of many of the posters used in the campaign, along with narratives describing each person.

Food and Drug Administration – Advertising and promotion

The FDA’s Office of Prescription Drug Promotion reviews and regulates prescription drug advertising and promotion through surveillance activities and issuance of enforcement letters to pharmaceutical manufacturers. Advertising and promotion for over-the-counter drugs is regulated by the Federal Trade Commission.

Food and Drug Administration – Advertising and promotion

The drug advertising regulation contains two broad requirements: (1) a company may advertise or promote a drug only for the specific indication or medical use for which it was approved by the FDA; and (2) an advertisement must contain a “fair balance” between the benefits and the risks (side effects) of a drug.

Food and Drug Administration – Advertising and promotion

The term off-label refers to drug usage for indications other than those approved by the FDA.

Computational problem – Promise problems

In computational complexity theory, it is usually implicitly assumed that any string in {0, 1}* represents an instance of the computational problem in question. However, sometimes not all strings {0, 1}* represent valid instances, and one specifies a proper subset of {0, 1}* as the set of “valid instances”. Computational problems of this type are called promise problems.

Computational problem – Promise problems

Here, the valid instances are those graphs whose maximum independent set size is either at most 5 or at least 10.
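A brute-force sketch of this promise problem follows; the exponential-time search and the function names are purely illustrative assumptions:

```python
from itertools import combinations

def max_independent_set_size(n, edges):
    """Largest independent set in a graph on vertices 0..n-1,
    found by brute force (feasible only for tiny graphs)."""
    edge_set = {frozenset(e) for e in edges}
    for k in range(n, 0, -1):          # try set sizes from largest down
        for subset in combinations(range(n), k):
            if all(frozenset(pair) not in edge_set
                   for pair in combinations(subset, 2)):
                return k
    return 0

def classify(n, edges):
    """Answer the promise problem; None marks an invalid instance,
    i.e. one that violates the promise."""
    size = max_independent_set_size(n, edges)
    if size >= 10:
        return "yes"
    if size <= 5:
        return "no"
    return None
```

Note that a solver for the promise problem is only required to be correct on valid instances; on a graph whose maximum independent set has size between 6 and 9, any answer is acceptable.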

Computational problem – Promise problems

Decision promise problems are usually represented as pairs of disjoint subsets (Lyes, Lno) of {0, 1}*. The valid instances are those in Lyes ∪ Lno; Lyes and Lno represent the instances whose answer is yes and no, respectively.

Computational problem – Promise problems

Promise problems play an important role in several areas of computational complexity, including hardness of approximation, property testing, and interactive proof systems.

Merrill Lynch – Rise to prominence

Merrill Lynch rose to prominence on the strength of its brokerage network (15,000+ brokers as of 2006), sometimes referred to as the “thundering herd”, which allowed it to place securities it underwrote directly.

Elizabeth P. Hoisington – Promotion to Brigadier General

On May 15, 1970, President Nixon announced the first women selected for promotion to brigadier general: Anna Mae Hays, Chief of the Army Nurse Corps, and Hoisington. (Associated Press, May 16, 1970)

Elizabeth P. Hoisington – Promotion to Brigadier General

On June 11, 1970, the two women were promoted (Robert A. Dobkin, Associated Press, Schenectady Gazette, June 12, 1970). Hays was the first woman in the United States Armed Forces to wear the insignia of a brigadier general; Hays and Hoisington were promoted on the same day, within minutes of each other. (Associated Press, The Spokane Spokesman-Review, June 12, 1970)

Elizabeth P. Hoisington – Promotion to Brigadier General

The Hoisington and Hays promotions resulted in positive public relations for the Army, including appearances on the Dick Cavett, David Frost, and Today shows. Hoisington, who was noted for her quick smile and ebullient personality, also appeared as a mystery guest on the popular game show What’s My Line? (Matt Schudel, August 24, 2007)

Elizabeth P. Hoisington – Promotion to Brigadier General

Hoisington retired on August 1, 1971. (New York Times, August 1, 1971)

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

In Alchemy and AI (1965) and What Computers Can’t Do (1972), Hubert Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver (1957), predicted that by 1967:

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

1. A computer would be world champion in chess.

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

2. A computer would discover and prove an important new mathematical theorem.

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

3. Most theories in psychology would take the form of computer programs.

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

Dreyfus felt that this optimism was totally unwarranted, believing it to be based on false assumptions about the nature of human intelligence. Pamela McCorduck explains Dreyfus’s position:

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

“A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.”

Dreyfus’ critique of AI – The grandiose promises of artificial intelligence

These predictions were based on the success of an information-processing model of the mind, articulated by Newell and Simon in their physical symbol system hypothesis and later expanded into a philosophical position known as computationalism by philosophers such as Jerry Fodor and Hilary Putnam.

Amelia Earhart – Promoting aviation

In 1929, Earhart was among the first aviators to promote commercial air travel through the development of a passenger airline service. Along with Charles Lindbergh, she represented Transcontinental Air Transport (TAT) and invested time and money in setting up the first regional shuttle service between New York and Washington, DC.

Friendly artificial intelligence – Promotion and support

Promoting Friendly AI is one of the primary goals of the Machine Intelligence Research Institute, along with obtaining funding for, and ultimately creating a seed AI program implementing the ideas of Friendliness theory.

Several notable futurists have voiced support for Friendly AI, including author and inventor Raymond Kurzweil, medical life-extension advocate Aubrey de Grey, and World Transhumanist Association co-founder (with philosopher David Pearce) Nick Bostrom.

Frankenstein – Modern Prometheus

When Zeus discovered this, he sentenced Prometheus to eternal punishment, fixing him to a rock in the Caucasus, where each day an eagle would peck out his liver, only for the liver to regrow the next day because of his immortality as a god.

In particular, he was regarded in the Romantic era as embodying the lone genius whose efforts to improve human existence could also result in tragedy: Mary Shelley, for instance, gave The Modern Prometheus as the subtitle to her novel Frankenstein. Shelley seemingly chose the subtitle for the story's conflicted treatment of knowledge, with Victor symbolized as the modern Prometheus.

The Titan Prometheus of Greek mythology parallels Victor Frankenstein: Victor's creation of a man by new means mirrors the Titan's innovative work in creating humans.

Some have claimed that for Mary Shelley, Prometheus was not a hero but rather something of a devil, whom she blamed for bringing fire to man and thereby seducing the human race to the vice of eating meat (fire brought cooking, which brought hunting and killing) (Leonard Wolf, p. 20).

Byron was particularly attached to the play Prometheus Bound by Aeschylus, and Percy Shelley would soon write his own Prometheus Unbound (1820). The term Modern Prometheus was actually coined by Immanuel Kant, referring to Benjamin Franklin and his then recent experiments with electricity ("Benjamin Franklin in London", The Royal Society, retrieved 8 August 2007).

Computationalism – Prominent scholars

* Daniel Dennett proposed the Multiple Drafts Model, in which consciousness seems linear but is actually blurry and gappy, distributed over space and time in the brain. Consciousness is the computation; there is no extra step or Cartesian Theater in which you become conscious of the computation.

* Jerry Fodor argues that mental states, such as beliefs and desires, are relations between individuals and mental representations.

* David Marr proposed that cognitive processes have three levels of description: the computational level (which describes the computational problem, i.e., the input/output mapping, computed by the cognitive process); the algorithmic level (which presents the algorithm used for computing the problem postulated at the computational level); and the implementational level (which describes the physical implementation of the algorithm postulated at the algorithmic level in biological matter).

* Ulric Neisser coined the term 'cognitive psychology' in his 1967 book Cognitive Psychology, wherein he characterizes people as dynamic information-processing systems whose mental operations might be described in computational terms.

* Steven Pinker described a language instinct, an evolved, built-in capacity to learn speech (if not writing).

* Hilary Putnam proposed functionalism to describe consciousness, asserting that it is the computation that equates to consciousness, regardless of whether the computation is operating in a brain, in a computer, or in a brain in a vat.

* Georges Rey, professor at the University of Maryland, builds on Jerry Fodor's representational theory of mind to produce his own version of a Computational/Representational Theory of Thought.

60 Minutes – Viacom/CBS cross-promotion

In recent years, the show has been accused of promoting books, films, and interviews with celebrities who are published or promoted by sister businesses of media conglomerate Viacom (which owned CBS from 2000 to 2005) and publisher Simon & Schuster (which remains a part of CBS Corporation after the 2005 CBS/Viacom split), without disclosing the journalistic conflict of interest to viewers (Bryan Preston and Chris Regan, National Review, April 2, 2004).

PostgreSQL – Prominent users

* Yahoo! for web user behavioral analysis, storing two petabytes and claimed to be the largest data warehouse, using a heavily modified version of PostgreSQL with an entirely different column-based storage engine and a different query processing layer.

* In 2009, social networking website MySpace used Aster Data Systems's nCluster database for data warehousing, which was built on unmodified PostgreSQL.

* State Farm uses PostgreSQL on their Aster Data Systems nCluster Analytics server.

* Geni.com uses PostgreSQL for their main genealogy database.

* Sony Online Entertainment, for its multiplayer online games.

* BASF, for the shopping platform of its agribusiness portal.

* Skype VoIP application, central business databases.

* Sun xVM, Sun's virtualization and datacenter automation suite.

* MusicBrainz, the open online music encyclopedia.

* International Space Station, for collecting telemetry data in orbit and replicating it to the ground.

* Instagram, a popular mobile photo sharing service.

* Disqus, an online discussion and commenting service.

New York Times – Fashion news articles promoting advertisers

In the mid to late 1950s, fashion writer..

Xbox Live Indie Games – Promotions

Developers have come together to promote Xbox Live Indie Games with community-driven promotions featuring select games, called the Indie Games Uprising. To date there have been three uprisings: the XBLIG Winter Uprising, which took place in December 2010; the XBLIG Summer Uprising, which began on August 22, 2011; and Uprising III, scheduled to start on September 10, 2012.

PlayStation: The Official Magazine – Mascots and promotion

In the beginning, PSM had an anime-style mascot named Banzai Chibi-Chan, created and illustrated by Robert DeJesus. He was featured prominently in early issues and even inspired apparel and other accessories. He was later dropped, with the supposed reason being that the character was too childish and gave some the wrong impression about the magazine’s intended audience.

A smiley face featuring an eye patch with a star on it was also used, but it too was eventually dropped after the magazine went through a redesign in later years. The PSM Smiley Face was notable for its appearance throughout the magazine, as well as on lid-sticker inserts (large, circular stickers that could be placed decoratively on the lid of a PlayStation console), including one found in the first issue.

Some lid-stickers promotionally featured characters from PS1 games being covered in the magazine. Other inserts included PS1 memory card label stickers featuring visual themes similar to the lid-stickers, as well as video game tip sheets, instead of the demo discs that then-competitor Official U.S. PlayStation Magazine was known for.

PTOM also had promotional pullout-style posters from time to time, to help advertise upcoming video game releases.

Xbox (console) – Promotion

In 2002 the Independent Television Commission (ITC) banned a television advertisement for the Xbox in the United Kingdom after complaints that it was highly distasteful, violent, scary and upsetting.

Nintendo 64 – Promotion

90 different tips were available, with three variations of 30 tips each ("Promotions: Mills Gets Foot Up with Nintendo Link-up").

Nintendo advertised its Funtastic Series of peripherals with a $10 million print and television campaign from February 28 to April 30, 2000. Leo Burnett, Chicago, was in charge (Wasserman, Todd, "Nintendo: Pokemon, Peripherals Get $30M", Brandweek 41.7 (2000): 48).

History of video games – Online gaming rises to prominence

As affordable broadband Internet connectivity spread, many publishers turned to online gaming as a way of innovating

Digg – Organized promotion and censorship by users

It has been possible for users to have disproportionate influence on Digg, either by themselves or in teams. These users are sometimes motivated to promote or bury pages for political or financial reasons.

Serious attempts by users to game the site began in 2006. A top user was banned after agreeing to promote a story for cash to an undercover Digg sting operation. Another group of users openly formed a 'Bury Brigade' to remove spam articles about US politician Ron Paul; critics accused the group of attempting to stifle any mention of Ron Paul on Digg.

Digg hired computer scientist Anton Kast to develop a diversity algorithm that would prevent special interest groups from dominating Digg. During a town hall meeting, Digg executives responded to criticism by removing some features that gave superusers extra weight, but declined to make buries transparent.

However, later that year Google increased its page rank for Digg. Soon after, many 'pay for Diggs' startups were created to profit from the opportunity. According to TechCrunch, one top user charged $700 per story, with a $500 bonus if the story reached the front page.

Digg Patriots was a conservative Yahoo! Groups mailing list, with an associated page on coRank, accused of coordinated, politically motivated behavior on Digg.

Sony Tablet – Promotional videos

On 15 June 2011, Sony released the first in a series of five videos titled Two Will, promoting and featuring the Tablets in an elaborately designed Rube Goldberg Machine. The episodes are entitled:

Viral video – Band and music promotion

YouTube has become a means of promoting bands and their music. Many independent musicians, as well as large companies such as Universal Music Group, use YouTube to promote videos.

A video broadcasting the Free Hugs Campaign, with accompanying music by the Sick Puppies, led to instant fame for both the band and the campaign (at the 2006 YouTube Video Awards, Free Hugs won in the most inspirational category).

Secure server – In case of compromised secret (private) key

An important property in this context is perfect forward secrecy (PFS)
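
Forward secrecy comes from ephemeral key exchange: a later compromise of the server's long-term private key cannot be used to decrypt previously recorded traffic. A minimal sketch using Python's standard ssl module, restricting a server context to forward-secret cipher suites (the cipher string is an illustrative choice, not a universal recommendation):

```python
import ssl

# Server-side TLS context limited to ECDHE (ephemeral elliptic-curve
# Diffie-Hellman) cipher suites for TLS 1.2; TLS 1.3 suites always use
# ephemeral key exchange and are configured separately by OpenSSL.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("ECDHE+AESGCM")

# Every negotiable suite now provides forward secrecy.
for cipher in ctx.get_ciphers():
    assert "ECDHE" in cipher["name"] or cipher["protocol"] == "TLSv1.3"
```

With this configuration, even if the server's RSA or ECDSA key leaks later, each session's traffic keys were derived from a throwaway Diffie-Hellman exchange and cannot be recovered.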

A certificate may be revoked before it expires, for example because the secrecy of the private key has been compromised

Jurassic Park (film) – Release and promotion

Universal spent $65 million on the marketing campaign for Jurassic Park, making deals with 100 companies to market 1,000 products. These included three Jurassic Park video games by Sega and Ocean Software, a toy line by Kenner that was distributed by Hasbro, and a novelization aimed at young children.

The film's trailers only gave fleeting glimpses of the dinosaurs, a tactic journalist Josh Horowitz described as "that old Spielberg axiom of never revealing too much" when Spielberg and director Michael Bay did the same for their production of Transformers in 2007. The film was marketed with the tagline "An Adventure 65 Million Years In The Making". This was a joke Spielberg made on set about the genuine, thousands-of-years-old mosquito in amber used for Hammond's walking stick.

The film premiered at the National Building Museum on June 9, 1993, in Washington, D.C., in support of two children's charities. Two days later it opened nationwide, in 2,404 theater locations and an estimated 3,400 screens.

Following the film's release, a traveling exhibition began. Steve Englehart wrote a series of comic books published by Topps Comics. They acted as a continuation of the film, consisting of the two-issue Raptor, the four-issue Raptors Attack and Raptors Hijack, and Return to Jurassic Park, which lasted nine issues. All published issues were republished under the single title Jurassic Park Adventures in the United States and as Jurassic Park in the United Kingdom.

Jurassic Park was broadcast on television for the first time on May 7, 1995, following the April 26 airing of The Making of Jurassic Park.

Jurassic Park: The Ride went into development in November 1990 and premiered at Universal Studios Hollywood on June 15, 1996, at a cost of $110 million.

The Lord of the Rings Online: Helm’s Deep – Promotions Rewards

The Promotions system only uses the highest value for determining points, so the player only has to achieve Platinum once to earn the maximum points for each quest.

If a player wants to put extreme emphasis on one promotion tree, they can spend points solely on that tree, or they can instead achieve a mix of promotions across all three roles.

In addition to the Promotions window is the Expertise panel. These skills are unlocked by spending points in the trees in the right-hand panel. The deeper the player goes into a given line, the more Expertise traits are unlocked in that line. Expertise traits are unlocks that accentuate their given line – new ammo types for Engineers, more order types for Officers, and different damaging effects for Vanguards.

Escape character – Windows Command Prompt

The Windows command-line interpreter (cmd.exe) uses a caret character (^) to escape reserved characters that have special meanings (in particular: & | ( ) < > ^). The DOS command-line interpreter (COMMAND.COM), though it supports similar syntax, does not support this.

For example, on the Windows Command Prompt, the unescaped form of a command containing these characters results in a syntax error,

whereas the caret-escaped form outputs the string: <wiki>
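
The code examples were lost from this excerpt; the following pair of commands is a reconstruction inferred from the surrounding text, not the verbatim original:

```bat
:: Unescaped: cmd.exe treats < and > as redirection operators,
:: so this line fails with a syntax error.
echo <wiki>

:: Caret-escaped: the reserved characters are passed through
:: literally, printing <wiki>
echo ^<wiki^>
```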

PlayStation Move – Promotion

As part of the promotional marketing for Sorcery, the PlayStation Move controller was inducted into The Magic Circle museum by Vice President Scott Penrose.

Mercosur – Reciprocal promotion and protection

In this context, Argentina, Uruguay, Paraguay and Brazil signed on January 1, 1994 in the city of Colonia del Sacramento, Uruguay, the Colonia Protocol for the Reciprocal Promotion and Protection of Mercosur Investments (Colonia Protocol)

RFID – Promotion tracking

To prevent retailers diverting products, manufacturers are exploring the use of RFID tags on promoted merchandise so that they can track exactly which product has sold through the supply chain at fully discounted prices.

ITunes Store – Promotions

The promotion was repeated beginning January 31, 2005, with 200 million songs available, and an iPod Mini given away every hour.

On July 1, 2004, Apple announced that, starting with the sale of the 95 millionth song, an iPod would be given away to the buyer of each 100 thousandth song, for a total of 50 iPods. The buyer of the 100 millionth song would receive a PowerBook, iPod, and US$10,000 gift certificate to the iTunes Music Store.

Ten days later, on July 11, Apple announced that 100 million songs had been sold through the iTunes Music Store. The 100 millionth song was titled Somersault (Dangermouse Remix) by Zero 7, purchased by Kevin Britten of Hays, Kansas. He then received a phone call from Apple CEO Steve Jobs, who offered his congratulations, as well as a 40GB 3rd Generation iPod laser-engraved with a message of thanks.

Inspired by Pepsi’s marketing success with iTunes giveaways, Coca-Cola partnered with 7-Eleven to give away a free iTunes song with every

On July 5, 2005, Apple announced that they were counting down to half a billion songs

On July 28, 2005, Apple and The Gap announced a promotion to award iTunes music downloads to Gap customers who tried on a pair of Gap jeans. From August 8 to 31, 2005, each customer who tried on any pair of Gap jeans could receive a free download for a song of their choice from the iTunes Music Store.

On February 7, 2006, Apple announced that they were counting down to the billionth song download and began a promotion similar to the previous 100 million and 500 million countdown

In addition, the promotion caused discontent among international students, as the code was only valid in the US iTunes Music Store.

On April 10, 2009, Apple announced that it would be counting down to the billionth app (applications for the iPod Touch and iPhone), launching a continuously running counter on Good Friday. Connor Mulcahey, age 13, of Weston, CT, downloaded the billionth app, Bump by Bump Technologies, and received a 17-inch MacBook Pro, a 32GB iPod Touch, a Time Capsule, and a $10,000 gift card for the iTunes Store.

On February 11, 2010 Apple announced that it would be counting down to 10 billion songs downloaded. A $10,000 gift card was offered as a prize. On February 24, 2010, the 10 billionth song, Guess Things Happen That Way by Johnny Cash, was purchased by Louie Sulcer of Woodstock, Georgia.

HBO GO – National expansion, innovation and rise to prominence (1975–1993)

On September 30, 1975, HBO became the first television network to continuously deliver its signal via satellite when it broadcast the Thrilla in Manila boxing match between Muhammad Ali and Joe Frazier.

HBO broadcast for only nine hours each day, from 3 p.m

In 1983, HBO’s first original movie and the first made-for-pay-TV movie The Terry Fox Story premiered. That year also saw the premiere of the first children’s program broadcast on the channel: Fraggle Rock. HBO continued to air various original programs aimed at children until 2001, when these programs almost completely moved over to HBO Family.

HBO became involved in several legal suits during the 1980s

In 1987, HBO launched Festival (Festival program guide, 1987), a separate premium channel that featured classic and recent hit movies, along with specials and documentaries from HBO.

In 1988, HBO's subscriber base expanded greatly as a result of that year's Writers Guild of America strike. HBO had new programming, while the broadcast networks could only air reruns of their shows. In 1989, HBO compared its programming against rival pay television network Showtime, with the slogan Simply the Best, using the Tina Turner single The Best.

On January 2, 1989, HBO launched Selecciones en Español de HBO y Cinemax (Spanish Selections from HBO and Cinemax) – an alternate Spanish-language feed of HBO and Cinemax

Warner Communications merged with HBO parent Time Inc. in 1989, creating Time Warner, which remains the parent company of the network (coincidentally, Warner Communications had created rival The Movie Channel, owned by CBS Corporation since 2006, in the late 1970s; Viacom purchased a 50% stake in The Movie Channel in 1983 and bought Warner's remaining half-ownership of that network in 1985).

In 1991, HBO and Cinemax became the first premium services to offer multiplexed channels to cable customers, with the launch of HBO2 and Cinemax 2 on three cable systems in Wisconsin, Kansas and Texas ([http://www.highbeam.com/doc/1G1-10807607.html HBO: three channels are better than one], Multichannel News via HighBeam Research, May 13, 1991).

HBO GO – Rising prominence of original programming (1993–present)

During the 1990s, HBO began experiencing increasing success with its original series programming such as Tales from the Crypt, Dream On, Tracey Takes On…, Mr

The perceived higher quality of these shows is due both to the quality of the writing on the programs and to the fact that, as a subscription-only service, HBO does not carry normal commercials; instead the network runs promotions for upcoming HBO programs and behind-the-scenes featurettes between programs.

Beginning with the 1997 launch of its first one-hour dramatic narrative series Oz, HBO started a trend that became commonplace with premium cable providers

Left 4 Dead 2 – Promotion

PC and Xbox 360 players who pre-ordered Left 4 Dead 2 through participating retailers gained early access to the game’s demo, which was released on October 27, 2009 for Xbox Live and October 28, 2009 for PC players, and an exclusive baseball bat melee weapon to be used in game

On October 5, 2009, Valve announced that Left 4 Dead 2 would be promoted by a $25 million advertising campaign, exceeding the $10 million that supported Left 4 Dead. The campaign includes television advertisements during sporting events, on billboards and in magazines, and more aggressive advertising for Europe.

Digital media – Several design houses are active in this space, prominent names being

* John Lennon Educational Tour Bus

Last.fm – Full length promotional tracks and free downloads

30-second previews of any of the 12 million streamable tracks are available on demand, from anywhere in the site, by clicking on the grey arrow next to the name of the track or artist. Some tracks were also available to preview in full if the label or artist had specifically authorized it.

More than 500,000 indie artists and labels have used the Last.fm Music Manager to upload more than 3 million tracks to be played on Last.fm's radio; 2 million can be played directly, and more than 1 million of these songs are currently downloadable ([http://blog.last.fm/2011/07/01/lastfm-music-manager-powers-mp3coms-library-of-1m-promotional-downloads]).

Digital Object Memory – SemProM

Funded by the German Ministry of Education and Research, the project SemProM (Semantic Product Memory, see www.semprom.org/) employs smart labels in order to give products a digital memory and thus support intelligent applications along the product's lifecycle

Red Bull GmbH – Promotional cars

In addition to sport sponsorships, Red Bull has developed the MET (Mobile Energy Team) programme

Red Bull GmbH – Promotional Aircraft

The company uses numerous historic fixed-wing and rotary-wing aircraft ([http://www.airliners.net/search/photo.search?airlinesearch=Red%20Bull Red Bull Aircraft], airliners.net photo collection) in their promotions, including:

* Chance-Vought F4U-4 Corsair, 96995 (OE-EAS)

* North American B-25J-30NC Mitchell, 44-86893 (N6123C)

Amazon tax – Compromise with Amazon.com

In response to resistance from Amazon.com, other online retailers, and anti-tax groups, the State of California agreed to a delay of one year before requiring online retailers to begin collecting sales tax on sales to California addresses

Governor Jerry Brown said, "This landmark legislation not only levels the playing field between online retailers and California's brick-and-mortar businesses, it will also create tens of thousands of jobs and inject hundreds of millions of dollars back into critical services like education and public safety in future years."

Kraft Foods – Sponsorships and promotions

Kraft is an official partner and sponsor of Major League Soccer and sponsors the Kraft Nabisco Championship, one of the four majors on the LPGA tour. The company also sponsored the Kraft Fight Hunger Bowl, a post-season college football bowl game, from 2010 to 2012.

Kraft HockeyVille is a Canadian reality television series developed by Canadian Broadcasting Corporation|CBC/SRC Sports and sponsored by Kraft Foods in which communities across Canada compete to demonstrate their commitment to the sport of ice hockey. The contest revolves around a central theme of community spirit in Canada and is directed by Mike Dodson.

Kraft has released an iPad app called Big Fork Little Fork which, in addition to games and other distractions, has information regarding how to use Kraft foods in nutritious ways. This app costs $1.99; a version for home computers is available on Apple's App Store.

Kraft is also involved in political sponsorship. According to The Guardian, Kraft helps to finance the State Policy Network. The State Policy Network characterizes itself as made up of free market think tanks – at least one in every state – fighting to limit government and advance market-friendly public policy.

Quincy Jones – 1960s breakthrough and rise to prominence

In 1964, Jones was promoted to vice-president of Mercury Records, becoming the first African-American to hold this executive position. In that same year, he turned his attention to film scores, another musical arena long closed to African-Americans. At the invitation of director Sidney Lumet, he composed the music for The Pawnbroker (1964). It was the first of his 33 major motion picture scores.

Following the success of The Pawnbroker, Jones left Mercury Records and moved to Los Angeles

In the 1960s, Jones worked as an arranger for some of the most important artists of the era, including Billy Eckstine, Sarah Vaughan, Frank Sinatra, Ella Fitzgerald, Peggy Lee, and Dinah Washington. Jones's solo recordings also gained acclaim, including Walking in Space, Gula Matari, Smackwater Jack, You've Got It Bad, Girl, Body Heat, Mellow Madness, and I Heard That!!.

He is known for his 1962 tune Soul Bossa Nova, which originated on the Big Band Bossa Nova album. Soul Bossa Nova was used as a theme for the 1998 World Cup, the Canadian game show Definition, the Woody Allen film Take the Money and Run, and the Austin Powers film series. It was sampled by Canadian hip hop group Dream Warriors for their song My Definition of a Boombastic Jazz Style.

Jones produced all four million-selling singles for Lesley Gore during the early and mid-sixties, including It's My Party (UK No.8; US No.1), Judy's Turn To Cry (US No.5), and She's A Fool (also a US No.5) in 1963, and You Don't Own Me (US No.2 for four weeks in 1964). He continued to produce for Gore until 1966, including the Greenwich/Barry hit Look of Love (US No.27) in 1965.

In 1975, Jones founded Qwest Productions, for which he arranged and produced hugely successful albums by Frank Sinatra and other major pop figures. In 1978, he produced the soundtrack for The Wiz, the musical adaptation of The Wizard of Oz, starring Michael Jackson and Diana Ross. In 1982, Jones produced Michael Jackson's all-time best-selling album Thriller.[http://www.biography.com/people/quincy-jones-9357524?page=1]

Jones's 1981 album The Dude yielded multiple hit singles, including Ai No Corrida (a remake of a song by Chaz Jankel), Just Once, and One Hundred Ways, the latter two featuring James Ingram on lead vocals and marking Ingram's first hits.

In 1985, Jones wrote the score for The Color Purple, the Steven Spielberg film adaptation of the Pulitzer Prize-winning epistolary novel by Alice Walker.

In 1988, Quincy Jones Productions joined forces with Warner Communications to create Quincy Jones Entertainment

Starting in the late 1970s, Jones tried to convince Miles Davis to perform the music he had recorded on several classic albums of the 1960s, which had been arranged by Gil Evans

In 1993, Jones collaborated with David Salzman to produce the concert extravaganza An American Reunion, a celebration of Bill Clinton’s inauguration as president of the United States

In 2001, Jones published his autobiography, Q: The Autobiography of Quincy Jones

Nimbit – Marketing Promotion

Nimbit's promotion tool for Facebook, Twitter, and email was designed to drive fans to visit an artist's Nimbit storefront with sharable interactive promotions that feature embedded video, an audio player, personal messages from artists, and a link to a free download that redeems at the artist's storefront (www.digitalmusicnews.com/permalink/2012/120316facebook).

In September 2012, Nimbit added automatic follow-up to their promotion tool to encourage fans who accessed the original promotion to make a purchase, citing evidence that artists who did follow up with new fans achieved far greater sales (allfacebook.com/nimbit-thank-you-rewards_b100775).

Nimbit – Marketing Promotion

Nimbit also provides analytics and sales reporting so you can understand your business and fans.

Nimbit – Disc manufacturing (CD/DVD) / Promotional Printing

Nimbit provides CD and DVD replication and short-run CD-R and DVD-R duplication. The service includes prepress graphic work, a UCC-issued UPC barcode, an electronic PDF graphic proof, and assembly and wrapping. Nimbit also provides printing services for posters, flyers, cards, and other promotional materials.

Open data – Organisations promoting open data

* [http://www.freeourdata.org.uk/index.php Free Our Data] (The Guardian technology section)
* Open Data in the United Kingdom
* Open Knowledge Foundation
* [http://openstate.eu/ Open State Foundation]
* Scholarly Publishing and Academic Resources Coalition
* [http://linkedscience.org/about LinkedScience.org]
* [http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData Linking Open Data on the Semantic Web] (W3C)

Sales promotion

Sales promotion is one of the seven parts of the promotional mix; the other six are advertising, personal selling, direct marketing, publicity/public relations, corporate image and exhibitions. Media and non-media marketing communications are employed for a pre-determined, limited time to increase consumer demand, stimulate market demand or improve product availability.

Sales promotions can be directed at the customer, sales staff, or distribution channel members (such as retailers). Promotions targeted at the consumer are called ‘consumer sales promotions’; those targeted at retailers and wholesalers are called ‘trade sales promotions’. Some sales promotions, particularly ones with unusual methods, are considered gimmicks by many.

Sales promotion includes several communications activities that attempt to provide added value or incentives to consumers, wholesalers, retailers, or other organizational customers to stimulate immediate sales. These efforts can attempt to stimulate product interest, trial, or purchase. Examples of devices used in sales promotion include coupons, samples, premiums, point-of-purchase (POP) displays, contests, rebates, and sweepstakes.

Inside sales promotion activities include window displays, product and promotional material displays, and promotional programs such as premium awards and contests.

Sales promotion – Consumer sales promotion techniques

* Price deal: A temporary reduction in the price, such as 50% off.
* Loyalty reward program: Consumers collect points, miles, or credits for purchases and redeem them for rewards.
* Cents-off deal: Offers a brand at a lower price. The price reduction may be a percentage marked on the package.
* Price-pack deal: The packaging offers the consumer a certain percentage more of the product for the same price (for example, 25 percent extra).
* Coupons: Coupons have become a standard mechanism for sales promotions.
* Loss leader: The price of a popular product is temporarily reduced below cost in order to stimulate other profitable sales.
* Free-standing insert (FSI): A coupon booklet is inserted into the local newspaper for delivery.
* On-shelf couponing: Coupons are present at the shelf where the product is available.
* Checkout dispensers: On checkout, the customer is given a coupon based on products purchased.
* Online couponing: Coupons are available online; consumers print them out and take them to the store.
* Mobile couponing: Coupons are available on a mobile phone; consumers show the offer to a salesperson for redemption.
* Online interactive promotion game: Consumers play an interactive game associated with the promoted product.
* Rebates: Consumers are offered money back if the receipt and barcode are mailed to the producer.
* Contests/sweepstakes/games: The consumer is automatically entered into the event by purchasing the product.
* Point-of-purchase displays:
** Aisle interrupter: A sign that juts into the aisle from the shelf.
** Dangler: A sign that sways when a consumer walks by it.
** Dump bin: A bin full of products dumped inside.
** Glorifier: A small stage that elevates a product above other products.
** YES unit (‘your extra salesperson’): A pull-out fact sheet.
** Electroluminescent: Solar-powered, animated light in motion. [http://www.specialtyprinting.net/new-innovations/el-signage.php Electroluminescent Point of Purchase Signs]
* Kids-eat-free specials: A discount on the total dining bill, offering one free kids’ meal with each regular meal purchased.
* Sampling: Consumers get one sample for free and, after trying it, can decide whether or not to buy.

Sales promotion – Trade sales promotion techniques

* Trade allowances: A short-term incentive offered to induce a retailer to stock up on a product.
* Dealer loader: An incentive given to induce a retailer to purchase and display a product.
* Trade contest: A contest to reward retailers that sell the most product.
* Point-of-purchase displays: Used to create the urge of impulse buying and to sell the product on the spot.
* Training programs: Dealer employees are trained in selling the product.
* Push money (also known as spiffs): An extra commission paid to retail employees to push products.
* Trade discounts (also called functional discounts): Payments to distribution channel members for performing some function.

Sales promotion – Retail Mechanics

Retailers have a stock of standard retail ‘mechanics’ that they regularly roll out or rotate for new marketing initiatives:

* Buy a quantity for a lower price
* Get x% discount on weekdays

Sales promotion – Political issues

Sales promotions have traditionally been heavily regulated in many advanced industrial nations, with the notable exception of the United States.

Most European countries also have controls on the scheduling and permissible types of sales promotions, as they are regarded in those countries as bordering on unfair business practices. Germany is notorious for having the strictest regulations. Famous examples include the car wash that was barred from giving free car washes to regular customers and a baker who could not give a free cloth bag to customers who bought more than 10 rolls.

Windows 3.1x – Promotion and reception

Microsoft began a television advertising campaign for the first time on March 1, 1992.

VK (social network) – Promotional use by bands and musicians

Musicians who use VK for promotion often upload their own tracks to their official VK pages. Notable examples include the Russian rapper Noize MC, as well as international celebrities such as Tiësto, Shakira, Paul van Dyk, The Prodigy and Dan Balan.

Universidad Autonoma de Madrid – Societies and compromise

The Autonomous University of Madrid has an active student body, having organised one of Spain’s most important events against the dictatorship, the 1976 Iberian Peoples Festival.

Universidad Autonoma de Madrid – Societies and compromise

In recent years, UAM students have organised massively: to protest against terrorism after the assassination of Prof. Francisco Tomás y Valiente by ETA in 1995, against the Organic Law of Universities in 2001, to clean Spain’s northern coast after the Prestige oil spill in 2002, against the War in Iraq in 2003, to attend the II European Social Forum also in 2003, and in solidarity with the victims of the 11 March 2004 Madrid train bombings.

Chegg – Green marketing promotion

Chegg has an arrangement with American Forests’ Global Releaf Program under which a tree is planted for every book rented or sold. The firm claims that over five million trees have been planted.[http://www.chegg.com/ecofriendly/ Chegg.com][http://www.americanforests.org/global_releaf/ Ecofriendly]

Social network game – Social gaming as corporate promotion

The Walt Disney Company’s Disney Animal Kingdom Explorers was developed to create awareness of Disney’s theme parks and also to promote conservation.

Social network game – Social gaming as corporate promotion

Some large established video game developers are acquiring smaller operators to capitalize on the social gaming industry. The Walt Disney Company purchased social game developer Playdom for $763 million, and Electronic Arts purchased PopCap Games for $750 million in July 2011.

Prejudice – Controversies and prominent topics

One can be prejudiced against, or have a preconceived notion about, someone due to any characteristic they find to be unusual or undesirable. A few commonplace examples of prejudice are those based on someone’s race, gender, nationality, social status, sexual orientation or religious affiliation, and controversies may arise from any given topic.

PROMETHEE

The ‘preference ranking organization method for enrichment of evaluations’ and its descriptive complement ‘geometrical analysis for interactive aid’ are better known as the ‘Promethee and Gaia’ methods.

Based on mathematics and sociology, the Promethee and Gaia method was developed at the beginning of the 1980s and has been extensively studied and refined since then.

It has particular application in decision making, and is used around the world in a wide variety of decision scenarios, in fields such as business, governmental institutions, transportation, healthcare and education.

Rather than pointing out a right decision, the Promethee and Gaia method helps decision makers find the alternative that best suits their goal and their understanding of the problem. It provides a comprehensive and rational framework for structuring a decision problem, identifying and quantifying its conflicts, synergies and clusters of actions, and highlighting the main alternatives and the structured reasoning behind them.

PROMETHEE – History

The basic elements of the Promethee method were first introduced by Professor Jean-Pierre Brans (CSOO, VUB Vrije Universiteit Brussel) in 1982. It was later developed and implemented by Professor Brans and Professor Bertrand Mareschal (Solvay Brussels School of Economics and Management, ULB Université Libre de Bruxelles), including extensions such as GAIA.

The descriptive approach, named Gaia, allows the decision maker to visualize the main features of a decision problem: he or she is able to easily identify conflicts or synergies between criteria, to identify clusters of actions and to highlight remarkable performances.

Promethee has been used successfully in many decision-making contexts worldwide. A non-exhaustive list of scientific publications about extensions, applications and discussions related to the Promethee methods was published in 2010.

PROMETHEE – Uses and applications

While it can be used by individuals working on straightforward decisions, Promethee and Gaia is most useful where groups of people are working on complex problems, especially multi-criteria problems involving a lot of human perceptions and judgments, whose decisions have long-term impact.

Decision situations to which Promethee and Gaia can be applied include:

* Choice – The selection of one alternative from a given set of alternatives, usually where there are multiple decision criteria involved.
* Resource allocation – Allocating resources among a set of alternatives.
* Ranking – Putting a set of alternatives in order from most to least preferred.
* Conflict resolution – Settling disputes between parties with apparently incompatible objectives.

The applications of Promethee and Gaia to complex multi-criteria decision scenarios have numbered in the thousands, and have produced extensive results in problems involving planning, resource allocation, priority setting, and selection among alternatives. Other areas have included forecasting, talent selection, and tender analysis.

Some uses of Promethee and Gaia have become case studies. Recently these have included:

* Deciding which resources are the best with the available budget to meet SPS quality standards (STDF – WTO) [see External Links]
* Selecting a new route for train performance (Italferr) [see External Links]

PROMETHEE – Assumptions

Let A = \{a_1, \ldots, a_n\} be a set of n actions and let F = \{f_1, \ldots, f_q\} be a consistent family of q criteria. Without loss of generality, we will assume that these criteria have to be maximized.

The basic data related to such a problem can be written in a table containing n \times q evaluations. Each line corresponds to an action and each column corresponds to a criterion.

PROMETHEE – Pairwise comparisons

d_k(a_i,a_j) = f_k(a_i) - f_k(a_j) is the difference between the evaluations of two actions for criterion f_k. Of course, these differences depend on the measurement scales used and are not always easy to compare for the decision maker.

PROMETHEE – Preference Degree

As a consequence, the notion of preference function is introduced to translate the difference into a unicriterion preference degree as follows:

\pi_k(a_i,a_j) = P_k[d_k(a_i,a_j)]

where P_k : \mathbb{R} \rightarrow [0,1] is a positive non-decreasing preference function such that P_k(0) = 0. Six different types of preference function are proposed in the original Promethee definition. Among them, the linear unicriterion preference function is often used in practice for quantitative criteria:

P_k(d) = \begin{cases} 0 & d \le q_k \\ \dfrac{d - q_k}{p_k - q_k} & q_k < d \le p_k \\ 1 & d > p_k \end{cases}

where q_k and p_k are respectively the indifference and preference thresholds.
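A minimal sketch of this linear preference function, assuming a criterion to be maximized and given per-criterion thresholds (the function name and threshold values below are illustrative, not part of the original method description):

```python
def linear_preference(d: float, q: float, p: float) -> float:
    """Linear unicriterion preference function (one of the six original types).

    d: evaluation difference f_k(a_i) - f_k(a_j) for a maximized criterion
    q: indifference threshold, p: preference threshold (0 <= q < p)
    Returns a unicriterion preference degree in [0, 1].
    """
    if d <= q:
        return 0.0                 # difference too small to express a preference
    if d <= p:
        return (d - q) / (p - q)   # linear interpolation between the thresholds
    return 1.0                     # strict preference beyond the preference threshold
```

For example, with q = 1 and p = 3, a difference of 2 lies halfway between the thresholds and yields a preference degree of 0.5.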

PROMETHEE – Multicriteria preference degree

When a preference function has been associated to each criterion by the decision maker, all comparisons between all pairs of actions can be done for all the criteria. A multicriteria preference degree is then computed to globally compare every pair of actions:

\pi(a_i,a_j) = \sum_{k=1}^{q} w_k \cdot P_k[d_k(a_i,a_j)]

where w_k represents the weight of criterion f_k. It is assumed that w_k \ge 0 and \sum_{k=1}^{q} w_k = 1. As a direct consequence, we have:

0 \le \pi(a_i,a_j) \le 1

PROMETHEE – Multicriteria preference flows

In order to position every action a with respect to all the other actions, two scores are computed:

\phi^{+}(a) = \frac{1}{n-1}\sum_{x \in A}\pi(a,x) \qquad \phi^{-}(a) = \frac{1}{n-1}\sum_{x \in A}\pi(x,a)

The positive flow \phi^{+}(a) measures how a is globally preferred to the other actions, while the negative flow \phi^{-}(a) measures how the other actions are globally preferred to a. The Promethee I partial ranking is defined as the intersection of the rankings induced by these two scores.

The net flow is the balance \phi(a) = \phi^{+}(a) - \phi^{-}(a). Direct consequences of the previous formula are:

-1 \le \phi(a) \le 1 \qquad \sum_{x \in A}\phi(x) = 0

The Promethee II complete ranking is obtained by ordering the actions according to the decreasing values of the net flow scores.
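The flow computations above can be sketched in a few lines. This is an illustrative implementation only: the evaluations and equal weights are made up, and the simple “usual” preference function (1 if the difference is positive, else 0) stands in for whichever of the six preference functions a decision maker would actually choose.

```python
def promethee_flows(evals, weights, pref):
    """Compute positive, negative and net preference flows for a set of actions.

    evals: list of per-action evaluation lists (one value per criterion, maximized)
    weights: criterion weights, non-negative and summing to 1
    pref: unicriterion preference function applied to evaluation differences
    """
    n = len(evals)
    # multicriteria preference degree pi(a_i, a_j) for every ordered pair
    pi = [[sum(w * pref(evals[i][k] - evals[j][k]) for k, w in enumerate(weights))
           for j in range(n)] for i in range(n)]
    phi_plus = [sum(pi[i][j] for j in range(n) if j != i) / (n - 1) for i in range(n)]
    phi_minus = [sum(pi[j][i] for j in range(n) if j != i) / (n - 1) for i in range(n)]
    net = [p - m for p, m in zip(phi_plus, phi_minus)]
    return phi_plus, phi_minus, net

usual = lambda d: 1.0 if d > 0 else 0.0   # "usual" (type I) preference function

# three hypothetical actions evaluated on two criteria, both maximized
plus, minus, net = promethee_flows([[8, 7], [5, 9], [4, 4]], [0.5, 0.5], usual)
# Promethee II: complete ranking by decreasing net flow
ranking = sorted(range(len(net)), key=lambda i: -net[i])
```

Note that the net flows always sum to zero across the action set, as stated above.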

PROMETHEE – Unicriterion net flows

According to the definition of the multicriteria preference degree, the multicriteria net flow can be disaggregated as follows:

\phi(a_i) = \sum_{k=1}^{q} w_k \cdot \phi_k(a_i)

The unicriterion net flow, denoted \phi_k(a_i) \in [-1,1], has the same interpretation as the multicriteria net flow \phi(a_i) but is limited to one single criterion. Any action a_i can be characterized by a vector \vec{\phi}(a_i) = [\phi_1(a_i), \ldots, \phi_k(a_i), \ldots, \phi_q(a_i)] in a q-dimensional space. The GAIA plane is the principal plane obtained by applying a principal components analysis to the set of actions in this space.
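As a sketch of that last step (the unicriterion net-flow matrix below is invented purely for illustration), the GAIA plane can be obtained from a principal components analysis of the actions in criterion space:

```python
import numpy as np

# hypothetical unicriterion net-flow matrix Phi: one row per action,
# one column per criterion, entries in [-1, 1]
Phi = np.array([[ 0.6, -0.2,  0.1],
                [-0.4,  0.5,  0.3],
                [-0.2, -0.3, -0.4]])

centered = Phi - Phi.mean(axis=0)          # center the cloud of actions
cov = centered.T @ centered / len(Phi)     # covariance matrix of the criteria
eigvals, eigvecs = np.linalg.eigh(cov)     # eigendecomposition (symmetric matrix)
order = np.argsort(eigvals)[::-1]          # components by decreasing variance
plane = eigvecs[:, order[:2]]              # two principal axes span the GAIA plane
coords = centered @ plane                  # 2-D position of each action on the plane
```

Projecting onto the two largest principal components preserves as much of the variance between actions as any plane can, which is why conflicts and clusters remain visible in the GAIA picture.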

PROMETHEE – Promethee I

Promethee I is a partial ranking of the actions. It is based on the positive and negative flows. It includes preferences, indifferences and incomparabilities (partial preorder).

PROMETHEE – Promethee II

Promethee II is a complete ranking of the actions. It is based on the multicriteria net flow. It includes preferences and indifferences (preorder).

Quantity discount – Prompt payment discount

Trade discounts are deductions from the list or catalogue price given by the wholesaler or manufacturer to the retailer. Cash discounts are reductions in price given by the creditor to the debtor; these discounts are intended to speed payment and thereby provide liquidity to the firm. They are sometimes used as a promotional device.

Outcome-based education – Approaches to grading, reporting, and promoting

An important by-product of this approach is that students are assessed against external, absolute objectives, instead of reporting the students’ relative achievements.

Under OBE, teachers can use any objective grading system they choose, including letter grades.

In one alternative grading approach, a student is awarded levels instead of letter grades. In this approach, students and their parents are better able to track progress from year to year, since the levels are based on criteria that remain constant for a student’s whole time at school.

This emphasis on recognizing positive achievements, and comparing the student to his or her own prior performance, has been accused by some of dumbing down education (and by others of making school much too hard), since it recognises achievement at different levels. Even those who would not achieve a passing grade in a traditional age-based approach can be recognized for their concrete, positive, individual improvements.

OBE-oriented teachers think about the individual needs of each student and give opportunities for each student to achieve at a variety of levels.

CBS – Promos

Rather than the usual television voiceovers, CBS used stars of several of its own shows to promote upcoming programming, including Ed Sullivan (The Ed Sullivan Show), Rod Serling (The Twilight Zone), and Raymond Burr and Barbara Hale (Perry Mason).

The Office (U.S. TV series) – Promotional

Characters have appeared in promotional materials for NBC, and a licensed video game, The Office, was released in 2007.

Nanobots (album) – Promotion

Before the release of the full album, two tracks from the album were released digitally.

They Might Be Giants are currently touring in support of Nanobots. The tour includes shows in North America and Australia. Through their online mailing list, the band has also indicated that they will be playing shows in the United Kingdom and Germany during the tour.

Bubble fusion – Doubts prompt investigation

Doubts among Purdue University’s Nuclear Engineering faculty as to whether the positive results reported from sonofusion experiments conducted there were truthful prompted the university to initiate a review of the research, conducted by Purdue’s Office of the Vice President for Research. In a March 9, 2006 article entitled Evidence for bubble fusion called into question, Nature interviewed several of Taleyarkhan’s colleagues who suspected something was amiss.

On February 7, 2007, the Purdue University administration determined that the evidence did not support the allegations of research misconduct and that no further investigation of the allegations was warranted.

In June 2008, a multi-institutional team including Taleyarkhan published a paper in Nuclear Engineering and Design to clear up misconceptions generated by a UCLA web posting which had served as the basis for the Nature article of March 2006, according to a press release.

On July 18, 2008, Purdue University announced that a committee with members from five institutions had investigated 12 allegations of research misconduct against Rusi Taleyarkhan. Taleyarkhan’s appeal of the report’s conclusions was rejected. On August 27, 2008, he was stripped of his named Arden Bement Jr. Professorship and forbidden to be a thesis advisor for graduate students for at least the next three years.

Despite the findings against him, Taleyarkhan received a $185,000 grant from the National Science Foundation between September 2008 and August 2009 to investigate bubble fusion. In 2009 the Office of Naval Research debarred him for 28 months, until September 2011, from receiving U.S. federal funding. During that period his name was listed in the ‘Excluded Parties List’ to prevent him from receiving further grants from any government agency.

Water resources – Shared water resources can promote collaboration

The institutions created by these agreements can, in fact, be one of the most important factors in ensuring cooperation rather than conflict.[http://www.iwmi.cgiar.org/Publications/Success_Stories/index.aspx Promoting cooperation through management of trans-boundary water resources], Success Stories, Issue 8, 2010, IWMI

One chapter covers the functions of trans-boundary institutions and how they can be designed to promote cooperation, overcome initial disputes and find ways of coping with the uncertainty created by climate change.

The Social Network – Promotion

The first theatrical poster was released on June 18, 2010.

Neuropathology – Prominent historical and current figures in neuropathology

Santiago Ramón y Cajal is considered one of the founders of modern neuroanatomy. Alois Alzheimer, the person after whom Alzheimer’s disease is named, is considered an important early contributor to the field.

There are many neuropathologists around the world who have made important clinical and research contributions toward our understanding of diseases that specifically affect the brain (degenerative diseases, multiple sclerosis, stroke, brain tumors, trauma and neuromuscular diseases).

Food and Drug Administration (United States) – Advertising and promotion

The drug advertising regulation (21 CFR 202: Prescription Drug Advertising) contains two broad requirements: (1) a company may advertise or promote a drug only for the specific indication or medical use for which it was approved by the FDA; and (2) an advertisement must contain a fair balance between the benefits and the risks (side effects) of a drug.

Bus – Promotion

The bus is sometimes staffed by promotions personnel, giving out free gifts.

Liver dialysis – Prometheus

Prometheus was proven to be a safe supportive therapy for patients with liver failure.

Fossil-fuel phase-out – Prominent individuals supporting a coal moratorium

* Al Gore: [http://nobelprize.org/nobel_prizes/peace/laureates/2007/gore-lecture_en.html Nobel Lecture], Oslo, December 10, 2007

* Banker and financier Tom Sanzillo, currently First Deputy Comptroller for the state of New York, called for a moratorium on new coal plants in the state of Iowa. Citing slow growth in electricity demand and better alternative sources of energy, Sanzillo said, “It’s not only good public policy, it’s great economics.” [http://www.youtube.com/watch?v=zG0pUjBr8KU Tom Sanzillo statement on YouTube]

Fossil-fuel phase-out – Prominent individuals supporting a coal phase-out

* Eric Schmidt, CEO of Google, called for replacing all fossil fuels with renewable sources of energy within twenty years. [http://www.mercurynews.com/olympics/ci_10419245 Google CEO Eric Schmidt offers energy plan], San Jose Mercury News, 9/9/08

Catalyst – Inhibitors, poisons and promoters

Substances that reduce the action of catalysts are called catalyst inhibitors if reversible, and catalyst poisons if irreversible. Promoters are substances that increase the catalytic activity, even though they are not catalysts by themselves.

Inhibitors are sometimes referred to as negative catalysts since they decrease the reaction rate. The inhibitor may modify selectivity in addition to rate.

The inhibitor can produce this effect by e.g.

Promoters can cover up the surface to prevent production of a mat of coke, or even actively remove such material (e.g. rhenium on platinum in platforming). They can aid the dispersion of the catalytic material or bind to reagents.

EUREKA Prometheus Project

The Eureka PROMETHEUS Project (PROgraMme for a European Traffic of Highest Efficiency and Unprecedented Safety, 1987–1995) was the largest R&D project ever in the field of driverless cars. It received funding from the EUREKA member states, and defined the state of the art of autonomous vehicles. Numerous universities and car manufacturers participated in this pan-European project.

PROMETHEUS profited from the participation of Ernst Dickmanns, the 1980s pioneer of driverless cars, and his team at Bundeswehr Universität München, collaborating with Daimler-Benz.

The next culmination point was achieved in 1995, when Dickmanns’ re-engineered autonomous S-Class Mercedes-Benz took a 1000-mile trip from Munich in Bavaria to Copenhagen in Denmark and back, using saccadic computer vision and transputers to react in real time.

The achievements of PROMETHEUS were the basis for most subsequent work on driverless cars.

EUREKA Prometheus Project – Participants

* Ernst Dickmanns and his team at Bundeswehr University of Munich
* PSA Peugeot Citroën

Education for Sustainable Development – Awards for education programs aimed at promoting sustainability programs such as EfS

The [http://zayedfutureenergyprize.com/en/ Zayed Future Energy Prize] launched a new Global High School Prize category in 2012.

The [http://teachamantofish.org.uk/pan-african-awards/en/ Educating Africa Award] for Entrepreneurship in Education rewards educational projects in Africa that are entrepreneurial, self-sustainable and creating impact.

Rockefeller University – Prominent alumni

* David Baltimore, recipient of the Nobel Prize in Physiology or Medicine in 1975 for the discovery of reverse transcriptase; has served as president of both Rockefeller University and the California Institute of Technology.
* Michael Bratman, Durfee Professor of Philosophy at Stanford University.
* Barbara Ehrenreich, social commentator and author of the 2001 book Nickel and Dimed: On (Not) Getting By in America.
* Jonathan Lear, the John U. Nef Distinguished Service Professor in the Committee on Social Thought and professor of philosophy at the University of Chicago, who specializes in Aristotle and psychoanalysis.
* Harvey Lodish, professor of biology at the Massachusetts Institute of Technology and founding member of the Whitehead Institute for Biomedical Research.
* Manuel Elkin Patarroyo, Colombian pathologist who made the world’s first attempt at a synthetic vaccine for malaria; recipient of the Prince of Asturias Award in 1994.
* Robert Sapolsky, Stanford professor, MacArthur “Genius Grant” recipient, and writer of numerous books on stress and natural history.
* Amos Smith, Rhodes-Thompson Professor of Chemistry at the University of Pennsylvania.
* Richard Wolfenden, professor of chemistry, biochemistry and biophysics at the University of North Carolina at Chapel Hill.

Manufactured controversy – Prominent examples

Examples of controversies that have been labeled manufactured controversies:

* Development of skin cancer from exposure to ultraviolet radiation via sunlight and tanning lamps
* Denial of the Armenian Genocide by the government of Turkey (State of Denial: Turkey Spends Millions to Cover Up Armenian Genocide. Intelligence Report, Summer 2008, Issue 130. [http://www.splcenter.org/get-informed/intelligence-report/browse-all-issues/2008/summer/state-of-denial#])
* Denial of the Holocaust of the Jews during WWII
* Vaccination controversies, particularly those alleging a causative relationship between the MMR vaccine or thiomersal and the development of autism spectrum disorders ([http://www.skeptic.com/eskeptic/09-06-03/ Vaccines & Autism: A Deadly Manufactroversy], Harriet Hall, Skeptic Magazine, Vol. 15, No. 2, June 3, 2009)
* The Teach the Controversy efforts of intelligent design supporters
* The carcinogenicity of hexavalent chromium

Promession

'Promession' is a proposed form of burial in which human remains are disposed of by way of freeze-drying.

The concept of promession was developed as an environmentally friendly method of burial by Swedish biologist Susanne Wiigh-Mäsak, who derived the name from the Italian word for promise (promessa). She founded Promessa Organic AB in 1997 to exploit her idea.

The process comprises the following steps:

#The body is frozen by immersion in liquid nitrogen to make it brittle

#The remains are then subjected to a vacuum so that the ice sublimes and the powder becomes dry, weighing 50% to 70% less than the original body

#The dry powder is placed in a biodegradable casket which is interred in the top layers of soil, where aerobic bacteria decompose the remains into humus in as little as 12 months

Promession – Current status

From 2004, trials were performed on pigs, and AGA Gas developed a proof of concept. However, a third party is needed to enter into an agreement with Promessa to order the equipment needed for promession of human cadavers.

Some independent attempts to reproduce Promessa's early results have so far been unsuccessful, which the original innovators claim is due to a lack of skills in cryogenic freezing and vibration technology.

Wiigh-Mäsak had received expressions of interest from more than 60 countries, including Vietnam, the United Kingdom, South Africa, the Netherlands, Canada, and the United States. In South Korea, the technology was expressly legalized.

Promession – Public opinion

An opinion poll run by Ny Teknik in Sweden showed support for promession.[http://www.nyteknik.se/nyheter/energi_miljo/miljo/article3636562.ece Metoderna som ersätter kremering – NyTeknik] In a popularity contest among about 70 innovative companies in Sweden, Promessa was judged the most popular.[http://www.framtidslyftet.se/sadd/heta-listan/ Heta listan » Framtidslyftet]

Food production – Prominent Food Companies

* PepsiCo: largest U.S.-based food and beverage company.

* [http://www.cswg.com C&S Wholesale Grocers]: a leading supply-chain company in the food industry and the largest wholesale grocery supply company in the U.S.

* Unilever: Anglo-Dutch company that owns many of the world's consumer product brands in foods and beverages.

* Kraft: the world's second largest food company, following its acquisition of Cadbury in 2010.

* DuPont and Monsanto: leading producers of pesticides, seeds, and other farming products.

* Archer Daniels Midland and Cargill: both process grain into animal feed and a diverse group of products. ADM also provides agricultural storage and transportation services, while Cargill operates a finance wing.

* Bunge Limited: global soybean exporter that is also involved in food processing, grain trading, and fertilizer.

* BRF: global meat company that produces frozen foods, dairy products and others.

* Dole Food Company: world's largest fruit company. Chiquita Brands International, another U.S.-based fruit company, is the leading distributor of bananas in the United States.

* Sunkist Growers, Incorporated: a U.S.-based growers' cooperative.

* JBS S.A.: world's largest processor and marketer of chicken, beef, and pork. Smithfield Foods is the world's largest pork processor and producer.

* Sysco Corporation: one of the world's largest food distributors, mainly serving North America.

* General Mills: world's sixth largest food manufacturing company.

* Grupo Bimbo: one of the world's most important baking companies in brand positioning, sales and production volume.

Bone grafting – Osteopromotion

Osteopromotion involves the enhancement of osteoinduction without the possession of osteoinductive properties. For example, enamel matrix derivative has been shown to enhance the osteoinductive effect of demineralized freeze-dried bone allograft (DFDBA), but will not stimulate de novo bone growth alone.

Transcription (genetics) – Promoter clearance

After the first bond is synthesized, the RNA polymerase must clear the promoter. During this time there is a tendency to release the RNA transcript and produce truncated transcripts. This is called abortive initiation and is common for both eukaryotes and prokaryotes.

Mechanistically, promoter clearance occurs through a scrunching mechanism, in which the energy built up during scrunching provides the energy needed to move the RNAP complex and clear the promoter.

In eukaryotes, after several rounds of 10 nt abortive initiation, promoter clearance coincides with TFIIH's phosphorylation of serine 5 on the carboxy-terminal domain of RNAP II, leading to the recruitment of capping enzyme (CE). The exact mechanism by which CE induces promoter clearance in eukaryotes is not yet known.

Funeral – Promession

Promession is a new method of disposing of the body. Patented by a Swedish company, promession is also known as an ecological funeral. Its main purpose is to return the body to soil quickly while minimizing pollution and resource consumption.

Wikipedia talk:WikiProject Philosophy – Promotion of Leonard F. Wheat's non-mainstream views in several Hegel-related articles

Wheat] ([http://www.prometheusbooks.com/index.php?main_page=product_infoproducts_id=2147 2012])

At least three editors (including me) have been participating in the discussions above since early November

:I support the undoing of much or all of Atticusator's work on articles related to Hegel (and Marx, although I believe that his attempts were less successful there). I found his edits to be contrary to NPOV and attempted to reason with him early on but was not successful. -- goethean 01:58, 5 January 2014 (UTC)

Sylvester Stallone – Tobacco promotion

representing their client, cigarette manufacturer Brown & Williamson Corp., to use or place B&W products in five of his feature films.[http://legacy.library.ucsf.edu/tid/hlm56b00 Re: agreements between Stallone and Associated Film Promotions] Legacy Tobacco Documents Library In exchange, Stallone was paid a total of $500,000, disbursed as $250,000 up front and $50,000 payable at the inception of production of each participating film

Avatar (2009 film) – Promotions

The first photo of the film was released on , 2009, and Empire magazine released exclusive images from the film in its October issue. Cameron, producer Jon Landau, Zoe Saldana, Stephen Lang, and Sigourney Weaver appeared at a panel, moderated by Tom Rothman, at the 2009 San Diego Comic-Con on . Twenty-five minutes of footage was screened in Dolby 3D.

Weaver and Cameron appeared at additional panels to promote the film, speaking on the 23rd and 24th respectively. James Cameron announced at the Comic-Con Avatar Panel that will be "Avatar Day". On this day the trailer for the film was released in all theatrical formats. The official game trailer and toy line of the film were also unveiled on this day.

An extended version in IMAX 3D received overwhelmingly positive reviews. The Hollywood Reporter said that audience expectations were coloured by the "[same] establishment skepticism that preceded Titanic" and suggested the showing reflected the desire for original storytelling.

On October 30, to celebrate the opening of the first 3-D cinema in Vietnam, Fox allowed Megastar Cinema to screen an exclusive 16 minutes of Avatar for members of the press.

McDonald's ran a promotion in European television commercials called "Avatarize yourself", which encouraged people to go to the website set up by Oddcast and use a photograph of themselves to change into a Na'vi.

Mitotic – Prometaphase

Note: Prometaphase is sometimes included as part of the end of prophase and early metaphase.

During early prometaphase, the nuclear membrane disintegrates and microtubules invade the nuclear space. This is called open mitosis, and it occurs in most multicellular organisms. Fungi and some protists, such as algae or trichomonads, undergo a variation called closed mitosis, in which the spindle forms inside the nucleus or its microtubules penetrate the nuclear membrane, which stays intact.

In late prometaphase, each chromosome forms two kinetochores at its centromere, one attached at each chromatid.

When the spindle grows to sufficient length, kinetochore microtubules begin searching for kinetochores to attach to. A number of nonkinetochore microtubules find and interact with corresponding nonkinetochore microtubules from the opposite centrosome to form the mitotic spindle.

In the fishing pole analogy, the kinetochore would be the hook that catches a sister chromatid or "fish". The centrosome acts as the reel that draws in the spindle fibers or "fishing line". Prometaphase is also one of the main phases of mitosis, because without it cytokinesis would not be able to occur.

Patentable subject matter – Mayo Collaborative Services v. Prometheus Laboratories

Supreme Court slip opinion] that a process patent Prometheus Laboratories had obtained, covering correlations between blood test results and patient health used to determine an appropriate dosage of a specific medication for the patient, is not eligible for a patent because the correlation is a law of nature.

Royal Academy of Engineering – Promoting engineering at the heart of society

The Academy organises a number of events and debates ([http://www.raeng.org.uk/events/default.htm Academy Events], retrieved 2013-10-21), in addition to producing a quarterly magazine, Ingenia ([http://www.ingenia.org.uk/ Ingenia Online], retrieved 2013-10-21), to reach a variety of different audiences and enhance awareness of engineering and how it influences the wider world.

The Academy also recognises and celebrates the most talented engineers by awarding prestigious prizes, including the annual MacRobert Award, Britain's top prize for engineering innovation ([http://www.raeng.org.uk/prizes/default.htm Prizes], retrieved 2013-08-13).

The Academy's public spaces in its building at 3 Carlton House Terrace have undergone renovation and were re-opened in spring 2012 to provide a central platform for the UK to become a forum for engineering engagement, debate, discussion and celebration ([http://www.raeng.org.uk/facilities/default.htm Conference Facilities], retrieved 2013-10-22).

Johnny Mnemonic (film) – Transmedia presence and promotion

Johnny Mnemonic was touted with pride by Sony as a film project of unparalleled corporate synergy. Simultaneous with Sony Pictures's release of the film, its soundtrack was released by Sony subsidiary Columbia Records, while the corporation's digital effects division Sony ImageWorks issued a CD-ROM videogame version for DOS, Mac and Windows 3.x.

The Johnny Mnemonic videogame, which was developed by Evolutionary Publishing, Inc

The film's website facilitated further cross-promotion by selling Sony Signatures-issued Johnny Mnemonic merchandise such as a "hack your own brain" t-shirt and Pharmakom coffee cups.

EEPROM

'EEPROM' (also written 'E2PROM' and pronounced e-e-prom, double-e prom, e-squared, or simply e-prom) stands for Electrically Erasable Programmable Read-Only Memory, and is a type of non-volatile memory used in computers and other electronic devices to store small amounts of data that must be saved when power is removed, e.g., calibration tables or device configuration.

Unlike bytes in most other kinds of non-volatile memory, individual bytes in a traditional EEPROM can be independently read, erased, and re-written.

When larger amounts of static data are to be stored (such as in USB flash drives), a specific type of EEPROM such as flash memory is more economical than traditional EEPROM devices. EEPROMs are realized as arrays of floating-gate transistors.

It is for this reason that EEPROMs were used for configuration information, rather than random access memory.

EEPROM – History

In 1978, George Perlegos at Intel developed the Intel 2816, which was built on earlier EPROM technology, but used a thin gate oxide layer so that the chip could erase its own bits without requiring a UV source. Perlegos and others later left Intel to form [http://www.antiquetech.com/companies/seeq_technology.htm Seeq Technology], which used on-device charge pumps to supply the high voltages necessary for programming EEPROMs.

EEPROM – Serial bus devices

The most common serial interface types are SPI, I²C, Microwire, UNI/O, and 1-Wire. These interfaces require between one and four control signals for operation, allowing the memory device to fit in an eight-pin (or smaller) package.

The serial EEPROM (or 'SEEPROM') typically operates in three phases: OP-Code Phase, Address Phase and Data Phase. The OP-Code is usually the first 8 bits input to the serial input pin of the EEPROM device (or is implicit with most I²C devices); it is followed by 8 to 24 bits of addressing, depending on the depth of the device, and then the data to be read or written.
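The three phases can be illustrated as the byte sequence a host would clock out for a read. This is a minimal sketch assuming the opcode values commonly used by 25-series SPI EEPROMs; a specific part's datasheet is authoritative.

```python
# Sketch of framing a serial-EEPROM READ transaction following the
# OP-Code / Address / Data phases described above. The opcode values
# below are those commonly used by 25-series SPI EEPROMs; consult the
# specific device's datasheet before relying on them.

READ = 0x03   # read data from memory
WRITE = 0x02  # write data to memory
WREN = 0x06   # set the write-enable latch (must precede a WRITE)

def read_frame(address: int, addr_bytes: int = 2) -> bytes:
    """Build the byte sequence clocked out on the serial input pin for a
    READ: one opcode byte followed by a big-endian address (8 to 24 bits,
    depending on the depth of the device)."""
    if not 1 <= addr_bytes <= 3:
        raise ValueError("serial EEPROMs typically use 8 to 24 address bits")
    return bytes([READ]) + address.to_bytes(addr_bytes, "big")

# Reading from address 0x01F4 on a device with 16-bit addressing:
frame = read_frame(0x01F4)
# frame is b'\x03\x01\xf4': opcode, address high byte, address low byte
```

After the opcode and address bytes are shifted in, the device answers the Data Phase by shifting the addressed bytes out on the serial output pin.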

Each EEPROM device typically has its own set of OP-Code instructions mapped to different functions; some operations are common to most SPI EEPROM devices, while others are supported only by particular devices.

EEPROM – Parallel bus devices

Parallel EEPROM devices typically have an 8-bit data bus and an address bus wide enough to cover the complete memory. Most devices have chip select and write protect pins. Some microcontrollers also have integrated parallel EEPROM.

Operation of a parallel EEPROM is simple and fast when compared to serial EEPROM, but these devices are larger due to the higher pin count (28 pins or more) and have been decreasing in popularity in favor of serial EEPROM or Flash.

EEPROM – Other devices

EEPROM memory is used to enable features in other types of products that are not strictly memory products. Products such as real-time clocks, digital potentiometers, and digital temperature sensors, among others, may have small amounts of EEPROM to store calibration information or other data that needs to be available in the event of power loss.

EEPROM was also used in video game cartridges to save game progress and configurations, before the use of external and internal flash memory became common.

EEPROM – Failure modes

There are two limitations of stored information: endurance and data retention.

Manufacturers usually specify a maximum number of rewrites of 1 million or more (www.rohm.com/products/lsi/eeprom/faq.html).
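As a rough illustration of what such an endurance figure means in practice, the arithmetic below converts a rewrite rate into years of service; the write rate used is an invented example value.

```python
# Back-of-the-envelope endurance estimate using the ~1 million rewrite
# figure quoted above. The write rate is an invented example value.

def years_to_wearout(writes_per_day: float, endurance: int = 1_000_000) -> float:
    """Years until a single cell reaches its rewrite endurance limit."""
    return endurance / writes_per_day / 365.0

# Logging one value every 10 minutes rewrites the same cell 144 times a day,
# so a 1-million-cycle cell lasts on the order of two decades:
estimate = years_to_wearout(144)
# estimate is roughly 19 years; spreading writes across cells extends this
```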

During storage, the electrons injected into the floating gate may drift through the insulator, especially at increased temperature, and cause charge loss, reverting the cell to the erased state. Manufacturers usually guarantee data retention of 10 years or more (System Integration – From Transistor Design to Large Scale Integrated Circuits).

EEPROM – Related types

Flash memory is a later form of EEPROM. In the industry, there is a convention to reserve the term EEPROM for byte-wise erasable memories, as compared to block-wise erasable flash memories. EEPROM takes more die area than flash memory for the same capacity, because each cell usually needs a read, a write, and an erase transistor, while in flash memory the erase circuits are shared by large blocks of cells (often 512×8).

Newer non-volatile memory technologies such as Ferroelectric RAM|FeRAM and MRAM are slowly replacing EEPROMs in some applications, but are expected to remain a small fraction of the EEPROM market for the foreseeable future.

EEPROM – Comparison with EPROM and EEPROM/Flash

The difference between EPROM and EEPROM lies in the way the memory is programmed and erased. EEPROM can be programmed and erased electrically using field electron emission (more commonly known in the industry as Fowler–Nordheim tunneling).

EPROMs cannot be erased electrically, and are programmed via hot carrier injection onto the floating gate. Erasure is via an ultraviolet light source, although in practice many EPROMs are encapsulated in plastic that is opaque to UV light, making them one-time programmable.

Most NOR Flash memory is a hybrid style: programming is through hot carrier injection and erasure is through Fowler–Nordheim tunneling.

EEPROM – EEPROM manufacturers

*National Semiconductor (no longer makes standalone EEPROMs)

*Samsung Electronics

Promoter bashing

The importance of a promoter region can be inferred from the level of transcription it drives.

# Clone the region of DNA thought to act as a promoter

# Transform cells of interest with the various promoter:reporter constructs

# Measure reporter-gene transcription rates by assaying the reporter gene product
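The measurement step above is usually interpreted by normalizing each construct's reporter signal to that of the full-length promoter, so deletions that remove important elements show up as a large drop in relative activity. A minimal sketch with invented construct names and assay values:

```python
# Interpreting the promoter-bashing read-out: reporter signal from each
# promoter-deletion construct is normalized to the full-length promoter.
# Construct names and assay numbers are invented illustrative values.

def relative_activity(assays: dict[str, float], reference: str) -> dict[str, float]:
    """Express each construct's reporter signal as a fraction of the
    reference (full-length promoter) construct."""
    ref = assays[reference]
    return {name: signal / ref for name, signal in assays.items()}

measurements = {
    "full_length": 1200.0,    # complete cloned promoter region
    "del_-500_-250": 1150.0,  # little change: deleted region is dispensable
    "del_-250_-50": 90.0,     # large drop: deleted region holds key elements
}
rel = relative_activity(measurements, "full_length")
# rel["del_-250_-50"] == 0.075, flagging the -250..-50 region as important
```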

Gene therapy of the human retina – Promoter Sequence

Expression in various retinal cell types can be determined by the promoter sequence. In order to restrict expression to a specific cell type, a tissue-specific or cell-type-specific promoter can be used.

Other ubiquitous promoters, such as the CBA promoter, a fusion of the chicken β-actin promoter and the CMV immediate-early enhancer, allow stable GFP reporter expression in both RPE and photoreceptor cells after subretinal injections.

Hemoencephalography – Promising research

Most research in HEG has focused on disorders of the prefrontal cortex (PFC), the cortical region directly behind the forehead that controls high-level executive functions such as planning, judgment, emotional regulation, inhibition, organization, and cause-and-effect determination.

Food marketing – Promotion

Promoting a food to consumers is done out of store, in store, and on package. Advertisements on television and in magazines are attempts to persuade consumers to think favorably about a product, so that they go to the store to purchase the product. In addition to advertising, promotions can also include Sunday newspaper ads that offer coupons such as cents-off and buy-one-get-one-free offers.

SmartStax – Promotion and branding

The "smart" portion of SmartStax is an acronym standing for Spectrum, Multiple (modes of action), Acceleron, Reduced (corn refuge acres), and Total (peace of mind) (www.genuity.com/Home.aspx#/home). Dow has not promoted SmartStax as heavily.

Promoter (biology)

'Top': The gene is essentially turned off. There is no lactose to inhibit the repressor, so the repressor binds to the operator, which obstructs the RNA polymerase from binding to the promoter and making lactase.

'Bottom': The gene is turned on. Lactose is inhibiting the repressor, allowing the RNA polymerase to bind with the promoter and express the genes, which synthesize lactase. Eventually, the lactase will digest all of the lactose, until there is none to bind to the repressor. The repressor will then bind to the operator, stopping the manufacture of lactase.

In genetics, a 'promoter' is a region of DNA that initiates transcription of a particular gene. Promoters are located near the genes they transcribe, on the same strand and upstream on the DNA (towards the 3′ region of the anti-sense strand, also called the template strand or non-coding strand).

Promoters can be about 100–1000 base pairs long.

Promoter (biology) – Overview

These transcription factors have specific activator or repressor sequences of corresponding nucleotides that attach to specific promoters and regulate gene expression.

In bacteria: The promoter is recognized by RNA polymerase and an associated sigma factor, which in turn are often brought to the promoter DNA by an activator protein's binding to its own DNA binding site nearby.

In eukaryotes: The process is more complicated, and at least seven different factors are necessary for the binding of an RNA polymerase II to the promoter.

Promoters represent critical elements that can work in concert with other regulatory regions (enhancers, silencers, boundary elements/insulators) to direct the level of transcription of a given gene.

Promoter (biology) – Identification of relative location

As promoters are typically immediately adjacent to the gene in question, positions in the promoter are designated relative to the transcriptional start site, where transcription of DNA begins for a particular gene (i.e., positions upstream are negative numbers counting back from -1; for example, -100 is a position 100 base pairs upstream).
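This numbering convention (the start site at +1, no position 0, upstream positions counting back from -1) can be captured in a few lines; the function below is an illustrative sketch, not a standard bioinformatics routine:

```python
# The TSS-relative numbering described above: the start site is +1,
# there is no position 0, and upstream positions count back from -1.

def to_promoter_coordinate(genomic_pos: int, tss: int) -> int:
    """Convert an absolute sense-strand position to a TSS-relative one."""
    offset = genomic_pos - tss
    return offset + 1 if offset >= 0 else offset

# With a TSS at absolute position 5000:
# to_promoter_coordinate(5000, 5000) ->   +1 (the start site itself)
# to_promoter_coordinate(4999, 5000) ->   -1 (base immediately upstream)
# to_promoter_coordinate(4900, 5000) -> -100 (100 bp upstream)
```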

Promoter (biology) – Relative location in the cell nucleus

In the cell nucleus, it seems that promoters are distributed preferentially at the edge of the chromosomal territories, likely for the co-expression of genes on different chromosomes. Furthermore, in humans, promoters show certain structural features characteristic for each chromosome.

Promoter (biology) – Promoter elements

* Core promoter – the minimal portion of the promoter required to properly initiate transcription

** Includes the Transcription Start Site (TSS) and elements directly upstream

*** RNA polymerase II: transcribes genes encoding messenger RNA and certain small nuclear RNAs

* Proximal promoter – the proximal sequence upstream of the gene that tends to contain primary regulatory elements

** Approximately 250 base pairs upstream of the start site

* Distal promoter – the distal sequence upstream of the gene that may contain additional regulatory elements, often with a weaker influence than the proximal promoter

** Anything further upstream (but not an enhancer or other regulatory region whose influence is positional/orientation independent)

Promoter (biology) – Bacterial promoters

In bacteria, the promoter contains two short sequence elements approximately 10 and 35 nucleotides upstream from the transcription start site (the -10 and -35 elements).

* The above consensus sequences, while conserved on average, are not found intact in most promoters. On average, only 3 to 4 of the 6 base pairs in each consensus sequence are found in any given promoter. Few natural promoters have been identified to date that possess intact consensus sequences at both the -10 and -35; artificial promoters with complete conservation of the -10 and -35 elements have been found to transcribe at lower frequencies than those with a few mismatches with the consensus.

* Some promoters contain one or more upstream promoter element (UP element) subsites (consensus sequence 5′-AAAAAARNR-3′ when centered in the -42 region; consensus sequence 5′-AWWWWWTTTTT-3′ when centered in the -52 region; W = A or T; R = A or G; N = any base).

The above promoter sequences are recognized only by RNA polymerase holoenzyme containing sigma-70. RNA polymerase holoenzymes containing other sigma factors recognize different core promoter sequences.

(Note that the optimal spacing between the -35 and -10 sequences is 17 bp.)
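The partial-match behaviour described above can be illustrated by counting how many positions of a candidate hexamer agree with the widely cited sigma-70 consensus elements (TATAAT at -10, TTGACA at -35); the candidate sequence below is invented for illustration:

```python
# Scoring a candidate hexamer against the sigma-70 consensus elements.
# TATAAT (-10) and TTGACA (-35) are the widely cited consensus sequences;
# the candidate below is an invented example.

MINUS_10 = "TATAAT"
MINUS_35 = "TTGACA"

def consensus_matches(candidate: str, consensus: str) -> int:
    """Count positions at which a candidate matches the consensus."""
    if len(candidate) != len(consensus):
        raise ValueError("candidate must be the same length as the consensus")
    return sum(a == b for a, b in zip(candidate.upper(), consensus))

# A typical natural -10 element matches only partially:
score = consensus_matches("TACAGT", MINUS_10)
# score == 4, consistent with the 3-4 of 6 matches seen in natural promoters
```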

For More Information, Visit:

store.theartofservice.com/itil-2011-foundation-complete-certification-kit-fourth-edition-study-guide-ebook-and-online-course.html


A Breath Of Fresh CADE Air

Download (PPT, 1.29MB)



CADE

Cornell Law School Academics

Doctor of Juridical Science (J.S.D.)

Joint program with Samuel Curtis Johnson Graduate School of Management (JD/MBA)

Joint program with Cornell School of Industrial & Labor Relations (JD/MILR)

Joint program with Cornell Institute for Public Affairs (JD/MPA)

Joint program with Cornell College of Architecture, Art, and Planning (JD/MRP)

Joint programs in various fields (JD/MA/PhD)

The advanced degrees in law, LL.M. and J.S.D., have been offered at Cornell since 1928. The JD/MBA has three- and four-year tracks, while the JD/MILR, JD/MPA, and JD/MRP programs are each four years.

In addition, Cornell has joint program arrangements with universities abroad to prepare students for international licensure:

Joint program with University of Paris (La Sorbonne) (JD/Master en Droit)

Joint program with Humboldt University of Berlin (JD/M.LL.P)

Joint program with Institut d'Études Politiques de Paris (JD/Master in Global Business Law)

The JD/Master en Droit lasts four years and prepares graduates for admission to the bar in the United States and in France. The JD/M.LL.P is three years and conveys a mastery of German and European law and practices. The JD/Master in Global Business Law lasts three years.

Cornell Law School runs two summer institutes overseas, providing Cornell Law students with unique opportunities to engage in rigorous international legal studies

In 2006, Cornell Law School announced that it would launch a second summer law institute, the new Workshop in International Business Transactions with Chinese Characteristics in Suzhou, China. In partnership with Bucerius Law School (Germany) and Kenneth Wang School of Law at Soochow University (China), Cornell Law provides students from the United States, Europe, and China with an academic forum in which they can collaborate on an international business problem.

Academic conference

An academic conference or symposium is a conference for researchers (not necessarily academics) to present and discuss their work. Together with academic or scientific journals, conferences provide an important channel for exchange of information between researchers.

Academic conference Overview

Usually a conference will include keynote speakers (often, scholars of some standing, but sometimes individuals from outside academia)

In addition to presentations, conferences also feature panel discussions, round tables on various issues and workshops.

Prospective presenters are usually asked to submit a short abstract of their presentation, which will be reviewed before the presentation is accepted for the meeting. Some disciplines require presenters to submit a paper of about 6–15 pages, which is peer reviewed by members of the program committee or referees chosen by them.

In some disciplines, such as English and other languages, it is common for presenters to read from a prepared script. In other disciplines, such as the sciences, presenters usually base their talk around a visual presentation that displays key figures and research results.

A large meeting will usually be called a conference, while a smaller one is termed a workshop. Meetings might be single-track or multiple-track: a single-track meeting has only one session at a time, while a multiple-track meeting has several parallel sessions with speakers in separate rooms speaking at the same time.

At some conferences, social or entertainment activities such as tours and receptions can be part of the program. Business meetings for learned societies or interest groups can also be part of the conference activities.

The larger the conference, the more likely it is that academic publishing houses will set up displays. Large conferences may also have career, job search, and interview activities.

Three broad types of conference are common:

the themed conference, a small conference organized around a particular topic;

the general conference, a conference with a wider focus, with sessions on a wide variety of topics. These conferences are often organized by regional, national, or international learned societies, and held annually or on some other regular basis.

the professional conference, a large conference not limited to academics but with academically related issues.

Increasing numbers of amplified conferences are being provided, which exploit the potential of WiFi networks and mobile devices to enable remote participants to contribute to discussions and listen to ideas.

Academic conference Organizing an academic conference

Conferences are usually organized either by a scientific society or by a group of researchers with a common interest. Larger meetings may be handled on behalf of the scientific society by a Professional Conference Organiser (PCO).

The meeting is announced by way of a "Call For Papers" or a "Call For Abstracts", which lists the meeting's topics and tells prospective presenters how to submit their abstracts or papers. Increasingly, submissions take place online using a managed service such as Community of Science or Oxford Abstracts.

Academic conference Professional conference organisers – trade bodies

Meetings Industry Association – UK conference organisers

Academic conference Lists of conferences

Attendconference.com (science and business events)

EUAgenda (European Professional Event and Academic Conferences)

Eventseer.net (computer science and linguistics)

Microbiology conferences (Worldwide microbiology conferences, meetings, symposia, workshops and advanced courses)

Molecular biology conferences (Worldwide molecular biology conferences, meetings, symposia, workshops and advanced courses)

Academic conference Conference publishing services

Proceedings of Science, an open-access publishing service that grew out of the Journal of High Energy Physics (JHEP), organized by SISSA Medialab

CEUR Workshop Proceedings, a free electronic publication service under the umbrella of RWTH Aachen University, with ISSN 1613-0073

Computing Research Repository, a free repository of scientific papers sponsored by ACM, arXiv, NCSTRL, and AAAI

Academia.edu Financial history

In November 2011, Academia.edu raised $4.5 million from Spark Capital and True Ventures. Prior to that, it had raised $2.2 million from Spark Ventures, and a range of angel investors including Mark Shuttleworth, Thomas Lehrman, and Rupert Pennant-Rea.

Academia.edu Open science

Academia.edu is a participant in the open science or open access movements, responding to a perceived need in science for instant distribution of research and the need for a peer-review system that occurs alongside distribution, instead of occurring before it. Accordingly, the company has stated its opposition to the Research Works Act.

Academia.edu Reception

TechCrunch remarked that Academia.edu gives academics a “powerful, efficient way to distribute their research” and that it “will let researchers keep tabs on how many people are reading their articles with specialized analytics tools”, and “also does very well in Google search results.”

Academia.edu Domain name

Academia.edu is not a university or institution of higher learning, so under current standards it would not qualify for the .edu top-level domain. The domain name “Academia.edu” was registered in 1999, prior to the regulations requiring .edu domain names to be held by accredited post-secondary institutions. All .edu domain names registered before 2001 were grandfathered in and exempted from this requirement.

Academia.edu Other open access repositories

Archives ouvertes University of Geneva

Hypertext Academic conferences

Among the top academic conferences for new research in hypertext is the annual ACM Conference on Hypertext and Hypermedia. Although not exclusively about hypertext, the World Wide Web series of conferences, organized by IW3C2, include many papers of interest. There is a list on the Web with links to all conferences in the series.

Apache Hadoop Industry support of academic clusters

IBM and Google announced an initiative in 2007 to use Hadoop to support university courses in distributed computer programming.

Apache Hadoop Industry support of academic clusters

In 2008 this collaboration, the Academic Cloud Computing Initiative (ACCI), partnered with the National Science Foundation to provide grant funding to academic researchers interested in exploring large-data applications. This resulted in the creation of the Cluster Exploratory (CLuE) program.
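
The “distributed computer programming” taught in such courses centers on Hadoop’s MapReduce model. As a rough sketch (not Hadoop’s actual API, which uses Mapper and Reducer classes, typically in Java), the canonical word-count exercise can be expressed in plain, single-process Python, with `map_fn`, `shuffle`, and `reduce_fn` as illustrative stand-ins for the phases the framework runs across a cluster:

```python
from collections import defaultdict

def map_fn(line):
    # Map phase: emit a (word, 1) pair for every word in an input line.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Shuffle phase: group all emitted values by key, as the framework
    # does between the map and reduce phases.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_fn(key, values):
    # Reduce phase: combine all counts for one word.
    return key, sum(values)

def word_count(lines):
    pairs = [pair for line in lines for pair in map_fn(line)]
    return dict(reduce_fn(k, v) for k, v in shuffle(pairs).items())

counts = word_count(["the cat sat", "the cat ran"])
# counts == {"the": 2, "cat": 2, "sat": 1, "ran": 1}
```

In real Hadoop the map and reduce phases run on many machines over data in HDFS; the sketch above only shows the programming model students are asked to think in.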

Burson-Marsteller 2000s (decade): Current era

Young & Rubicam became a subsidiary of the media group WPP Group PLC in 2000, and Burson-Marsteller became part of WPP

Burson-Marsteller 2000s (decade): Current era

In December 2005, Burson-Marsteller acquired the Indian firm Genesis PR as a wholly owned subsidiary. Following this acquisition India and China became Burson-Marsteller’s second and third largest markets worldwide, based on number of employees. The renamed Genesis Burson-Marsteller was announced as the company’s hub for the South Asian market in 2008. Prior to the acquisition, since 2002, Genesis had been Burson-Marsteller’s exclusive representative in India.

Burson-Marsteller 2000s (decade): Current era

Mark Penn became the CEO of Burson-Marsteller in December 2005, following a period of instability at the firm during which there were three leadership changes in one year

Burson-Marsteller 2000s (decade): Current era

Penn and Burson-Marsteller received negative media attention in 2008 when his work on behalf of the Colombian government (then seeking a free-trade agreement with the US) became a political liability for the presidential campaign of Hillary Clinton, who opposed a free-trade pact with Colombia. Penn described the dual role as an “error in judgment”, after which the Colombian government terminated its client relationship. Clinton later revised her opinion in favor of the free-trade pact.

Burson-Marsteller 2000s (decade): Current era

Penn’s leadership at Burson-Marsteller has been cited by PR Week as a model for the public relations industry, particularly combining public affairs experience with public relations. In April 2011, industry expert Paul Holmes named Burson-Marsteller the US Large Agency of the Year, citing its double-digit growth within the US and record 2010 profits as factors in the award, crediting Penn with improved performance and Burson’s “global recovery”.

Burson-Marsteller 2000s (decade): Current era

Notable clients for Burson-Marsteller in the late 2000s include Ford Motor Company, which engaged the firm as crisis-management consultants in 2009, and American International Group (AIG), for which the firm undertook crisis-management work in 2008 and 2009.

Burson-Marsteller 2000s (decade): Current era

In May 2011, Burson-Marsteller was hired by Facebook to conduct a PR attack on Google. Burson-Marsteller contacted a number of media companies and bloggers in an effort to get them to write unflattering stories about Google. The campaign backfired when one of the bloggers went public by posting the emails he received from Burson-Marsteller on the Internet.

Collaboration Academia

Black Mountain College

Founded in 1933 by John Andrew Rice, Theodore Dreier and other former faculty of Rollins College, Black Mountain was experimental by nature and committed to an interdisciplinary approach, attracting a faculty which included many of America’s leading visual artists, poets, and designers.

Operating in a relatively isolated rural location with little budget, Black Mountain College inculcated an informal and collaborative spirit, and over its lifetime attracted a venerable roster of instructors

Not a haphazardly conceived venture, Black Mountain College was a consciously directed liberal arts school that grew out of the progressive education movement

This analysis does not take into account the appearance of learning communities in the United States in the early 1980s. For example, The Evergreen State College, widely considered a pioneer in this area, established an intercollegiate learning community in 1984. In 1985, the same college established The Washington Center for Improving the Quality of Undergraduate Education, which focuses on collaborative education approaches, with learning communities as one of its centerpieces.

Politics As an academic discipline

Political science, the study of politics, examines the acquisition and application of power

Politics As an academic discipline

The first academic chair devoted to politics in the United States was the chair of history and political science at Columbia University, first occupied by Prussian émigré Francis Lieber in 1857.

Global governance Academic tool or discipline

In light of the unclear meaning of the term “global governance” as a concept in international politics, some authors have proposed to define it not in substantive but in methodological terms.

Democracy Popular rule as a façade

The 20th-century Italian thinkers Vilfredo Pareto and Gaetano Mosca independently argued that democracy was illusory, serving only to mask the reality of elite rule.

Democracy Popular rule as a façade

All political parties in Canada are now cautious about criticizing the high level of immigration, because, as noted by The Globe and Mail, “in the early 1990s, the old Reform Party was branded ‘racist’ for suggesting that immigration levels be lowered from 250,000 to 150,000.”

C++Builder Embarcadero C++Builder

C++Builder 2009 was released in August 2008; its most notable improvements were full Unicode support throughout the VCL and RTL, early adoption of the C++0x standard, full Integrated Translation Environment (ITE) support, native Ribbon components, and inclusion of the Boost library. C++Builder 2010 followed in August 2009, adding in particular the touch and gesture support newly introduced to the VCL and a C++-specific class explorer. C++Builder XE was released in August 2010.

C++Builder Embarcadero C++Builder

Embarcadero moved to a different versioning scheme in 2010: rather than numbers, releases were branded “XE”. C++Builder XE was released in August 2010, C++Builder XE2 in August 2011, and C++Builder XE3 in August 2012. No major changes were included in those three years beyond bug fixes and the inclusion of FireMonkey for creating cross-platform GUIs.

C++Builder Embarcadero C++Builder

In April 2013, C++Builder XE4 was released, including a 64-bit Windows compiler based on Clang 3.1. The 32-bit compiler is still based on Embarcadero’s older technology.

Bruce Perens Academia

Perens is finishing a three-year grant from the Competence Fund of Southern Norway

Public administration Academic field

Formally, official academic distinctions were made in the 1910s and 1890s, respectively.

Public administration Academic field

The goals of the field of public administration are related to the democratic values of improving equality, justice, security, efficiency, effectiveness of public services usually in a non-profit, non-taxable venue; business administration, on the other hand, is primarily concerned with taxable profit

Public administration Academic field

Some theorists advocate a bright-line differentiation of the professional field from related academic disciplines like political science and sociology; nonetheless, it remains interdisciplinary in nature.

Public administration Academic field

One public administration scholar, Donald Kettl, argues that “…public administration sits in a disciplinary backwater”, because “…[f]or the last generation, scholars have sought to save or replace it with fields of study like implementation, public management, and formal bureaucratic theory”

Public administration Academic field

Public administration theory is the domain in which discussions of the meaning and purpose of government, the role of bureaucracy in supporting democratic governments, budgets, governance, and public affairs take place.

James Madison University Academics

James Madison University is considered “More Selective” by the Carnegie Foundation for the Advancement of Teaching. For the Class of 2012, the university received more than 22,648 applications, for an entering freshmen class of 4,325 for the 2012-2013 academic year. The retention rate for the 2011-2012 freshman class was 91.4%, and the ratio of female to male students is 60/40. Approximately 28% of all students are from out-of-state, representing all 50 states and 89 foreign countries.

James Madison University Academics

Total enrollment beginning the Fall 2012 academic year was 19,927; 18,392 undergraduates and 1,820 graduate students

James Madison University Academics

It is the third academic society in the United States to be organized around recognizing academic excellence, and is the oldest all-discipline honor society.

Arena (software) Academic software editions

Academic Lab Package – Academic version of the commercially available Enterprise Suite. This 30-or-more-seat license is for academic, non-commercial use. Universities that adopt the Simulation with Arena textbook are eligible for additional offers and benefits.

Arena (software) Academic software editions

Research Edition – The same edition as the Academic Lab Package, but licensed for individual academic researchers. The same academic usage guidelines apply.

Arena (software) Academic software editions

Student Edition – A free edition intended for students currently learning the software; it is available for download and is also included with many simulation textbooks. This version is perpetual but limited in model size, and is intended for academic, non-commercial use. Universities that use the software may make copies to distribute to students for installation on their personal machines.

Arizona State University Academic programs

List of colleges and schools of Arizona State University

Arizona State University Academic programs

ASU offers over 250 majors to undergraduate students and more than 100 graduate programs leading to numerous master’s and doctoral degrees in the liberal arts and sciences, design and arts, engineering, journalism, education, business, law, nursing, public policy, technology, and sustainability.

Bertrand Meyer Education and academic career

Bertrand Meyer received the equivalent of a bachelor’s degree in engineering from the École polytechnique in Paris, a master’s degree from Stanford University, and a PhD from the Université de Nancy in Nancy, Meurthe-et-Moselle. He had a technical and managerial career for nine years at Électricité de France, and for three years was on the faculty at the University of California, Santa Barbara.

Bertrand Meyer Education and academic career

Since October 2001, he has been Professor of Software Engineering at ETH Zürich, the Swiss Federal Institute of Technology, where he pursues research on building trusted components (reusable software elements) with a guaranteed level of quality.

Bertrand Meyer Education and academic career

His other activities include being adjunct professor at Monash University in Melbourne, Australia (1998–2003) and membership of the French Academy of Technologies

David Parnas Stance on academic evaluation methods

In his November 2007 paper Stop the Numbers Game, he elaborates several reasons why the number-based academic evaluation systems used in many fields by universities all over the world (whether oriented to the number of publications or the number of citations each receives) are flawed: instead of generating more scientific advances, they lead to knowledge stagnation.

Kent State University Academic divisions

Architecture and Environmental Design

Arts (focusing on fine/performing arts and fashion-related studies)

Education, Health, and Human Services

The university has an Honors College and interdisciplinary programs in Biomedical Sciences, Financial Engineering, and Information Architecture and Knowledge Management.

Boston University Academics

Boston University offers bachelor’s degrees, master’s degrees, and doctorates, and medical, dental, and law degrees through its 18 schools and colleges. Each school and college at the university has a three-letter abbreviation, which is commonly used in place of its full name. For example, the College of Arts and Sciences is commonly referred to as CAS, the School of Management as SMG, the School of Education as SED, etc.

Boston University Boston University Academy

Boston University Academy is a private high school operated by Boston University. Founded in 1993, the school sits within the university’s campus, and students are offered the opportunity to take university courses.

Franklin University Academics

Franklin University offers degrees at the associate, bachelor, and master’s levels, including joint BS/MS programs. Many of these programs can be completed entirely online.

Franklin University Academia

Youssef Betshahbazadeh, Professor of Mathematics at Del Mar College

David Darst, Associate Professor of Accounting at Central Ohio Technical College

Kevin Dudley ’89, Affiliated Professor of African-American Studies at Trinity Lutheran Seminary

H. Macy Favor, Jr. ’81, Adjunct Professor at Capital University Law School

Lisa Ghiloni, Assistant Professor and Assistant Dean at Chamberlain College of Nursing

Larry T. Hunter ’01, Director of Institutional Research at Ohio Dominican University

Andrea M. Karkowski, Professor of Psychology at Capital University

Judith Kimchi-Woods, Campus Dean at Chamberlain College of Nursing

Kelly Phillips, Assistant Professor at University of Toledo

Carl E. Priode, Associate Professor of Electromechanical Engineering Technology at Shawnee State University

Michael Rose, Professor of Psychology at Georgia College and State University

Tonya Sapp ’04 ’06, Capital University Law School Fellow

Lacey Shepard, Surgical Technology Program Director at John A. Logan College

Ronald L. Snyder, Associate Professor of Business at Southern Wesleyan University

Michael Southworth, Adjunct Professor at Case Western Reserve University

Randy Storms, Associate Professor of Workforce and Community Development at North Central State College

Kevin M. Sullivan, Associate Research Professor of Epidemiology at Emory University

Jacqueline E. Wyatt ’71, Professor of Computer Information Systems at Middle Tennessee State University

Robert L. Armbrust ’74, Director of Academic Affairs, Kansas City Campuses, University of Phoenix

Agricultural engineering – Academic programs in Agricultural and Biosystems Engineering

Below is a listing of known academic programs that offer Bachelor’s degrees (B.S. or B.S.E.) in what ABET terms “Agricultural Engineering”, “Biosystems Engineering”, “Biological Engineering”, or similarly named programs. ABET accredits college and university programs in the disciplines of applied science, computing, engineering, and engineering technology.

Institution Department Web site Email

Auburn University Biosystems Engineering www.eng.auburn.edu/ taylost at auburn.edu

California Polytechnic State University – San Luis Obispo BioResource and Agricultural Engineering www.brae.calpoly.edu/ ksolomon at calpoly.edu

Cornell University Biological and Environmental Engineering bee.cornell.edu/ baa7 at cornell.edu

Dalhousie University Department of Engineering (Agricultural Campus) www.dal.ca Cathy.Wood at Dal.Ca

Iowa State University Agricultural and Biosystems Engineering www.abe.iastate.edu/ estaben at iastate.edu

Kansas State University Biological and Agricultural Engineering www.bae.ksu.edu/home contact-l at bae.ksu.edu

Louisiana State University Biological and Agricultural Engineering www.bae.lsu.edu DConstant at agcenter.lsu.edu

McGill University Department of Bioresource Engineering www.mcgill.ca/bioeng/ valerie.orsat at mcgill.ca

Michigan State University Biosystems and Agricultural Engineering www.egr.msu.edu/bae/ srivasta at egr.msu.edu

Mississippi State University Agricultural and Biological Engineering www.abe.msstate.edu jpote at mafes.msstate.edu

North Carolina State University Biological and Agricultural Engineering www.bae.ncsu.edu/ robert_evans at ncsu.edu

North Carolina Agricultural & Technical State University Chemical, Biological and Bioengineering Department www.ncat.edu/ sbknisle at ncat.edu

North Dakota State University Agricultural and Biosystems Engineering www.ndsu.edu/aben/ aben at ndsu.edu

Ohio State University Food, Agricultural and Biological Engineering fabe.osu.edu shearer.95 at osu.edu

Oklahoma State University Biosystems and Agricultural Engineering biosystems.okstate.edu daniel.thomas at okstate.edu

Oregon State University Biological & Ecological Engineering Department bee.oregonstate.edu/ john.bolte at oregonstate.edu

Penn State University Agricultural & Biological Engineering abe.psu.edu/ hzh at psu.edu

Purdue University Agricultural and Biological Engineering www.purdue.edu/abe engelb at purdue.edu

South Dakota State University Agricultural and Biosystems Engineering www.sdstate.edu/abe/ Van.Kelley at sdstate.edu

Texas A&M University Biological and Agricultural Engineering baen.tamu.edu/ info at baen.tamu.edu

University of Arizona Agricultural and Biosystems Engineering cals.arizona.edu/abe slackd at u.arizona.edu

University of Arkansas Biological and Agricultural Engineering www.baeg.uark.edu/ lverma at uark.edu

University of California, Davis Biological and Agricultural Engineering bae.engineering.ucdavis.edu/ rhpiedrahita at ucdavis.edu

University of Florida Agricultural and Biological Engineering www.abe.ufl.edu/ dhaman at ufl.edu

University of Georgia Agricultural Engineering / Biological Engineering www.engr.uga.edu donleo at engr.uga.edu

University of Illinois Agricultural and Biological Engineering abe.illinois.edu/ kcting at illinois.edu

University of Kentucky Biosystems and Agricultural Engineering jokko.bae.uky.edu/BAEHome.asp sue.nokes at uky.edu

University of Nebraska-Lincoln Biological Systems Engineering bse.unl.edu mriley3 at unl.edu

University of Manitoba Biosystems Engineering umanitoba.ca/ headbio at cc.umanitoba.ca

University of Minnesota Bioproducts and Biosystems Engineering www.bbe.umn.edu/ nieber at umn.edu

University of Missouri Biological Engineering bioengineering.missouri.edu/ TanJ at missouri.edu

University of Saskatchewan Biosystems Engineering and Soil Science www.engr.usask.ca Pat.Hunchak at usask.ca

University of Tennessee Biosystems Engineering & Soil Science bioengr.ag.utk.edu edrumm at utk.edu

University of Wisconsin Biological Systems Engineering bse.wisc.edu/ rjstraub at wisc.edu

Utah State University Biological Engineering be.usu.edu ron.sims at usu.edu

Virginia Polytechnic University Biological Systems Engineering www.bse.vt.edu mlwolfe at vt.edu

Central and South America

Universidad Autónoma Chapingo, Mexico Irrigation Department portal.chapingo.mx/irrigacion/

National University of Colombia Agricultural Engineering www.ing.unal.edu.co/

Surcolombiana University, Colombia Agricultural Engineering www.usco.edu.co/pagina/inicio/

University of Campinas, Brazil Agricultural Engineering www.unicamp.br/unicamp/

University of São Paulo, Brazil Biosystems Engineering www.en.esalq.usp.br/

Federal University of Pelotas, Brazil Agricultural Engineering

Ataturk University, Turkey Agricultural Structures and Irrigation www.atauni.edu.tr

Harran University, Turkey Agricultural Structures and Irrigation ziraat.harran.edu.tr

Leibniz University Hannover, Germany Biosystems and Horticultural Engineering www.bgt-hannover.de/

K.U. Leuven, Belgium Department of Biosystems www.kuleuven.be/english

University College Dublin, Ireland Biosystems Engineering www.ucd.ie/eacollege/biosystems/

University of Hohenheim, Germany Institute of Agricultural Engineering www.uni-hohenheim.de/

China Agricultural University Agricultural Engineering english.cau.edu.cn/col/col5470/index.html

Northwest Agriculture and Forestry University, China Agricultural Soil and Water Engineering en.nwsuaf.edu.cn/

Shanghai Jiaotong University, China Biological Engineering; Food Science and Engineering en.sjtu.edu.cn/academics/undergraduate-programs/

Xi’an Jiaotong University, China Energy and Power Engineering nd.xjtu.edu.cn/web/English.htm

Yunnan Agricultural University, China Agricultural Water-Soil Engineering www.at0086.com/YUNNAU

Zhejiang University, China Biosystems Engineering and Food Science www.caefs.zju.edu.cn/en/index.asp

Arcade system board

An arcade system board is a dedicated computer system created for the purpose of running video arcade games. Arcade system boards typically consist of a main system board with any number of supporting boards.

Arcade system board – Jaleco

Jaleco Tetris Plus 2 (1997-2000)

Arcade system board – Konami

Konami Dual 68000 Based Hardware (1986-1988)

Konami Chequered Flag Based Hardware (1987-1992)

Arcade system board – Namco

Namco System Super 23 Evolution 2 (1999)

Arcade system board – Nintendo

Unnamed Wii-based arcade board (2008)

Unnamed Nintendo Wii U-based arcade board (2013)

Arcade system board – Taito

Taito Nunchacken Hardware (1985)

Taito Super Qix Hardware (1986-1987)

Taito Kick and Run Hardware (1986)

Taito Bubble Bobble Hardware (1986)

Taito Darius 2 Twin Screen Hardware (1989-1991)

Taito Bonze Adventure Hardware (1988-1994)

Spam (electronic) – Academic Search

Researchers from the University of California, Berkeley and Otto von Guericke University Magdeburg (OvGU) demonstrated that most web-based academic search engines, especially Google Scholar, are not capable of identifying spam attacks.

Time 100 – Academic research

The Time 100 has been cited in an academic analysis by Craig Garthwaite and Tim Moore, economists at the University of Maryland, College Park. In light of Oprah Winfrey holding the record for most appearances on the Time 100, the economists decided to measure whether Winfrey was influential enough to decide a U.S. presidential election by examining the impact of her endorsement of Barack Obama for president. The economists wrote the following:

Oprah Winfrey is a celebrity of nearly unparalleled influence. She has been named to Time magazine’s list of the 100 most influential people six times—more than any other individual, including the Dalai Lama, Nelson Mandela, Bill Gates, George Clooney and Rupert Murdoch. She was named one of the 100 most influential people of the 20th Century, an honor shared with Albert Einstein, Mohandas Karamchand (Mahatma) Gandhi, and Franklin D. Roosevelt. She was only one of four people who were included on these lists in both the 20th and 21st Century. The others included Mandela, Gates, and Pope John Paul II…

The scope of Winfrey’s influence creates a unique opportunity to examine the effect of endorsements on political outcomes.

The economists found a statistically significant correlation between the number of Winfrey fans in a geographic region (as estimated by the sales of her magazine and book club selections) and the number of votes Obama received in that region during the race for the 2008 Democratic nomination for president.

Dubai School of Government – Academic programs

On December 16, 2009, the school graduated its first cohort of 32 students in an intensive, one-year Master of Public Administration program

Dubai School of Government – Academic programs

DSG offers an Executive Diploma in Public Administration (EDPA) in partnership with the Lee Kuan Yew School of Public Policy at the National University of Singapore.

Dubai School of Government – Academic programs

All DSG academic programs are accredited by the UAE Ministry of Higher Education and Scientific Research.

Renren – Academic Studies

The results have produced several academic publications

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

2. 9/11 (2001) Another inauspicious start to the decade

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

3. Obama- (2008) The US President’s name as a ‘root’ word or ‘word stem’

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

4. bailout (2008) The Bank Bailout was but Act One of the crisis

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

6. derivative (2007) Financial instrument or analytical tool that engendered the Meltdown

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

7. google (2007) Founders misspelled actual word ‘googol’

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

9. Chinglish (2005) The Chinese-English Hybrid language growing larger as Chinese influence expands

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

10. tsunami (2004) Southeast Asian Tsunami took 250,000 lives

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

11. H1N1 (2009) More commonly known as Swine Flu

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

12. subprime (2007) Subprime mortgages were another bubble to burst

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

13. dot.com (2000) The Dot.com bubble engendered no lifelines, no bailouts

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

14. Y2K (2000) The Year 2000: all computers would turn to pumpkins at the stroke of midnight

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

15. misunderestimate (2002) One of the first and most enduring of Bushisms

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

16. chad (2000) Those punched-card fragments from Florida ballots upon which the presidency would turn

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

17. twitter (2008) A quarter of a billion references on Google

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

18. WMD (2002) Iraq’s Weapons of Mass Destruction

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

22. sustainable (2006) The key to ‘Green’ living where natural resources are never depleted

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

23. Brokeback (2004) New term for ‘gay’ from the Hollywood film ‘Brokeback Mountain’

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

24. quagmire (2004) Would Iraq War end up like Vietnam, another ‘quagmire’?

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

25. truthiness (2006) Stephen Colbert’s addition to the language appears to be a keeper

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

1. Rise of China The biggest story of the decade, outdistancing the No. 2 Internet story by 400%.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

2. Iraq War The buildup, the invasion, the hunt for the WMDs, and the Surge were top in print and electronic media outlets.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

4. War on Terror President George W. Bush’s response to 9/11.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

5. Death of Michael Jackson A remarkably high ranking considering that MJ’s death occurred in the final year of the decade.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

6. Election of Obama to US presidency The rallying cries of ‘hope’ and ‘Yes, we can!’ resulting in the historic election of an African-American to the US presidency.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

7. Global Recession of 2008/9 The on-going world economic restructuring as opposed to the initial ‘economic meltdown’ or ‘financial tsunami’.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

8. Hurricane Katrina New Orleans was devastated when the levees collapsed; scenes of death and destruction shocked millions the world over.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

10. Economic Meltdown/Financial Tsunami The initial shock of witnessing some 25% of the world’s wealth melting away seemingly overnight.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

11. Beijing Olympics The formal launch of China onto the world stage.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

12. South Asian Tsunami The horror of 230,000 dead or missing, washed away in a matter of minutes, was seared into the consciousness of the global community.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

13. War against the Taliban Lands controlled by the Taliban served as a safe haven from which al Qaeda would launch its terrorist attacks.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

14. Death of Pope John Paul II The largest funeral in recent memory with some 2,000,000 pilgrims in attendance.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

15. Osama bin-Laden eludes capture Hesitation to attack Tora Bora in 2002 has led to the continuing manhunt.

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

2. Financial Tsunami (2008) One quarter of the world’s wealth vanishes seemingly overnight

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

3. ground zero (2001) Site of the 9/11 terrorist attack in New York City

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

4. War on Terror (2001) Bush administration’s response to 9/11

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

5. Weapons of Mass Destruction (2003) Bush’s WMDs never found in Iraq or the Syrian desert

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

6. swine flu (2008) H1N1, please, so as not to offend the pork industry or religious sensitivities!

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

7. “Let’s Roll!” (2001) Todd Beamer’s last words before Flight 93 crashed into the PA countryside

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

8. Red State/Blue State (2004) Republican or Democratic control of states

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

9. carbon footprint (2007) How much CO₂ does an activity produce?

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

12. Category 4 (2005) Force of Hurricane Katrina hitting New Orleans’ seawalls and levees

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

14. “Stay the course” (2004) Dubya’s oft-stated guidance for Iraq War

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

16. “Jai Ho!” (2008) Shout of joy from ‘Slumdog Millionaire’

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

17. “Out of the Mainstream” (2003) Complaint about any opposition’s political platform

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

18. Cloud computing (2007) Using the Internet as a large computational device

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

19. threat fatigue (2004) One too many terrorist threat alerts

Global Language Monitor – Top Words,Stories, Phrases and Names of the Decade

20. same-sex marriage (2003) Marriage of gay couples

Linus Torvalds – Academics

In 1997, Torvalds received his master’s degree (Laudatur grade) from the Department of Computer Science at the University of Helsinki. Two years later he received an honorary doctorate from Stockholm University, and in 2000 he received the same honor from his alma mater.

Linus Torvalds – Academics

In August 2005, Torvalds received the Vollum Award from Reed College.

Google+ – Academic research

Since Google+ was launched, it has attracted attention in academic research. For instance, researchers from UC Berkeley crawled about 80 daily snapshots of Google+ and studied its early evolution. They found that Google+’s early evolution can be roughly divided into three phases. Moreover, they found that user attributes (e.g., school, major, employer) have a significant impact on Google+’s social structure and evolution.

Privacy issues of social networking sites – Academic studies

As technology continues to advance, the critical issue of internet users’ privacy and private-information-sharing behavior has been thoroughly researched. The global threat of internet privacy violations should expedite the spread of awareness and regulation of online privacy, especially on social networking sites. Studies have shown that people’s belief in their right to privacy is the most pivotal predictor of their attitudes concerning online privacy.

Molecular nanotechnology – Study and recommendations by the U.S. National Academy of Sciences

The U.S. National Academy of Sciences released the report of a study of molecular manufacturing as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative. The study committee reviewed the technical content of Nanosystems and, in its conclusion, states that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence.

Molecular nanotechnology – Study and recommendations by the U.S. National Academy of Sciences

“Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time

Palomar College – Academic programs

Palomar College offers more than 250 associate’s degrees and certificate programs, and also offers programs for students wishing to transfer to many different four-year universities, including institutions in the University of California and California State University systems.

Palomar College – Academic programs

Arts, Media, Business and Computer Systems

Palomar College – Academic programs

Career, Technical and Extended Education

Palomar College – Academic programs

Emergency Medical Education

Glenn Reynolds – Academic publications

As a law professor, Reynolds has written for the Columbia Law Review, the Virginia Law Review, the University of Pennsylvania Law Review, the Wisconsin Law Review, the Northwestern University Law Review, the Harvard Journal of Law and Technology, Law and Policy in International Business, Jurimetrics, and the High Technology Law Journal, among others.

Quality management – Academic resources

International Journal of Productivity and Quality Management, ISSN 1746-6474, Inderscience

Quality management – Academic resources

International Journal of Quality & Reliability Management, ISSN: 0265-671X, Emerald Publishing Group

History of broadcasting – The 2000s (decade)

The 2000s (decade) saw the introduction of digital radio and direct broadcasting by satellite (DBS) in the USA.

History of broadcasting – The 2000s (decade)

Digital radio services, except in the United States, were allocated a new frequency band in the range of 1,400 MHz

History of broadcasting – The 2000s (decade)

In addition, a consortium of companies received FCC approval for In-Band On-Channel digital broadcasts in the United States, which use the existing mediumwave and FM bands to provide CD-quality sound. However, early IBOC tests showed interference problems with adjacent channels, which has slowed adoption of the system.

History of broadcasting – The 2000s (decade)

In Canada, the Canadian Radio-television and Telecommunications Commission plans to move all Canadian broadcasting to the digital band and close all mediumwave and FM stations.

History of broadcasting – The 2000s (decade)

European and Australian stations have begun digital broadcasting (DAB). Digital radios began to be sold in the United Kingdom in 1998.

History of broadcasting – The 2000s (decade)

Regular shortwave broadcasts using Digital Radio Mondiale (DRM), a digital broadcasting scheme for shortwave and mediumwave bands, have begun. The system renders normally scratchy international broadcasts clear, at near-FM quality, while requiring much lower transmitter power, and carries broadcasts in many languages.

History of broadcasting – The 2000s (decade)

In Sri Lanka in 2005 when Sri Lanka celebrated 80 years in Broadcasting, the former Director-General of the Sri Lanka Broadcasting Corporation, Eric Fernando called for the station to take full advantage of the digital age – this included looking at the archives of Radio Ceylon.

History of broadcasting – The 2000s (decade)

Ivan Corea asked the President of Sri Lanka, Mahinda Rajapakse to invest in the future of the SLBC.

Cultural studies – Academic reception

Cultural Studies is not a unified theory, but a diverse field of study encompassing many different approaches, methods and academic perspectives. As in any academic discipline, Cultural Studies academics frequently debate among themselves. However, some academics from other fields have criticised the discipline as a whole. It has been popular to dismiss Cultural Studies as an academic fad.

Legal psychology – Academics and research

Many legal psychologists work as professors in university psychology departments, criminal justice departments or law schools

University of Glasgow – Academic Senate

The Academic Senate (or University Senate) is the body which is responsible for the management of academic affairs, and which recommends the conferment of degrees by the Chancellor. Membership of the Senate comprises all Professors of the University, as well as elected academic members, representatives of the Students’ Representative Council, the Secretary of Court and directors of University services (e.g. Library). The President of the Senate is the Principal.

University of Glasgow – Academic Senate

The Clerk of Senate, who has status equivalent to that of a Vice-Principal and is a member of the Senior Management Group, has responsibility for regulation of the University’s academic policy, such as dealing with plagiarism and the conduct of examinations. Notable Clerks of Senate have included the chemist, Professor Joseph Black; Professor John Anderson, father of the University of Strathclyde; and the economist, Professor John Millar.

Conscientiousness – Academic and workplace performance

Conscientiousness is significantly related to successful academic performance in students and to workplace performance among managers and workers.

CADES

CADES (Computer Aided Design and Evaluation System) was a software engineering repository system produced to support the development of the VME/B Operating System for the ICL New Range – subsequently 2900 – computers.

CADES

From its earliest days, VME/B was developed with the aid of CADES, which was built for the purpose using an underlying IDMS database (latterly upgraded to IDMS(X)). CADES was not merely a version control system for code modules: it was intended to manage all aspects of the software lifecycle from requirements capture through to field maintenance.

CADES

It was the design of CADES that paved the way for the Alvey Project in IPSE (Integrated Project Support Environments) and Process Control Engines.

CADES

Because CADES was used for more than 20 years throughout the development of a large software engineering project, the data collected has been used as input to a number of studies of software evolution.

CADES – Early History of CADES

CADES was conceived in 1970 by David John Pearson and Brian Warboys when working for ICL’s New Range Operating System Technology Centre, OSTECH, in Kidsgrove. Pearson, a theoretical physicist by training, had become a computer simulation specialist and joined ICL in 1968 after working in finite-element modelling and simulation research at Imperial College. Warboys had been chief architect for the ICL System 4 multi-access operating system, Multijob.

CADES – Early History of CADES

ICL’s commitment to large-scale software development for the 2900 Series of computers provided the basis for Pearson and Warboys’ early work on a new software development environment, which would address the issues of designer/programmer productivity, design integrity, evaluation and testing, version control, and system regression.

CADES – Early History of CADES

Design specifications written in SDL were processed by the Design Analyser, before being input to the CADES Product Database, a design and implementation database supporting its own query language and forming the kernel of the Product Information System.

CADES – Early History of CADES

The intention was that these designs could be evaluated/simulated using the Animator, and S3 implementation code automatically generated from them using the Environment Processor. Build generation and version control were also based on the Product Database, resulting in a highly disciplined approach to new system builds. System regression was therefore controlled from a very early stage in the software life-cycle.

CADES – Fundamentals of CADES

In trying to control all the concurrent developments of VME/B, each development was sub-divided for easier management. This is analogous to a book, where chapters represent significant components within VME (kernel, file store, etc.) and the paragraphs within each chapter represent sub-systems. Development activity on each sub-system created specific versions to manage.
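The book analogy above can be sketched as a small hierarchy. The Python below is purely illustrative; the class names, component names and version labels are hypothetical, not actual CADES structures:

```python
# Illustrative model of the CADES decomposition: a system ("book") is
# divided into chapters (major components such as the kernel or file
# store); each chapter's paragraphs are sub-systems, and each
# sub-system carries its own version history.

from dataclasses import dataclass, field


@dataclass
class Paragraph:
    """A sub-system within a major component."""
    name: str
    versions: list = field(default_factory=list)

    def new_version(self, label):
        # Each development activity on the sub-system yields a new version.
        self.versions.append(label)


@dataclass
class Chapter:
    """A major component of the system, e.g. kernel or file store."""
    name: str
    paragraphs: dict = field(default_factory=dict)


# Build a tiny "book": two chapters, one sub-system with two versions.
book = {"kernel": Chapter("kernel"), "file store": Chapter("file store")}
sched = Paragraph("scheduler")
book["kernel"].paragraphs["scheduler"] = sched
sched.new_version("v1")
sched.new_version("v2")
print(sched.versions)  # prints ['v1', 'v2']
```

The point of the structure is that versioning attaches to the sub-system (paragraph) level, so each concurrent development line can be tracked and managed independently.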

CADES – Fundamentals of CADES

This, coupled with a suite of tools, the use of SDL as the development language, version history, and the concept of trusted source code (that is, code that has passed QA and subsequently resides within the CADES filestore), improved development time whilst providing satisfactory audit trails and QA processes.

CADES – Fundamentals of CADES

In a similar fashion CADES also retained information with regard to constant values (aka literals), user-defined types and user-defined structures.

CADES – Development using CADES

Development under CADES was achieved using a suite of tools known as MODPRO (Module Processing), which acted as an interface (or broker) between the developer and CADES. These tools enabled the developer to focus on development rather than on administrative, QA or SCM tasks. It was not necessary to know how to manipulate data within CADES; the application generated the required DNL (Data Navigation Language) to achieve the required results.

CADES – Development using CADES

Then, again drawing on information from CADES, the MODPRO tool EPETC (aka Environment Processor, or EP etc.) enabled the resultant file to be correctly targeted for S3 or SCL compilation.

CADES – Development using CADES

• Detailed Holon information using CHED (CADES Holon Environment Details),

CADES – Development using CADES

• Interaction with CADES using DIL (Database Interface Language, used to produce DNL),

CADES – Development using CADES

• Report production, using CRP (CADES Report Producer),

CADES – Development using CADES

• Transfer valid files/code in to or extract out of the secure repository, namely CADES, using XFER.

CADES – Development using CADES

The following illustrates the typical MODPRO development route.

CADES – Further reading

• David Pearson and Brian Warboys “Structural Modelling – A Philosophy” OSTC/IN/40, 31 July 1970

CADES – Further reading

• David Pearson “CADES – Computer-aided development and evaluation system” Computer Weekly, 1973

CADES – Further reading

• David Pearson “The use and abuse of a software engineering system” National Computer Conference, 1979

CADES – Further reading

• B.C. Warboys (25 January 1988). “Extrapolation of lessons from CADES to the present day”. IEE Colloquium on Industrial Impact of Software Engineering: 3.

CADES – Further reading

• R. W. McGuffin, A.E. Elliston, B.R. Tranter, P.N. Westmacott (September 1979). “CADES – software engineering in practice”. IEEE Proceedings 4th International Conference on Software Engineering, Munich, Germany.

CADES – Further reading

• B. Kitchenham (May 1982). “System Evolution Dynamics of VME/B”. ICL Technical Journal: 42–57.

CADES – Further reading

• B. W. Chatters, M. M. Lehman, J. F. Ramil, P. Wernick (2000). “Modelling a software evolution process: a long-term case study”. Software Process: Improvement and Practice 5 (2-3): 91–102. doi:10.1002/1099-1670(200006/09)5:2/3<91::AID-SPIP123>3.0.CO;2-L.

CADES – Further reading

• R.A Snowden (May 1990). “An Introduction To The IPSE 2.5 Project”. ICL Technical Journal 6 (3).

University of California, Berkeley – Academics

The university operates on a semester academic calendar, with the fall semester running from late August through early December and the spring semester from mid-January through mid-May.

Facade engineering

Building facades make a major contribution to the overall aesthetic and technical performance of a building. Specialist facade engineers operate within technical divisions of facade manufacturing companies, while some structural engineers act as facade consultants for architects, building owners, cladding manufacturers and construction managers. Projects can include new buildings and recladding of existing buildings.

Facade engineering

The facade engineer must consider the performance of the facade design with regard to air-tightness, thermal performance (heat losses and solar gains), daylight penetration, acoustic performance, and fire resistance, as well as the overall desired aesthetic for the facade.

Facade engineering

Facade engineering requires a blend of skills ranging from structural engineering through to building physics, architecture, manufacturing, materials science, dynamics, programming, procurement, project management and construction techniques.

Facade engineering

The professional body that looks after the development of the industry is the Society of Facade Engineering.

Leadership studies – Academic Journals

The International Journal of Leadership Studies: Representing the multidisciplinary field of leadership, the IJLS publishes theoretically grounded research that enhances knowledge and understanding of the phenomenon of leadership at all levels within a variety of industries and organizations. It seeks contributions that present leadership from perspectives unique to different cultures, settings, and religions around the world.

Leadership studies – Academic Journals

The International Journal of Servant Leadership: The International Journal of Servant-Leadership is published by Gonzaga University in collaboration with the Larry Spears Center for Servant-leadership.

Leadership studies – Academic Journals

The Journal of Leadership and Organizational Studies: The Journal of Leadership and Organizational Studies is the Official Journal of the Midwest Academy of Management

Leadership studies – Academic Journals

Journal of Leadership Studies: The mission of the Journal of Leadership Studies is to publish leadership research and theoretical contributions that bridge the gap between scholarship and practice and that exemplify critical inquiry into contemporary organizational issues and paradigms

Leadership studies – Academic Journals

Leadership: Leadership is an international, peer-reviewed journal designed to provide an ongoing forum for academic researchers to exchange information, insights and knowledge based on both theoretical development and empirical research on leadership. It will publish original, high quality articles that contribute to the advancement of the study of leadership. The journal will be global in orientation and focus.

Leadership studies – Academic Journals

The Leadership Quarterly: Is an international journal of political, social and behavioral science published in affiliation with the International Leadership Association (ILA).

Leadership studies – Academic Journals

Leadership and Organization Development Journal: The Leadership & Organization Development Journal explores behavioral and managerial issues relating to all aspects of leadership, and of individual and organization development, from a global perspective.

Leadership studies – Academic Journals

Journal of Leadership Education: An international, refereed journal that serves scholars and professional practitioners engaged in leadership education.

Leadership studies – Academic programs

The following is a list of doctoral, masters, and undergraduate degree programs related to the study of leadership. With some notable exceptions (particularly in regard to the list of doctoral programs), this list does not include programs related to specific sub-areas of leadership (e.g., educational leadership, health care leadership, environmental leadership). The programs listed primarily focus on leadership, leadership studies, and organizational leadership.

Leadership studies – Academic programs

Given that the study of leadership is interdisciplinary, leadership-related degree programs are often situated within various colleges, schools, and departments across different university campuses (e.g., schools of education at some universities, business schools at others, and graduate and professional schools at still others).

Swarthmore College – Academic program

Swarthmore’s Oxbridge tutorial-inspired Honors Program allows students to take double-credit seminars from their junior year and often write honors theses

Swarthmore College – Academic program

Unusual for a liberal arts college, Swarthmore has an engineering program; at the end of four years, students are granted a B.S. in Engineering. Other notable programs include minors in peace and conflict studies, cognitive science, and interpretation theory.

Swarthmore College – Academic program

Swarthmore has a reputation as a very academically-oriented college, with 90% of graduates eventually attending graduate or professional school

Swarthmore College – Academic program

Swarthmore is a member of the Tri-College Consortium (or TriCo) with nearby Bryn Mawr College and Haverford College, which allows students from any of the three to cross-register for courses at any of the others. The consortium as a whole is additionally affiliated with the University of Pennsylvania and students are able to cross-register for courses there as well.

Swarthmore College – Academic program

Current students go so far as to sport Swarthmore t-shirts proclaiming, “Anywhere else it would’ve been an A.” Some have pointed out that statistics suggesting grade inflation over the past decades may be exaggerated by reporting practices and by the fact that grades were not given in the Honors program until 1996.

Swarthmore College – Academic program

Since the 1970s, Swarthmore students have won 30 Rhodes Scholarships, 8 Marshall Scholarships, 151 Fulbright Scholarships, 22 Truman Scholarships, 13 Luce Scholarships, 67 Watson Fellowships, 3 Soros Fellowships, 18 Goldwater Scholarships, 84 Mellon Mays Undergraduate Fellowships, 13 National Endowment for the Humanities Grants for Younger Scholars, 234 National Science Foundation Graduate Fellowships, 35 Woodrow Wilson Fellowships, and 2 Mitchell Scholarships.

Orange (colour) – Academia

In the United States and Canada, orange regalia is associated with the field of engineering.

Ecological engineering – Academic curriculum

An academic curriculum has been proposed for ecological engineering, and key institutions across the US are indeed starting programs. Key elements of this curriculum are:

Ecological engineering – Academic curriculum

quantitative ecology,

Ecological engineering – Academic curriculum

Complementing this set of courses are prerequisite courses in physical, biological, and chemical subject areas, and integrated design experiences. According to Matlock et al., the design must identify constraints, characterize solutions in ecological time, and incorporate ecological economics in design evaluation. The economics of ecological engineering has been demonstrated using energy principles for a wetland, and using nutrient valuation for a dairy farm.

Resistor – Resistance decade boxes

A resistance decade box or resistor substitution box is a unit containing resistors of many values, with one or more mechanical switches which allow any one of various discrete resistances offered by the box to be dialed in.
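Because each rotary switch contributes an independent decade, the dialed-in value is simply the sum of each decade's base resistance times its switch position. A minimal Python sketch of that arithmetic (the function name and dial settings are hypothetical, chosen for illustration):

```python
# Model a resistance decade box: one rotary switch per decade, each
# selecting a multiple (0-9) of that decade's base resistance.

def decade_box_resistance(dials):
    """Compute total resistance in ohms.

    `dials` maps a decade's base value in ohms (1, 10, 100, ...) to the
    switch position (0-9) selected on that decade.
    """
    for base, position in dials.items():
        if not 0 <= position <= 9:
            raise ValueError("switch positions run from 0 to 9")
    return sum(base * position for base, position in dials.items())


# Dialing 4 on the 1 kOhm decade, 7 on the 100 Ohm decade,
# and 2 on the 10 Ohm decade:
total = decade_box_resistance({1000: 4, 100: 7, 10: 2})
print(total)  # prints 4720
```

In a physical box each decade's resistors sit in series, so the same summation holds regardless of how many decades the unit offers.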

Embarcadero Delphi

Website www.embarcadero.com/products/delphi

Embarcadero Delphi

Delphi’s compilers use their own Object Pascal dialect of Pascal and generate native code for 32- and 64-bit Windows operating systems, as well as 32-bit Mac OS X, iOS and Android. As of late 2011, support for the Linux operating system was planned by Embarcadero.

Embarcadero Delphi

Delphi was originally developed by Borland as a rapid application development tool for Windows, and as the successor of Borland Pascal. Delphi and its C++ counterpart, C++Builder, shared many core components, notably the IDE and VCL, but remained separate until the release of RAD Studio 2007. RAD Studio is a shared host for Delphi, C++Builder, and others.

Embarcadero Delphi

In 2006, Borland’s developer tools section were transferred to a wholly owned subsidiary known as CodeGear, which was sold to Embarcadero Technologies in 2008.

Embarcadero Delphi – History

Delphi was originally one of many codenames of a pre-release development tool project at Borland. Borland developer Danny Thorpe suggested the Delphi codename in reference to the Oracle at Delphi. One of the design goals of the product was to provide database connectivity to programmers as a key feature and a popular database package at the time was Oracle database; hence, “If you want to talk to [the] Oracle, go to Delphi”.

Embarcadero Delphi – History

As development continued towards the first release, the Delphi codename gained popularity among the development team and beta testing group. However, the Borland marketing leadership preferred a functional product name over an iconic name and made preparations to release the product under the name “Borland AppBuilder”.

Embarcadero Delphi – History

Shortly before the release of the Borland product, Novell AppBuilder was released, leaving Borland in need of a new product name. After much debate and many market research surveys, the Delphi codename became the Delphi product name.

Embarcadero Delphi – History

The chief architect behind Delphi was Anders Hejlsberg, who had developed Turbo Pascal. He was persuaded to move to Microsoft in 1996.

Embarcadero Delphi – History

On February 8, 2006 Borland announced that it was looking for a buyer for its IDE and database line of products, including Delphi, to concentrate on its ALM line.

Embarcadero Delphi – History

On November 14, 2006 Borland transferred the development tools group to an independent subsidiary company named CodeGear, instead of selling it.

Embarcadero Delphi – History

Borland sold CodeGear to Embarcadero Technologies in 2008. Embarcadero retained the CodeGear division created by Borland to identify its tool and database offerings, but identified its own database tools under the DatabaseGear name.

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi (later known as Delphi 1) was released in 1995 for the 16-bit Windows 3.1, and was an early example of what came to be known as Rapid Application Development (RAD) tools.

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi 2, released in 1996, supported 32-bit Windows environments. Delphi 1 was bundled with it for creating 16-bit Windows 3.1 applications. New QuickReport components replaced Borland ReportSmith.

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi 3, released in 1997, added new VCL components encapsulating version 4.71 of the Windows Common Controls (such as Rebar and Toolbar), the TDataset architecture separated from the BDE, DLL debugging, the Code Insight technology, component packages and component templates, DecisionCube and TeeChart components for statistical graphing, WebBroker, ActiveForms, the MIDAS three-tier architecture, and integration with COM through interfaces.

Embarcadero Delphi – Early Borland Years (1995-2004)

Inprise Delphi 4 was released in 1998. The IDE came with a completely overhauled editor and became dockable. The VCL added support for ActionLists, anchors and constraints. Additional improvements were method overloading, dynamic arrays, Windows 98 support, Java interoperability, high-performance database drivers, CORBA development, and Microsoft BackOffice support. It was the last version shipped with Delphi 1 for 16-bit programming.

Embarcadero Delphi – Early Borland Years (1995-2004)

Borland Delphi 5 was released in 1999. It added the concept of frames, parallel development, translation capabilities, an enhanced integrated debugger, XML support, ADO database support, and reference-counted interfaces.

Embarcadero Delphi – Early Borland Years (1995-2004)

In 2001 Borland released a Linux version of Delphi, named Kylix. The IDE was dependent on the Wine libraries rather than Linux’s native system libraries (glibc) in order to get a product out quickly and relatively cheaply. The expense of developing a native glibc version of Kylix, combined with the lack of Linux adoption among programmers at the time, caused sales to go soft, and Kylix was abandoned after version 3. This was the first attempt to add Linux support in the Delphi product family.

Embarcadero Delphi – Early Borland Years (1995-2004)

Kylix used the new CLX cross-platform framework, instead of Delphi’s VCL.

Embarcadero Delphi – Early Borland Years (1995-2004)

Attempts to support both Linux and Windows for cross-platform development were made, and a cross-platform alternative to the VCL known as CLX shipped in 2001 with the release of Delphi 6. This was the second attempt to add Linux support to the Delphi product family (see Kylix above).

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi 6 included the same CLX version (CLX 1) as the first version of Kylix. CLX 1 had been created before Delphi 6; its feature set was based on VCL 5 and lacked some features added to the VCL 6 shipped with Delphi 6.

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi 7, released in August 2002, became the standard version used by more Delphi developers than any other single version. It is one of the most successful IDEs created by Borland because of its stability, speed and low hardware requirements, and remained in active use as of 2011. Delphi 7 added support for Windows XP Themes, and added more support for building Web applications. It was the last version of Delphi which could be used without mandatory software activation.

Embarcadero Delphi – Early Borland Years (1995-2004)

Delphi 8, released December 2003, was a .NET-only release that compiled Delphi Object Pascal code into .NET CIL; the IDE was rewritten for this purpose.

Embarcadero Delphi – Later Borland Years (2004-2008)

The next version, Delphi 2005 (Delphi 9, also Borland Developer Studio 3.0), included Win32 and .NET development in a single IDE, reiterating Borland’s commitment to Win32 developers.

Embarcadero Delphi – Later Borland Years (2004-2008)

In late 2005, Delphi 2006 (Delphi 10, Borland Developer Studio 4.0) was released, combining development of C#, Delphi.NET, Delphi Win32 and C++ (a preview at release, stabilized in Service Pack 1) in a single IDE. It was much more stable than Delphi 8 or Delphi 2005 when shipped, and improved further with the release of service packs and several hotfixes.

Embarcadero Delphi – Later Borland Years (2004-2008)

Turbo Delphi and C++

Embarcadero Delphi – Later Borland Years (2004-2008)

On September 6, 2006 The Developer Tools Group (the working name of the not yet spun off company) of Borland Software Corporation released single-language versions of Borland Developer Studio components, bringing back the Turbo name.

Embarcadero Delphi – Later Borland Years (2004-2008)

Delphi 2007 (Delphi 11), the first version by CodeGear, was released on March 16, 2007.

Embarcadero Delphi – Later Borland Years (2004-2008)

Prism is a separate product line with new releases; Embarcadero Delphi Prism XE2 was released at about the same time as Delphi XE2.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi 2009 (Delphi 12, code named Tiburón), added many new features such as completely reworking the VCL and RTL for full Unicode support, and added generics and anonymous methods for Win32 native development. Support for .NET development was dropped from the mainstream Delphi IDE starting with this version, and was catered for by the new Delphi Prism.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi 2010 (code-named Weaver, aka Delphi 14; there was no version 13) was released on August 25, 2009 and is the second Unicode release of Delphi. It includes a new compiler run-time type information (RTTI) system, support for Windows 7 Direct2D, touch screens and gestures, a source code formatter, debugger visualizers and the option to also have the old-style component palette in the IDE. The new RTTI system makes larger executables than previous versions.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi XE (aka Delphi 2011, code named Fulcrum) was released on August 30, 2010. Support for Amazon EC2 and Microsoft Azure was bundled with it.

Embarcadero Delphi – Embarcadero Years (2008-)

On January 27, 2011 Embarcadero announced the availability of a new Starter Edition which gives independent developers, students and micro businesses a slightly reduced feature set for a price less than a quarter of that of the next-cheapest version.

Embarcadero Delphi – Embarcadero Years (2008-)

On September 1, 2011 Embarcadero released RAD Studio XE2 (code-named Pulsar) which included Delphi XE2, C++Builder, Prism XE2 and RadPHP XE2.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi XE2 natively supports 64-bit Windows (except the starter edition), in addition to the long-supported 32-bit versions, with some backwards compatibility. Applications for 64-bit platforms can be compiled, but not tested or run, on the 32-bit platform. The XE2 IDE cannot debug 64-bit programs on Windows 8 and above.

Embarcadero Delphi – Embarcadero Years (2008-)

Embarcadero says that Linux Operating System support “is being considered for the roadmap”, as is Android, and that they are “committed to ..

Embarcadero Delphi – Embarcadero Years (2008-)

iOS development works only with Xcode 4.2.1 and lower, OS X 10.7 and lower, and the iOS SDK 4.3 and earlier. This limitation will be removed in the 2013 release of Delphi (and RAD Studio), which will support iOS development natively.

Embarcadero Delphi – Embarcadero Years (2008-)

On September 4, 2012 Embarcadero released RAD Studio XE3 which included Delphi XE3, and C++Builder.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi XE3 natively supports both 32-bit and 64-bit editions of Windows (including Windows 8), and provides support for Mac OS X with the Firemonkey 2/FM² framework. iOS support was dropped with XE3 release initially (with intent to add support back in with a separate product – Mobile Studio), but applications can continue to be targeted to that platform by developing with Delphi XE2.

Embarcadero Delphi – Embarcadero Years (2008-)

On April 22, 2013 Embarcadero released RAD Studio XE4 which included Delphi XE4, and C++Builder.

Embarcadero Delphi – Embarcadero Years (2008-)

Delphi XE4 is the first release of the FireMonkey mobile platform, featuring cross-platform Mobile Application development for the iOS Simulator and iOS Devices.

Embarcadero Delphi – Embarcadero Years (2008-)

In this version Embarcadero introduces two new compilers for Delphi Mobile Applications (the Delphi Cross Compiler for the iOS Simulator and the Delphi Cross Compiler for the iOS Devices). These compilers significantly differ from the Win64 desktop compiler as they do not support COM, inline assembly of CPU instructions, and six older string types such as PChar.

Embarcadero Delphi – Embarcadero Years (2008-)

The new mobile compilers advance the notion of eliminating pointers. The new compilers require an explicit style of marshalling data to/from external APIs and libraries.

Embarcadero Delphi – Embarcadero Years (2008-)

The Delphi XE4 Run-Time Library (RTL) is optimized for 0-based, read-only (immutable) Unicode strings, which cannot be indexed for the purpose of changing their individual characters. The RTL also adds status-bit-based exception routines for ARM CPUs that do not generate exception interrupts.
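Python strings happen to follow the same model, so a short Python sketch (used purely as an analogy here, since the compilers above are Delphi-specific) can illustrate the 0-based, immutable behavior described:

```python
s = "hello"

# Indexing is 0-based and read-only: you can inspect a character...
first = s[0]                      # "h"

# ...but assigning through an index raises a TypeError,
# so "changing" a character means building a new string.
try:
    s[0] = "H"
    mutated_in_place = True
except TypeError:
    mutated_in_place = False

s = "H" + s[1:]                   # construct a fresh string instead

print(first, mutated_in_place, s)  # h False Hello
```

The practical consequence is the same in both languages: in-place character edits become concatenations or slice-and-rebuild operations.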

Embarcadero Delphi – Embarcadero Years (2008-)

On September 12, 2013 Embarcadero released RAD Studio XE5, which includes Delphi XE5 and C++Builder.

Embarcadero Delphi – Embarcadero Years (2008-)

It adds support for Android (specifically: ARM v7 devices running Gingerbread (2.3.3-2.3.7), Ice Cream Sandwich (4.0.3-4.0.4) and Jelly Bean (4.1.x, 4.2.x, 4.3.x)) and iOS 7.

Embarcadero Delphi – Plans

Embarcadero makes available a “roadmap” of plans. As of April 2012 a roadmap for RAD Studio, Delphi and C++Builder was available. The roadmap appears to have been posted on or before September 2009.

Embarcadero Delphi – Editions and prices

Embarcadero publishes feature matrices summarising the differences in functionality.

Embarcadero Delphi – Distinguishing features

Delphi supports rapid application development (RAD) with features such as an application framework and a visual window layout designer that reduce application prototyping time.

Embarcadero Delphi – Distinguishing features

Delphi supports rapid native cross-compilation.

Embarcadero Delphi – Distinguishing features

Delphi uses the Pascal-based programming language called Object Pascal, and compiles Delphi source code into native x86 code.

Embarcadero Delphi – Distinguishing features

Database connectivity is supported, and Delphi supplies several database components. The Visual Component Library (VCL) includes many database-aware and database access components.

Embarcadero Delphi – Distinguishing features

Later versions have included upgraded and enhanced Runtime Library routines provided by the community group FastCode, established in 2003.

Embarcadero Delphi – Advantages

Delphi is a strongly typed high-level programming language, intended to be easy to use and originally based on the earlier Object Pascal language.

Embarcadero Delphi – Advantages

Strings can be concatenated by using the ‘+’ operator, rather than using functions. For dedicated string types the programmer does not have to handle memory management as Delphi’s memory manager handles this. The improved memory manager introduced with Borland Developer Studio 2006 provides functions to locate memory leaks.

Embarcadero Delphi – Advantages

The language is suitable for rapid application development (RAD), and Delphi includes an integrated development environment (IDE).

Embarcadero Delphi – Advantages

The quick optimizing single pass compiler can compile to a single executable, simplifying distribution and eliminating DLL version issues. Delphi can also generate standard DLLs, ActiveX DLLs, COM automation servers and Windows services.

Embarcadero Delphi – Advantages

The Delphi IDEs since Delphi 2005 increasingly support refactoring features such as method extraction and the possibility to create UML models from the source code or to modify the source through changes made in the model.

Embarcadero Delphi – Advantages

Delphi has large communities on Usenet and the web (e.g. newsgroups.codegear.com) which help solve the problems of individual developers. Many CodeGear employees actively participate in those communities. The volunteer team TeamB also helps out.

Embarcadero Delphi – Advantages

Each new release of Delphi attempts to be as compatible as possible with earlier versions, so that already-developed software and libraries can be retained. Incompatibility necessarily arises as new functionality is added, e.g., with support by Firemonkey of other platforms than Windows.

Embarcadero Delphi – Limitations

The design of the standard class libraries (VCL/RTL) had become somewhat dated and restrictive. In 2011, as part of Delphi XE2, Embarcadero released a new compiler and a cross-platform VCL replacement called FireMonkey, based on Direct3D and OpenGL. FireMonkey runs on platforms other than Windows and supports their features, but is not fully backwards-compatible with VCL applications.

Embarcadero Delphi – RAD Studio

Embarcadero sells RAD Studio, a suite of development tools which consists of Delphi, C++Builder, Embarcadero Prism and HTML5 Builder. Like Delphi, there are different editions of RAD Studio: Professional edition, Enterprise edition, Ultimate edition and Architect edition.

Embarcadero Delphi – InterBase

InterBase integrates natively with Delphi and C++Builder for client/server or embedded development, and can be accessed from all major languages and platforms via database connection protocols such as ODBC, ADO and ADO.NET, and from Java via the JDBC/ODBC bridge or Java type 4 connectors.

Embarcadero Delphi – JBuilder

A tool for Java development, based on Eclipse since JBuilder 2007.

Embarcadero Delphi – RadPHP (formerly Delphi for PHP)

RadPHP (now superseded by HTML5 Builder) was an IDE for PHP that provided true RAD functionality. It had a form designer similar to that of Delphi or Visual Basic, and an integrated debugger based on the Apache web server. It also included a VCL library ported to PHP. Unlike other IDEs, it supported Web 2.0 features such as AJAX.

Embarcadero Delphi – RadPHP (formerly Delphi for PHP)

Delphi for PHP was announced on March 20, 2007, renamed in October 2010 to RadPHP, and is based on Qadram Q studio. Embarcadero acquired Qadram in January 2011.

Embarcadero Delphi – Delphi Prism

Delphi Prism (now Embarcadero Prism) is a product from Embarcadero based on the Oxygene programming language (previously known as Chrome). Delphi Prism is the replacement for Delphi.NET, which was discontinued. The Prism product runs inside the Visual Studio IDE and it is part of the “RAD Studio” IDE environment.

Embarcadero Delphi – Third-party software

Free Pascal – an open-source Pascal compiler which partially supports Delphi code and works on many operating systems.

Embarcadero Delphi – Third-party software

Lazarus – a RAD IDE developed for and supported by the Free Pascal compiler, running on Windows, Linux and Mac OS X; a free, cross-platform alternative that works very much like Delphi.

Embarcadero Delphi – Third-party software

Project Jedi (Joint Endeavor of Delphi Innovators) – A collaborative open-source effort by the Delphi developer community to provide translations of Windows API interfaces, additional components and controls, and algorithms and data structures.

Embarcadero Delphi – Third-party software

FastCode – Enhanced runtime libraries and memory manager.

Embarcadero Delphi – Third-party software

DDDebug – a comprehensive collection of debugging tools for Delphi. DDDebug consists of several modules which cover process-, thread- and exception information as well as detailed analysis about memory management and usage in real time.

Embarcadero Delphi – Third-party software

OmniThreadLibrary – A simple-to-use multithreading library for Delphi.

Embarcadero Delphi – Third-party software

kbmMemTable – A fast, feature-rich in-memory table component.

Embarcadero Delphi – Third-party software

kbmMW – A complete and advanced n-tier development framework with support for 35+ databases, true transactions, queue-based publish/subscribe, XML, JSON, REST, HTTP and many more features.

Embarcadero Delphi – Third-party software

SDL Component Suite – a collection of components supporting scientific and engineering computing.

Embarcadero Delphi – Third-party software

Quadruple D – a DirectX library for Delphi.

Forensic psychology – Academic researcher

Academic forensic psychologists engage in teaching, research, training and supervision of students, and other education-related activities.

Islamophobia – Academic and political debate

Paul Jackson, in a critical study of the anti-Islamic English Defence League, argues that the term Islamophobia creates a stereotype where “any criticism of Muslim societies [can be] dismissed …” The term feeds “a language of polarised polemics … to close down discussion on genuine areas of criticism …” Consequently, the term is “losing much [of its] analytical value”.

Islamophobia – Academic and political debate

Professor Eli Göndör wrote that the term Islamophobia should be replaced with “muslimophobia”.

Islamophobia – Academic and political debate

Professor Mohammad H

Islamophobia – Academic and political debate

Other critics argue that the term conflates criticism of “Islamic totalitarianism” with hatred of Muslims.

Islamophobia – Academic and political debate

In the wake of the Jyllands-Posten Muhammad cartoons controversy, a group of 12 writers, including novelist Salman Rushdie, signed a manifesto entitled Together Facing the New Totalitarianism in the French satirical weekly Charlie Hebdo, warning against the use of the term Islamophobia to prevent criticism of “Islamic totalitarianism”.

Islamophobia – Academic and political debate

Alan Posener and Alan Johnson have written that, while the idea of Islamophobia is sometimes misused, those who claim that hatred of Muslims is justified as opposition to Islamism actually undermine the struggle against Islamism.

University – Academic freedom

Today this is claimed as the origin of “academic freedom”.

F-Secure – Academia

In co-operation with Aalto University School of Science and Technology, F-Secure runs a one semester course for future virus analysts, with some material available on-line.

Psychological manipulation – Academic journals

Aglietta M, Reberioux A, Babiak P. “Psychopathic manipulation in organizations: pawns, patrons and patsies”, in Cooke A, Forth A, Newman J, Hare R (Eds), International Perspectives on Psychopathy, British Psychological Society, Leicester, pp. 12–17 (1996)

Psychological manipulation – Academic journals

Aglietta, M.; Reberioux, A.; Babiak, P. “Psychopathic manipulation at work”, in Gacono, C.B. (Ed), The Clinical and Forensic Assessment of Psychopathy: A Practitioner’s Guide, Erlbaum, Mahwah, NJ, pp. 287–311. (2000)

Psychological manipulation – Academic journals

Bursten, Ben. “The Manipulative Personality”, Archives of General Psychiatry, Vol 26 No 4, 318-321 (1972)

Psychological manipulation – Academic journals

Buss DM, Gomes M, Higgins DS, Lauterbach K. “Tactics of Manipulation”, Journal of Personality and Social Psychology, Vol 52 No 6, 1219–1279 (1987)

ISO/IEC 20000 – Academic resources

International Journal of IT Standards and Standardization Research, ISSN: 1539-3054 (internet), 1539-3062 (print), Information Resources Management Association

ISO/IEC 20000 – Academic resources

ISO/IEC 20000-1:2011, released 2011-04-12. Related titles: ISO/IEC 20000 – An Introduction (ISBN 978-90-8753-081-5); Implementing ISO/IEC 20000 Certification – The Roadmap (ISBN 978-90-8753-082-2); ISO/IEC 20000: A Pocket Guide (ISBN 978-90-77212-79-0)

Evolutionary psychology – Academic societies

Human Behavior and Evolution Society; international society dedicated to using evolutionary theory to study human nature

Evolutionary psychology – Academic societies

The International Society for Human Ethology; promotes ethological perspectives on the study of humans worldwide

Evolutionary psychology – Academic societies

European Human Behaviour and Evolution Association; an interdisciplinary society that supports the activities of European researchers with an interest in evolutionary accounts of human cognition, behavior and society

Evolutionary psychology – Academic societies

The Association for Politics and the Life Sciences; an international and interdisciplinary association of scholars, scientists, and policymakers concerned with evolutionary, genetic, and ecological knowledge and its bearing on political behavior, public policy and ethics.

Evolutionary psychology – Academic societies

Society for Evolutionary Analysis in Law; a scholarly association dedicated to fostering interdisciplinary exploration of issues at the intersection of law, biology, and evolutionary theory

Evolutionary psychology – Academic societies

The New England Institute for Cognitive Science and Evolutionary Psychology; aims to foster research and education at the interdisciplinary nexus of cognitive science and evolutionary studies

Evolutionary psychology – Academic societies

The NorthEastern Evolutionary Psychology Society; regional society dedicated to encouraging scholarship and dialogue on the topic of evolutionary psychology

Evolutionary psychology – Academic societies

Feminist Evolutionary Psychology Society; researchers who investigate the active role that females have had in human evolution

Cardiff University – Academic facilities

The University’s academic facilities are centred around Cathays Park in central Cardiff, which contains the University’s main building, housing administrative facilities and the science library; the Bute building, which contains the Welsh School of Architecture and the Cardiff School of Journalism, Media and Cultural Studies; the Glamorgan building, which houses the Cardiff School of Social Sciences; the Redwood building, which houses the School of Pharmacy and Pharmaceutical Sciences; the law building, which houses the Cardiff Law School; and the biosciences building, which provides facilities for both biosciences and medical teaching.

Cardiff University – Academic facilities

A number of the University’s academic facilities are also located at the Heath Park campus, based at the University Hospital of Wales; these include the Cardiff University School of Medicine, the School of Nursing and Midwifery Studies, the School of Dentistry, the School of Healthcare Studies and the School of Optometry & Vision Sciences.

Cardiff University – Academia

Professor Yehuda Bauer, Professor of Holocaust Studies at the Avraham Harman Institute of Contemporary Jewry at the Hebrew University of Jerusalem

Cardiff University – Academia

Professor Leszek Borysiewicz, Deputy Rector of Imperial College London and Chief Executive of the Medical Research Council; Vice-Chancellor of the University of Cambridge

Cardiff University – Academia

Dr. Sheila Cameron QC, lawyer and ecclesiastical judge

Cardiff University – Academia

Rt Revd Paul Colton, Bishop of Cork, Cloyne and Ross

Cardiff University – Academia

Professor Alun Davies, bioscientist

Cardiff University – Academia

Jonathan Deibel, leading researcher into wirewound resistors

Cardiff University – Academia

Professor Dr Robert Huber, Professor of Chemistry, Nobel Laureate – Nobel Prize in Chemistry, 1988

Cardiff University – Academia

Professor Vaughan Lowe QC, Chichele Professor of Public International Law in the University of Oxford

Cardiff University – Academia

John Warwick Montgomery – American lawyer, theologian and academic known for his work in the field of Christian Apologetics; Distinguished Research Professor of Philosophy and Christian Thought at Patrick Henry College.

Cardiff University – Academia

Professor Sir Keith Peters, FRS PMedSci (Regius Professor of Physic in the University of Cambridge)

Cardiff University – Academia

The Rt Revd Dominic Walker, OGS, Bishop of Monmouth

Cardiff University – Academia

Chandra Wickramasinghe, professor of Applied Mathematics – one of the foremost authorities on organic cosmic dust

Cardiff University – Academia

Dr Mansoor Alawar – The Chancellor of Hamdan Bin Mohammed e-University, Dubai.

Cardiff University – Academia

Dr Jamal Alsumaiti – Director General, Dubai Judicial Institute

Dow Chemical Company – Lab Safety Academy

The Dow Lab Safety Academy is also available through the Safety and Chemical Engineering Education program, an affiliate of American Institute of Chemical Engineers (AIChE); and The Campbell Institute, an organization focusing on environment, health and safety practices.

Dow Chemical Company – Lab Safety Academy

Seeking to share industry best practices with academia, Dow partnered with several U.S

Cyborg anthropology – Anthropology in industry vs. academia

One of the central questions of cyborg anthropology is the relationship between scholarship and technological implementation.

Cyborg anthropology – Anthropology in industry vs. academia

The same dynamic exists in cyborg anthropology.

DNA nanotechnology – Strand displacement cascades

Strand displacement cascades allow for isothermal operation of the assembly or computational process, as opposed to traditional nucleic acid assembly’s requirement for a thermal annealing step, where the temperature is raised and then slowly lowered to ensure proper formation of the desired structure.

DNA nanotechnology – Strand displacement cascades

Strand displacement complexes can be used to make molecular logic gates capable of complex computation
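As a rough software analogy (strand names here are invented for illustration; real strand-displacement kinetics, toeholds and sequence design are ignored), a cascade of gates that each release an output strand once all of their required input strands are present behaves like a network of AND gates:

```python
def cascade(gates, inputs):
    """Toy strand-displacement cascade: each gate is (required_inputs, output).

    A gate 'fires' (releases its output strand into solution) once every
    strand it requires is present. Firing repeats until nothing changes,
    so the output of one gate can serve as an input to the next.
    """
    present = set(inputs)
    fired = True
    while fired:
        fired = False
        for required, output in gates:
            if output not in present and required <= present:
                present.add(output)
                fired = True
    return present

# An AND gate feeding a downstream reporter gate:
gates = [({"x1", "x2"}, "y"),      # y released only if both x1 and x2 present
         ({"y"}, "signal")]        # reporter strand displaced by y
print("signal" in cascade(gates, {"x1", "x2"}))  # True
print("signal" in cascade(gates, {"x1"}))        # False
```

The fixed-point loop mirrors the chemistry: displacement reactions keep occurring until no gate can consume any strand currently in solution, all at a single temperature.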

DNA nanotechnology – Strand displacement cascades

Another use of strand displacement cascades is to make dynamically assembled structures. These use a hairpin structure for the reactants, so that when the input strand binds, the newly revealed sequence is on the same molecule rather than disassembling. This allows new opened hairpins to be added to a growing complex. This approach has been used to make simple structures such as three- and four-arm junctions and dendrimers.

Rachel Carson Prize (academic book prize)

The Rachel Carson Prize is awarded annually by the Society for Social Studies of Science, an international academic association based in the United States. It is given for a book “of social or political relevance” in the field of science and technology studies. This prize was created in 1996.

Rachel Carson Prize (academic book prize) – Honorees

2011. Lynn M. Morgan, Icons of Life: A Cultural History of Human Embryos

Rachel Carson Prize (academic book prize) – Honorees

2010. Susan Greenhalgh, Just One Child

Rachel Carson Prize (academic book prize) – Honorees

2008. Joseph Masco, The Nuclear Borderlands: The Manhattan Project in Post-Cold War New Mexico

Rachel Carson Prize (academic book prize) – Honorees

2007. Charis Thompson, Making Parents: The Ontological Choreography of Reproductive Technologies

Rachel Carson Prize (academic book prize) – Honorees

2006. Joseph Dumit, Picturing Personhood: Brain Scans and Biomedical Identity

Rachel Carson Prize (academic book prize) – Honorees

2005. Nelly Oudshoorn, The Male Pill

Rachel Carson Prize (academic book prize) – Honorees

2004. Jean Langford, Fluent Bodies

Rachel Carson Prize (academic book prize) – Honorees

2003. Simon Cole, Suspect Identities: A History of Fingerprinting and Criminal Identification

Rachel Carson Prize (academic book prize) – Honorees

2002. Stephen Hilgartner, Science On Stage: Expert Advice as Public Drama

Rachel Carson Prize (academic book prize) – Honorees

2001. Andrew Hoffman. From Heresy to Dogma: An Institutional History of Corporate Environmentalism

Rachel Carson Prize (academic book prize) – Honorees

2000. Wendy Espeland. The Struggle for Water: Politics, Rationality, and Identity in the American Southwest

Rachel Carson Prize (academic book prize) – Honorees

1999. Steven Epstein, Impure Science: AIDS, Activism, and the Politics of Knowledge.

Rachel Carson Prize (academic book prize) – Honorees

1998. Diane Vaughan, The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA.

Educational software – Selected reports and academic articles

Virvou, M., Katsionis, G., & Manos, K. (2005). “Combining Software Games with Education: Evaluation of its Educational Effectiveness.” Educational Technology & Society, 8 (2), 54-65.

Educational software – Selected reports and academic articles

“An Environmental Scan of Children’s Interactive Media from 2000 to 2002” (An executive summary prepared for by Just Kid Inc., June 2002)

Educational software – Selected reports and academic articles

Seels, B. (1989). The instructional design movement in educational technology. Educational Technology, May, 11-15. www.coe.uh.edu/courses/cuin6373/idhistory/1960.html

Educational software – Selected reports and academic articles

Niemiec, R.P. & Walberg, H.T. (1989). From teaching machines to microcomputers: Some milestones in the history of computer-based instruction. Journal of Research on Computing in Education, 21(3), 263-276.

Educational software – Selected reports and academic articles

Annetta, L., Minogue, J., Holmes, S., & Cheng, M

Informatics (academic field)

Importantly however, informatics as an academic field is not explicitly dependent upon technological aspects of information, while computer science and information technology are.

Informatics (academic field) – Etymology

In 1957 the German computer scientist Karl Steinbuch coined the word Informatik by publishing a paper called Informatik: Automatische Informationsverarbeitung (“Informatics: Automatic Information Processing”). The English term Informatics is sometimes understood as meaning the same as computer science. The German word Informatik is usually translated to English as computer science.

Informatics (academic field) – Etymology

The French term informatique was coined in 1962 by Philippe Dreyfus together with various translations—informatics (English), also proposed independently and simultaneously by Walter F. Bauer and associates who co-founded Informatics Inc., and informatica (Italian, Spanish, Romanian, Portuguese, Dutch), referring to the application of computers to store and process information.

Informatics (academic field) – Etymology

The term was coined as a combination of “information” and “automatic” to describe the science of automating information interactions. The morphology—informat-ion + -ics—uses “the accepted form for names of sciences, as conics, linguistics, optics, or matters of practice, as economics, politics, tactics”, and so, linguistically, the meaning extends easily to encompass both the science of information and the practice of information processing.

Informatics (academic field) – History

This new term was adopted across Western Europe, and, except in English, developed a meaning roughly translated by the English ‘computer science’, or ‘computing science’

Informatics (academic field) – History

Informatics is the discipline of science which investigates the structure and properties (not specific content) of scientific information, as well as the regularities of scientific information activity, its theory, history, methodology and organization.

Informatics (academic field) – History

Usage has since modified this definition in three ways

Informatics (academic field) – History

In the English-speaking world the term informatics was first widely used in the compound, ‘medical informatics’, taken to include “the cognitive, information processing, and communication tasks of medical practice, education, and research, including information science and the technology to support these tasks”. Many such compounds are now in use; they can be viewed as different areas of applied informatics.

Informatics (academic field) – History

Informatics encompasses the study of systems that represent, process, and communicate information

Informatics (academic field) – History

In 1989, the first International Olympiad in Informatics (IOI) was held in Bulgaria. The olympiad involves two five-hour days of intense competition. Four students are selected from each participating country to attend and compete for Gold, Silver, and Bronze medals. The 2008 IOI was held in Cairo, Egypt.

Informatics (academic field) – History

The first degree-level qualification in Informatics appeared in 1982, when Plymouth Polytechnic (now the University of Plymouth) offered a four-year BSc (Honours) degree in Computing and Informatics, with an initial intake of only 35 students. The course still runs today, making it the longest-running qualification in the subject.

Informatics (academic field) – History

A broad interpretation of informatics, as “the study of the structure, algorithms, behaviour, and interactions of natural and artificial computational systems,” was introduced by the University of Edinburgh in 1994 when it formed the grouping that is now its School of Informatics. This meaning is now (2006) increasingly used in the United Kingdom.

Informatics (academic field) – History

The 2008 Research Assessment Exercise of the UK Funding Councils includes a new unit of assessment (UoA), Computer Science and Informatics, whose scope is described as follows:

Informatics (academic field) – History

The UoA includes the study of methods for acquiring, storing, processing, communicating and reasoning about information, and the role of interactivity in natural and artificial systems, through the implementation, organisation and use of computer hardware, software and other resources. The subjects are characterised by the rigorous application of analysis, experimentation and design.

Informatics (academic field) – History

At the Indiana University School of Informatics (Bloomington, Indianapolis and Southeast), informatics is defined as “the art, science and human dimensions of information technology” and “the study, application, and social consequences of technology.” It is also defined in Informatics 101, Introduction to Informatics as “the application of information technology to the arts, sciences, and professions.” These definitions are widely accepted in the United States, and differ from British usage in omitting the study of natural computation.

Informatics (academic field) – History

At the University of California, Irvine Department of Informatics, informatics is defined as “the interdisciplinary study of the design, application, use and impact of information technology

Informatics (academic field) – History

At the University of Michigan, Ann Arbor Informatics interdisciplinary major, informatics is defined as “the study of information and the ways information is used by and affects human beings and social systems

Informatics (academic field) – History

Internet Informatics: An applied track in which students experiment with technologies behind Internet-based information systems and acquire skills to map problems to deployable Internet-based solutions. This track will replace Computational Informatics in Fall 2013.

Informatics (academic field) – History

Data Mining & Information Analysis: Integrates the collection, analysis, and visualization of complex data and its critical role in research, business, and government to provide students with practical skills and a theoretical basis for approaching challenging data analysis problems.

Informatics (academic field) – History

Social Computing: Advances in computing have created opportunities for studying patterns of social interaction and developing systems that act as introducers, recommenders, coordinators, and record-keepers

Informatics (academic field) – History

One of the most significant areas of applied informatics is organisational informatics. Organisational informatics is fundamentally concerned with the application of information, information systems and ICT within organisations of various forms, including private sector, public sector and voluntary sector organisations. As such, organisational informatics can be seen as a sub-category of social informatics and a super-category of business informatics.

Informatics (academic field) – Contributing disciplines

Didactics of informatics (Didactics of computer science)

Informatics (academic field) – Notes

Karl Steinbuch Eulogy – Bernard Widrow, Reiner Hartenstein, Robert Hecht-Nielsen

Informatics (academic field) – Notes

Dreyfus, Philippe. L’informatique. Gestion, Paris, June 1962, pp. 240–41

Informatics (academic field) – Notes

Mikhailov, A.I., Chernyi, A.I., and Gilyarevskii, R.S. (1966) “Informatika – novoe nazvanie teorii naučnoj informacii.” Naučno-tehničeskaja informacija, 12, pp. 35–39.

Informatics (academic field) – Notes

Greenes, R.A. and Shortliffe, E.H. (1990) “Medical Informatics: An emerging discipline with academic and institutional perspectives.” Journal of the American Medical Association, 263(8) pp. 1114–20.

Informatics (academic field) – Notes

BSc (Hons) Computing Informatics – University of Plymouth

Informatics (academic field) – Notes

For example, at University of Reading, Sussex, City University, Ulster, Bradford, Manchester and Newcastle

Informatics (academic field) – Notes

UoA 23 Computer Science and Informatics, Panel working methods

Informatics (academic field) – Notes

“Curriculum – Informatics – University of Michigan”. University of Michigan. Retrieved 6 February 2013.

Informatics (academic field) – Notes

“Concentration: Informatics”. University of Michigan. Retrieved 8 February 2013.

Informatics (academic field) – Notes

“UMSI plans new undergraduate degree”. University of Michigan School of Information. Retrieved 11 February 2013.

Informatics (academic field) – Notes

Beynon-Davies P. (2002). Information Systems: an introduction to informatics in Organisations. Palgrave, Basingstoke, UK. ISBN 0-333-96390-3

Informatics (academic field) – Notes

Beynon-Davies P. (2009). Business Information Systems. Palgrave, Basingstoke, UK. ISBN 978-0-230-20368-6

Pink – Academic dress

In the French academic dress system, the five traditional fields of study (Arts, Science, Medicine, Law and Divinity) are each symbolized by a distinctive color, which appears in the academic dress of the people who graduated in this field. Redcurrant, an extremely red shade of pink, is the distinctive color for Medicine (and other health-related fields).

Publishing – Academic publishing

Academic publishers are typically either book or periodical publishers that have specialized in academic subjects. Some, like university presses, are owned by scholarly institutions. Others are commercial businesses that focus on academic subjects.

Publishing – Academic publishing

The development of the printing press represented a revolution for communicating the latest hypotheses and research results to the academic community and supplemented what a scholar could do personally. But this improvement in the efficiency of communication created a challenge for libraries, which have had to accommodate the weight and volume of literature.

Publishing – Academic publishing

One of the key functions that academic publishers provide is to manage the process of peer review. Their role is to facilitate the impartial assessment of research and this vital role is not one that has yet been usurped, even with the advent of social networking and online document sharing.

Publishing – Academic publishing

An alternative approach to the corporate model is open access, the online distribution of individual articles and academic journals without charge to readers and libraries

Occupational health psychology – Development after 1990: academic societies and specialized journals

In 1999, the European Academy of Occupational Health Psychology (EA-OHP) was established

University of Texas at Austin – Academics

The University of Texas at Austin offers more than 100 undergraduate and 170 graduate degrees. In the 2009–2010 academic year, the university awarded a total of 13,215 degrees: 67.7% bachelor’s degrees, 22.0% master’s degrees, 6.4% doctoral degrees, and 3.9% Professional degrees.

University of Texas at Austin – Academics

The university also offers innovative programs for promoting academic excellence and leadership development such as the Freshman Research Initiative and Texas Interdisciplinary Plan.

History of science – Academic study

As an academic field, history of science began with the publication of William Whewell’s History of the Inductive Sciences (first published in 1837)

History of science – Academic study

The history of mathematics, history of technology, and history of philosophy are distinct areas of research and are covered in other articles. Mathematics is closely related to but distinct from natural science (at least in the modern conception). Technology is likewise closely related to but clearly differs from the search for empirical truth.

History of science – Academic study

History of science is an academic discipline, with an international community of specialists. Main professional organizations for this field include the History of Science Society, the British Society for the History of Science, and the European Society for the History of Science.

Cambridge University Press – Academic and Professional

This group publishes monographs, academic journals, textbooks and reference books in science, technology, medicine, humanities, and social sciences. The group also publishes bibles, and the Press is one of only two publishers entitled to publish the Book of Common Prayer and the King James Version of the Bible in England.

Enterprise architecture – Academic qualifications

Enterprise Architecture was included in the Association for Computing Machinery (ACM) and Association for Information Systems (AIS)’s Curriculum for Information Systems as one of the 6 core courses.

Enterprise architecture – Academic qualifications

A new MSc in Enterprise Architecture was introduced at the University of East London in collaboration with Iasa, starting in February 2013.

Enterprise architecture – Academic qualifications

There are several universities that offer enterprise architecture as a fourth year level course or part of a master’s syllabus. California State University offers a post-baccalaureate certificate in enterprise architecture, in conjunction with FEAC Institute.

Enterprise architecture – Academic qualifications

National University offers a Master of Science in Engineering Management with a specialization in Enterprise Architecture, again in conjunction with the FEAC Institute. The Center for Enterprise Architecture at Penn State University is another institution that offers EA courses. EA is also offered within the Masters program in Computer Science at The University of Chicago.

Enterprise architecture – Academic qualifications

In 2010 researchers at the Meraka Institute, Council for Scientific and Industrial Research, in South Africa organized a workshop and invited staff from computing departments in South African higher education institutions. The purpose was to investigate the current status of EA offerings in South Africa. A report was compiled and is available for download at the Meraka Institute.

Malware – Academic research

The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata

David Patterson (scientist) – Research and academic Contributions

He is an important proponent of the concept of the Reduced Instruction Set Computer and coined the term “RISC”. He led the Berkeley RISC project from 1980 onwards along with Carlo H. Sequin, where the technique of register windows was introduced. He is also one of the innovators of Redundant Arrays of Independent Disks (RAID) (in collaboration with Randy Katz and Garth Gibson), and Network of Workstations (NOW) (in collaboration with Eric Brewer and David Culler).

John Everett Millais – Academic career

After the death of Lord Leighton in 1896, Millais was elected President of the Royal Academy, but he died later in the same year from throat cancer

University of California, Irvine – Academics

The remaining academic units offer accelerated or community education in the form of Summer Session and UC Irvine Extension

University of California, Irvine – Academics

Henry Samueli School of Engineering

University of California, Irvine – Academics

Donald Bren School of Information and Computer Sciences

University of California, Irvine – Academics

Paul Merage School of Business

Gerald Jay Sussman – Academic work

Sussman is a coauthor (with Hal Abelson and Julie Sussman) of the introductory computer science textbook Structure and Interpretation of Computer Programs. It was used at MIT for several decades, and has been translated into several languages.

Gerald Jay Sussman – Academic work

Sussman’s contributions to Artificial Intelligence include problem solving by debugging almost-right plans, propagation of constraints applied to electrical circuit analysis and synthesis, dependency-based explanation and dependency-based backtracking, and various language structures for expressing problem-solving strategies. Sussman and his former student, Guy L. Steele Jr., invented the Scheme programming language in 1975.

Gerald Jay Sussman – Academic work

Sussman saw that Artificial Intelligence ideas can be applied to computer-aided design

Gerald Jay Sussman – Academic work

Using the Digital Orrery, Sussman has worked with Jack Wisdom to discover numerical evidence for chaotic motions in the outer planets

Gerald Jay Sussman – Academic work

Over the past decade Sussman and Wisdom have developed a subject that uses computational techniques to communicate a deeper understanding of advanced classical mechanics

Gerald Jay Sussman – Academic work

Sussman and Abelson have also been part of the Free Software Movement, including releasing MIT/GNU Scheme as free software and serving on the Board of Directors of the Free Software Foundation.

Duke University – Academics

Duke’s student body consists of 6,484 undergraduates and 8,107 graduate and professional students (as of fall 2012). The university has “historic and symbolic ties to the Methodist Church but it always has been independent in its governance.”

Duke University – Academics

Admission to Duke is highly selective; Duke received 31,785 applications in 2013, and admitted 11.6% of applicants

Duke University – Academics

Duke University has two schools for undergraduates: Trinity College of Arts and Sciences and Pratt School of Engineering.

Duke University – Academics

Duke Memorial Scholarship, awarded for academic excellence

Duke University – Academics

Duke’s endowment had a market value of $6.0 billion in the fiscal year that ended June 30, 2013. The University’s special academic facilities include an art museum, several language labs, the Duke Forest, the Duke Herbarium, a lemur center, a phytotron, a free electron laser, a nuclear magnetic resonance machine, a nuclear lab, and a marine lab. Duke is a leading participant in the National Lambda Rail Network and runs a program for gifted children known as the Talent Identification Program.

Facade pattern

The facade pattern (or façade pattern) is a software design pattern commonly used with object-oriented programming. The name is by analogy to an architectural facade.

Facade pattern

A facade is an object that provides a simplified interface to a larger body of code, such as a class library. A facade can:

make a software library easier to use, understand and test, since the facade has convenient methods for common tasks;

reduce dependencies of outside code on the inner workings of a library, since most code uses the facade, thus allowing more flexibility in developing the system;

Facade pattern – Usage

A Facade is used when one wants an easier or simpler interface to an underlying implementation object. Alternatively, an adapter is used when the wrapper must respect a particular interface and must support polymorphic behavior. A decorator makes it possible to add or alter behavior of an interface at run-time.
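This distinction can be sketched in Java. The Printer, LegacyPrinter and ShoutingPrinter names below are invented purely for illustration, not taken from any library:

```java
// Minimal sketch: an adapter makes an incompatible class satisfy an
// expected interface; a decorator wraps a compatible object to alter
// its behavior at run time while keeping the same interface.
public class WrapperDemo {
    interface Printer { String print(String s); }

    // An incompatible class with a different method signature.
    static class LegacyPrinter {
        String emit(char[] chars) { return new String(chars); }
    }

    // Adapter: translates the Printer interface onto LegacyPrinter.
    static class LegacyAdapter implements Printer {
        private final LegacyPrinter legacy = new LegacyPrinter();
        public String print(String s) { return legacy.emit(s.toCharArray()); }
    }

    // Decorator: adds behavior (upper-casing) without changing the interface.
    static class ShoutingPrinter implements Printer {
        private final Printer inner;
        ShoutingPrinter(Printer inner) { this.inner = inner; }
        public String print(String s) { return inner.print(s).toUpperCase(); }
    }

    public static void main(String[] args) {
        Printer adapted = new LegacyAdapter();
        Printer decorated = new ShoutingPrinter(adapted);
        System.out.println(decorated.print("hello"));
    }
}
```

A facade differs from both: rather than conforming to or extending a single interface, it offers a new, simpler interface over several subsystem classes at once.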

Facade pattern – Structure

Client objects use the facade to access resources from the underlying subsystem packages.

Facade pattern – Example

This is an abstract example of how a client (“you”) interacts with a facade (the “computer”) to a complex system (internal computer parts, like CPU and HardDrive).

public void load(long position, byte[] data) { … }  // subsystem method: load data into memory

public byte[] read(long lba, int size) { … }        // subsystem method: read blocks from the hard drive

private CPU processor;                              // the facade holds references to the subsystems

private ComputerFacade() {                          // the facade’s constructor
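Only fragments of the original listing survive above. A self-contained sketch of the full example, with hypothetical CPU, Memory and HardDrive subsystem classes along the lines the fragments suggest, might look like this:

```java
// Facade pattern sketch: ComputerFacade hides the boot sequence of
// three subsystem classes behind a single start() call.
public class FacadeDemo {
    // Subsystem: raw processor operations.
    static class CPU {
        void freeze() { /* halt the processor */ }
        void jump(long position) { /* set the instruction pointer */ }
        void execute() { /* resume execution */ }
    }

    // Subsystem: main memory.
    static class Memory {
        void load(long position, byte[] data) { /* copy data into memory */ }
    }

    // Subsystem: persistent storage.
    static class HardDrive {
        byte[] read(long lba, int size) { return new byte[size]; }
    }

    // The facade: clients call start() instead of orchestrating
    // the subsystems themselves.
    static class ComputerFacade {
        private final CPU processor = new CPU();
        private final Memory ram = new Memory();
        private final HardDrive hd = new HardDrive();

        void start() {
            processor.freeze();
            ram.load(0L, hd.read(0L, 1024));
            processor.jump(0L);
            processor.execute();
        }
    }

    public static void main(String[] args) {
        new ComputerFacade().start();  // the client touches only the facade
        System.out.println("booted");
    }
}
```

The client never sees CPU, Memory or HardDrive; if the boot sequence changes, only the facade needs updating.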

Narcissistic leadership – Academic papers

Brown B Narcissistic Leaders: Effectiveness and the Role of Followers – Otago Management Graduate Review Volume 3 2005 Pages 69–87

Narcissistic leadership – Academic papers

Horowitz MJ & Arthur RJ Narcissistic Rage in Leaders: the Intersection of Individual Dynamics and Group Process – International Journal of Social Psychiatry 1988 Summer;34(2) Pages 135-41

Narcissistic leadership – Academic papers

Horwitz L Narcissistic leadership in psychotherapy groups – International Journal of Group Psychotherapy 2000 Apr;50(2) Pages 219-35.

Narcissistic leadership – Academic papers

Jones R, Lasky B, Russell-Gale H & le Fevre M Leadership and the development of dominant and countercultures: A narcissistic perspective – Leadership & Organization Development Journal, Vol. 25 Issue 2, Pages 216-233 (2004)

Narcissistic leadership – Academic papers

Kearney KS Grappling with the gods: Reflections for coaches of the narcissistic leader – International Journal of Evidence Based Coaching and Mentoring Vol 8 No 1 February 2010 Pages 1–13

Narcissistic leadership – Academic papers

Kets de Vries MFR & Miller D Narcissism and leadership: An object relations perspective – Human Relations (1985) 38(6) Pages 583-601.

Narcissistic leadership – Academic papers

Ouimet G Dynamics of narcissistic leadership in organizations: Towards an integrated research model – Journal of Managerial Psychology, Vol. 25 Issue 7, Pages 713-726 (2010)

Narcissistic leadership – Academic papers

Rosenthal SA & Pittinsky TL Narcissistic leadership – The Leadership Quarterly Volume 17, Issue 6, December 2006, Pages 617-633

Narcissistic leadership – Academic papers

Volkan VD & Fowler C Large-group Narcissism and Political Leaders with Narcissistic Personality Organization – Psychiatric Annals 39:4 April 2009

Social science – Academic resources

The ANNALS of the American Academy of Political and Social Science, ISSN: 1552-3349 (electronic) ISSN: 0002-7162 (paper), SAGE Publications

Social science – Academic resources

Efferson, C. & Richerson, P.J. (In press). A prolegomenon to nonlinear empiricism in the human behavioral sciences. Philosophy and Biology.

Age of Enlightenment – Learned academies

Academies demonstrate the rising interest in science along with its increasing secularization, as evidenced by the small number of clerics who were members (13 percent).

Age of Enlightenment – Learned academies

The presence of the French academies in the public sphere cannot be attributed to their membership; although the majority of their members were bourgeois, the exclusive institution was only open to elite Parisian scholars. Nevertheless, they perceived themselves to be “interpreters of the sciences for the people”. Indeed, it was with this in mind that academicians took it upon themselves to disprove the popular pseudo-science of mesmerism.

Age of Enlightenment – Learned academies

However, the strongest case for the French Academies’ being part of the public sphere comes from the concours académiques (roughly translated as ‘academic contests’) they sponsored throughout France. As Jeremy L. Caradonna argues in a recent article in the Annales, “Prendre part au siècle des Lumières: Le concours académique et la culture intellectuelle au XVIIIe siècle”, these academic contests were perhaps the most public of any institution during the Enlightenment.

Age of Enlightenment – Learned academies

L’Académie française revived a practice dating back to the Middle Ages when it reintroduced public contests in the mid-17th century

Age of Enlightenment – Learned academies

More importantly, the contests were open to all, and the enforced anonymity of each submission guaranteed that neither gender nor social rank would determine the judging. Indeed, although the “vast majority” of participants belonged to the wealthier strata of society (“the liberal arts, the clergy, the judiciary, and the medical profession”), there were some cases of the popular classes submitting essays, and even winning.

Age of Enlightenment – Learned academies

Similarly, a significant number of women participated in – and won – the competitions. Of a total of 2,300 prize competitions offered in France, women won 49 – perhaps a small number by modern standards, but very significant in an age in which most women did not have any academic training. Indeed, the majority of the winning entries were for poetry competitions, a genre commonly stressed in women’s education.

Age of Enlightenment – Learned academies

In England, the Royal Society of London also played a significant role in the public sphere and the spread of Enlightenment ideas

Age of Enlightenment – Learned academies

However, not just any witness was considered to be credible; “Oxford professors were accounted more reliable witnesses than Oxfordshire peasants.” Two factors were taken into account: a witness’s knowledge in the area, and a witness’s “moral constitution”. In other words, only members of civil society were considered for Boyle’s public.

Art blog – Academia

In December 2008, the art blog The Dump, where the new-media artist Maurice Benayoun dumped hundreds of undone art projects, became the first blog to serve as a doctoral thesis in art and art science in and of itself: Artistic Intentions at Work, Hypothesis for Committing Art, Université Panthéon-Sorbonne (December 6, 2008). The PhD was directed by Prof. Anne-Marie Duguet. Jury: Prof. Hubertus von Amelunxen, Louis Bec (artist), Prof. Derrick de Kerckhove, and Prof. Jean da Silva.

Art blog – Academia

In May 2010, The Dump – Recycling of Thoughts, a contemporary art exhibition curated by Agnieszka Kulazińska at the Laznia Art Center (Gdansk, Poland), presented 9 artists whose works were derived from The Dump blog project list.

Certified Fraud Examiner – Academic requirements

Generally, applicants for CFE certification have a minimum of a bachelor’s degree or equivalent from an institution of higher education. Two years of professional experience related to fraud can be substituted for each year of college.

Knowledge-based engineering – KBE in Academia

Knowledge-based engineering at the Norwegian University of Science and Technology (NTNU)

Knowledge-based engineering – KBE in Academia

Knowledge Based Engineering department at the Faculty of Aerospace Engineering of the Delft University of Technology

Knowledge-based engineering – KBE in Academia

See Webliography for AI in Design hosted by Worcester Polytechnic Institute and the NSF Report “Research Opportunities in Engineering Design.”

Stanford University – Academics

Stanford University is a large, highly residential research university with a majority of enrollments coming from graduate and professional students. The full-time, four-year undergraduate program is classified as “more selective, lower transfer-in” and has an arts and sciences focus with high graduate student coexistence. Stanford University is accredited by the Western Association of Schools and Colleges. Full-time undergraduate tuition was $38,700 for 2010–2011.

Stanford University – Academics

The schools of Humanities and Sciences (27 departments), Engineering (9 departments), and Earth Sciences (4 departments) have both graduate and undergraduate programs while the schools of Law, Medicine, and Education and the Graduate School of Business have graduate programs only. Stanford follows a quarter system with Autumn quarter usually starting in late September and Spring Quarter ending in early June.

John C. Reynolds – Academic work

Reynolds’s main research interest was in the area of programming language design and associated specification languages, especially concerning formal semantics

John C. Reynolds – Academic work

He had been an editor of journals such as the Communications of the ACM and the Journal of the ACM. In 2001, he was appointed a Fellow of the ACM. He won the ACM SIGPLAN Programming Language Achievement Award in 2003, and the Lovelace Medal from the British Computer Society in 2010.

Ian Sommerville (academic)

Ian F. Sommerville (born 1951) is a British academic. He is Professor of Software Engineering at the University of St Andrews in Scotland and the author of a popular student textbook on software engineering, as well as a number of other books and papers. He is a prominent researcher in the field of systems engineering, system dependability and social informatics, being an early advocate of an interdisciplinary approach to system dependability.

Ian Sommerville (academic) – Education and personal life

Ian Sommerville was born in Glasgow, Scotland in 1951. He studied Physics at Strathclyde University and Computer Science at the University of St Andrews. He is married and has two daughters. As an amateur gourmet, he has written a number of restaurant reviews.

Ian Sommerville (academic) – Academic career

Ian Sommerville was a lecturer in Computer Science at Heriot-Watt University in Edinburgh, Scotland from 1975 to 1978 and at Strathclyde University, Glasgow from 1978 to 1986. From 1986 to 2006, he was Professor of Software Engineering in the Computing Department at the University of Lancaster, and in April 2006 he joined the School of Computer Science at St Andrews University, where he teaches courses in advanced software engineering and critical systems engineering.

Ian Sommerville (academic) – Academic career

He has worked on a number of European projects involving collaboration between academia and commercial enterprises, such as the ESPRIT project REAIMS (Requirements Engineering adaptation and improvement for safety and dependability).

Ian Sommerville (academic) – Public activities

In 2006, Ian Sommerville was one of 23 academics in the computer field who wrote open letters calling for an independent audit of the British National Health Service’s proposed Programme for IT (NPfIT) and expressing concern about the GBP 12.4 billion programme.

Ian Sommerville (academic) – Publications

The most widely read of Sommerville’s publications is probably his student textbook Software Engineering, currently in its 9th edition. Along with other textbooks, Sommerville has also authored or co-authored numerous peer-reviewed articles and papers.

Modeling and simulation – M&S Science contributes to the Theory of M&S, defining the academic foundations of the discipline.

M&S Engineering is rooted in Theory but looks for applicable solution patterns. The focus is general methods that can be applied in various problem domains.

Modeling and simulation – M&S Science contributes to the Theory of M&S, defining the academic foundations of the discipline.

M&S Applications solve real world problems by focusing on solutions using M&S. Often, the solution results from applying a method, but many solutions are very problem domain specific and are derived from problem domain expertise and not from any general M&S theory or method.

Modeling and simulation – Academic Modeling and Simulation Programs

Modeling and Simulation has only recently become an academic discipline of its own. Formerly, those working in the field usually had a background in engineering.

Modeling and simulation – Academic Modeling and Simulation Programs

The following institutions offer degrees in Modeling and Simulation:

Modeling and simulation – Academic Modeling and Simulation Programs

Old Dominion University (Norfolk, VA)

Modeling and simulation – Academic Modeling and Simulation Programs

University of Alabama in Huntsville (Huntsville, AL)

Modeling and simulation – Academic Modeling and Simulation Programs

University of Central Florida (Orlando, FL)

Modeling and simulation – Academic Modeling and Simulation Programs

Embry-Riddle Aeronautical University (Daytona Beach, FL)

Modeling and simulation – Academic Modeling and Simulation Programs

University of New South Wales (Australia)

Modeling and simulation – Academic Modeling and Simulation Programs

Center for Modeling and Simulation, M.Tech in Modelling & Simulation (University of Pune, India)

Modeling and simulation – Academic Modeling and Simulation Programs

Columbus State University (Columbus, GA)

Medical device – Academic resources

Medical & Biological Engineering & Computing

Medical device – Academic resources

Journal of Clinical Engineering

Medical device – Academic resources

A number of specialist University-based research institutes have been established such as the Medical Devices Center (MDC) at the University of Minnesota in the US, the Strathclyde Institute Of Medical Devices (SIMD) at the University of Strathclyde in Scotland and the Medical Device Research Institute (MDRI) at Flinders University in Australia.

National Academies Press

The NAP’s stated mission is seemingly self-contradictory: to disseminate as widely as possible the works of the National Academies, and to be financially self-sustaining through sales

National Academies Press

The National Academy Press (as it was known in 1993) was the first self-sustaining publisher to make its material available on the Web, for free, in an open access model. By 1997, 1000 reports were available as sequential page images (starting with i, then ii, then iii, then iv…), with a minimal navigational envelope. Their experience up to 1998 was already indicating that open access led to increased sales, at least with page images as the final viewable object.

National Academies Press

From 1998 on, the NAP developed the “Openbook” online navigational envelope, producing stable page URLs, and enabling chapter-, page-, and in-book search navigation to images of the book pages (which were increasingly replaced by HTML chunks), to enable the user to browse the book. Notably, this page-by-page navigation was produced long before Amazon’s Look Inside, or Google’s Book Search.

National Academies Press

From 1998 to the present, the NAP gradually evolved the Openbook, first to enable better external findability (making the HTML page for the first page image of every chapter include the first 10 and last 10 pages of OCRed ASCII text of the chapter, producing a robustly indexable first chapter page), and then to explore the boundaries of knowledge discovery and exploration, implementing “Related Titles” in 2001, “Find More Like This Chapter” in 2002, “Chapter Skim” in 2003, “Search Builder” and “Reference Finder” in 2004, and “Active Skim” and an enhanced “Search Builder” in 2005.

National Academies Press – Online pricing experiment

In 2003, the NAP published the results of an innovative online experiment to determine the "cannibalization effect" that might result if the NAP gave all reports away online in PDF format.

National Academies Press – Online pricing experiment

Developed under a Mellon-funded grant and conducted with the University of Maryland Business School, the experiment interrupted buyers just before they finalized an online order, offering them the work in PDF at a randomly generated discount of 50%, 10%, 100%, or 70% off the list price; if the answer was "no," the NAP would offer one further discount step.

National Academies Press – Online pricing experiment

The experiment found that 42% of customers, when interrupted while buying a print book online, would take the free PDF of the book

National Academies Press – Online pricing experiment

Interestingly, through mid-2006, as reported at the AAUP annual meeting, as a publisher the NAP remained financially self-sustaining — even while progressively expanding the utility of the online experience, and increasing its online traffic and dissemination. By mid-2009, the NAP’s site was still receiving 1.5 million unique visitors per month, while generating 35% of the NAP’s overall sales.

National Academies Press – Online pricing experiment

Multiple articles and presentations by Barbara Kline Pope, Executive Director of the NAP, and by Michael Jon Jensen, Director of Publishing Technologies for the NAP from 1998 through 2008, provide background on the evolving business strategies for “free in an environment of content abundance” that the National Academies Press continues to pursue.

Barricade

Barricade, from the French barrique (barrel), is any object or structure that creates a barrier or obstacle to control, block passage or force the flow of traffic in the desired direction. Adopted as a military term, a barricade denotes any improvised field fortification, most notably on the city streets during urban warfare.

Barricade

Barricades also include temporary traffic barricades designed with the goal of dissuading passage into a protected or hazardous area or large slabs of cement whose goal is to actively prevent forcible passage by a vehicle. Stripes on barricades and panel devices slope downward in the direction traffic must travel.

Barricade

There are also pedestrian barricades – sometimes called bike rack barricades for their resemblance to a now obsolete form of bicycle stand, or police barriers. They originated in France approximately 50 years ago and are now produced around the world. They were first produced in the U.S. 40 years ago by Friedrichs Mfg for New Orleans’s Mardi Gras parades.

Barricade

Finally, anti-vehicle barriers and blast barriers are sturdy barricades that counter vehicle and bomb attacks respectively. Recently, movable blast barriers have been designed by NTU that can be used to protect humanitarian relief workers, and villagers and their homes, in unsafe areas.

Barricade – In history

In actuality, although barricades came to widespread public awareness in that uprising (and in the equally momentous “Second Day of the Barricades” on 27 August 1648), none of several conflicting claims concerning who may have “invented” the barricade stand up to close scrutiny for the simple reason that Blaise de Monluc had already documented insurgents’ use of the technique at least as early as 1569 in religiously based conflicts in southwestern France.

Barricade – In history

Contrary to a number of historical sources, barricades were present in various incidents of the great French Revolution of 1789, but they never played a central role in those events

Barricade – In history

Barricade references appear in many colloquial expressions and are used, often metaphorically, in poems and songs celebrating radical social movements.

Barricade – Gallery

Among the materials frequently used for barricade construction are sandbags, pavement slabs and large vehicles

Barricade – Gallery

Improvised barricade built with vehicles

Barry Commoner – Career in academia

After serving as a lieutenant in the United States Navy during World War II, Commoner moved to St. Louis, Missouri, where he became a professor of plant physiology at Washington University. He taught there for 34 years and during this period, in 1966, he founded the Center for the Biology of Natural Systems to study “the science of the total environment”.

Barry Commoner – Career in academia

In the late 1950s, Commoner became well known for his opposition to nuclear weapons testing, becoming part of the team which conducted the Baby Tooth Survey, demonstrating the presence of Strontium 90 in children’s teeth as a direct result of nuclear fallout

Barry Commoner – Career in academia

In Poverty and Population, Commoner argues that rapid population growth in the developing world is the result of living standards that have not been met. It is poverty, he argues, that "initiates the rise in population" before leveling off, not the other way around. These developing countries were introduced to such standards but were never able to fully adopt them, preventing them from advancing and limiting their population growth.

Barry Commoner – Career in academia

Commoner goes on to describe colonialism as the reason why developing countries remain "forgotten".

Barry Commoner – Career in academia

“Thus colonialism involves a kind of demographic parasitism: the second population balancing phase of the demographic transition in the advanced country is fed by suppression of that same phase in the colony”

Barry Commoner – Career in academia

This can also be seen in the study of India and contraceptives, in which family planning failed to reduce the birthrate because the people felt that, "in order to advance their economic situation", independent children were a necessity to gain better opportunities. The studies show that "population control in a country like India depends on the economically motivated desire to limit fertility".

Barry Commoner – Career in academia

The solution presented in Commoner’s argument is that wealthier nations need to help the exploited or colonized countries develop and “achieve the level of welfare” that developed nations have

Barry Commoner – Career in academia

He feels that poverty is the main cause of the population crisis. If the reason behind overpopulation in poor nations is exploitation, then the only way to end it is to "redistribute [the wealth], among nations and within them".

Barry Commoner – Career in academia

In his 1971 bestselling book The Closing Circle, Commoner suggested that the American economy should be restructured to conform to the unbending laws of ecology

Barry Commoner – Career in academia

Commoner published another bestseller in 1976, The Poverty of Power

Video content analysis – Academic research

Significant academic research into the field is ongoing at the LIVS, University of Calgary, University of Waterloo, University of Kingston, Georgia Institute of Technology, Carnegie Mellon University, West Virginia University, and The British Columbia Institute of Technology.

John Quiggin – Academic and professional career

From 1978 to 1983 Quiggin was a Research Economist, and in 1986 the Chief Research Economist, with the Bureau of Agricultural Economics, the predecessor of the Australian Bureau of Agricultural and Resource Economics within the Australian Government Department of Agriculture, Fisheries and Forestry

John Quiggin – Academic and professional career

From 1989 to 1990 he was an Associate Professor in the Department of Agricultural and Resource Economics of the University of Maryland, College Park; a Fellow of the Research School of Social Sciences of the Australian National University from 1991 to 1992; a Senior Fellow there from 1993 to 1994; and a Professor at the Centre for Economic Policy Research of the Australian National University in 1995

John Quiggin – Academic and professional career

He has been based at the University of Queensland since 2003, being an Australian Research Council Professorial Fellow and Federation Fellow and a Professor in the School of Economics and the School of Political Science and International Studies. He was an Adjunct Professor at the Australian National University from 2003 to 2006 and was the Hinkley Visiting Professor at Johns Hopkins University in 2011.

Seth MacFarlane – Cavalcade of Cartoon Comedy

On September 10, 2008, MacFarlane released a series of webisodes known as Seth MacFarlane’s Cavalcade of Cartoon Comedy with its animated shorts sponsored by Burger King and released weekly.

I’m Telling! – Pick-A-Prize Arcade

At the end of the game, the set was rotated 180 degrees to reveal the Pick-A-Prize Arcade. Before the round was played, the team was shown a collection of 20 prizes available in the arcade, 10 designated for each sibling. Prior to the show, each chose the six prizes he or she thought the other would most like to have. The brother’s prizes sat on yellow platforms while the sister’s sat on pink ones.

I’m Telling! – Pick-A-Prize Arcade

After the home audience was shown what her brother had chosen for her, she marked the six prizes she wanted by hitting a plunger next to each of them

University of East Anglia – Notable academics

UEA has benefited from the services of academics at the top of their fields, including:

Lund University – The Academic Society

In 1830, Professor Carl Adolph Agardh formed Akademiska Föreningen (The Academic Society), commonly referred to as AF, with the goal of “developing and cultivating the academic life” by bringing students and faculty from all departments and student nations together in one organization

Academic Press

Country of origin United States

Academic Press

Headquarters location Waltham, Massachusetts

Academic Press

Academic Press is an academic book publisher. Originally independent, it was acquired by Harcourt, Brace & World in 1969. Reed Elsevier bought Harcourt in 2000, and Academic Press is now an imprint of Elsevier.

Academic Press

Academic Press publishes reference books, serials and online products in the subject areas of:

Academic Press

Well-known products include the Methods in Enzymology series and encyclopedias such as The International Encyclopedia of Public Health and the Encyclopedia of Neuroscience.

University of Cincinnati – Academic Internship Program

In 2010, the Division launched the Academic Internship Program, which provides access to opportunities for part-time internships to students.

University of Cincinnati – Academic profile

The University of Cincinnati aims to be the premier urban research university, and currently offers nearly 400 programs of study which include 62 Associate, 127 Baccalaureate, 125 Master’s, 78 Doctoral, and 3 First Professional (MD, JD, etc.) degrees. The university is divided into 14 colleges and schools.

Delft University of Technology – Royal Academy (1842–1864)

The Royal Academy had its first building at Oude Delft 95 in Delft

Richard Feynman – Early academic career

In 1945, he received a letter from Dean Mark Ingraham of the College of Letters and Science requesting his return to UW to teach in the coming academic year

Richard Feynman – Early academic career

After the war, Feynman declined an offer from the Institute for Advanced Study in Princeton, New Jersey, despite the presence there of such distinguished faculty members as Albert Einstein, Kurt Gödel and John von Neumann

Richard Feynman – Early academic career

Despite yet another offer from the Institute for Advanced Study, Feynman rejected the Institute on the grounds that there were no teaching duties: Feynman felt that students were a source of inspiration and teaching was a diversion during uncreative spells

Richard Feynman – Early academic career

Feynman has been called the “Great Explainer”

Richard Feynman – Early academic career

He opposed rote learning or unthinking memorization and other teaching methods that emphasized form over function. Clear thinking and clear presentation were fundamental prerequisites for his attention. It could be perilous even to approach him when unprepared, and he did not forget the fools or pretenders.

University of Amsterdam – Academics

The university is accredited by the Dutch Ministry of Education, Culture and Science, which grants accreditation to institutions that meet a national system of regulations and quality-assurance controls. The Ministry has given it WO, or research university, status. Dutch students must complete a six-year preparatory program to gain admission to national research universities. Only fifteen percent of students pass this preparatory program.

University of Amsterdam – Academics

In terms of tuition in 2011-2012, EU students are charged €1,713 per year for both Bachelor’s and Master’s programs and non-EU students are charged between €9,000-€11,000 per year for Bachelor’s programs and €10,500-€25,000 for Master’s and Doctoral programs

University of Amsterdam – Academics

The school’s academic year lasts from early September until mid-July and is divided into two 20-week semesters

University of Amsterdam – Academic Medical Center

In the southeastern Bijlmermeer neighborhood, the Faculty of Medicine is housed in the Academic Medical Center (AMC), the Faculty of Medicine’s teaching and research hospital. It was formed in 1983 when the UvA Faculty of Medicine and two hospitals, the Binnengasthuis and the Wilhelmina Gasthuis, combined. Shortly after, in 1988, the Emma Children’s Hospital also moved to the AMC. It is one of Amsterdam’s level 1 trauma centers and cooperates closely with the VU University Medical Center (VUmc).

University of Amsterdam – Academic Center for Dentistry Amsterdam

The Faculty of Dentistry is located in the Academic Center for Dentistry Amsterdam (ACTA) in the southern Zuidas district on the campus of the VU University Medical Center. It was formed when the University of Amsterdam and the Vrije Universiteit combined their Dentistry schools in 1984.

University of California, Santa Cruz – Academics

The university offers 63 undergraduate majors and 35 minors, with graduate programs in 33 fields. Popular undergraduate majors include Art, Business Management Economics, Molecular and Cell Biology, and Psychology. Interdisciplinary programs, such as Feminist Studies, American Studies, Environmental Studies, Visual Studies, Digital Arts and New Media, and the unique History of Consciousness Department are also hosted alongside UCSC’s more traditional academic departments.

University of California, Santa Cruz – Academics

In an effort to cut $13 million, as required by the University of California office of the President and Board of Regents in a decision to cut 10% from the budget of each campus, UCSC nearly eliminated its longstanding and sometimes controversial undergraduate major Community Studies in 2009

Neonatology – Academic training

A neonatologist is a physician (MD or DO) practicing neonatology

Neonatology – Academic training

Neonatal Nurse Practitioners (NNPs) are advanced practice nurses that specialize in neonatal care. They are considered mid-level providers and often share the workload of NICU care with resident physicians. They are able to treat, plan, prescribe, diagnose and perform procedures within their scope of practice, defined by governing law and the hospital where they work.

Carnegie Tech – Academics

Carnegie Mellon’s College of Engineering offers undergraduate and graduate degrees in seven academic departments and two institutes.

Carnegie Tech – Academics

*Department of Electrical and Computer Engineering

Carnegie Tech – Academics

*Department of Engineering and Public Policy

Carnegie Tech – Academics

*Information Networking Institute

Alexander Bain – Academic career

In 1845 he was appointed Professor of Mathematics and Natural Philosophy at Anderson’s University (now the University of Strathclyde) in Glasgow

Alexander Bain – Academic career

In 1860 he was appointed by the British Crown to the inaugural Regius Chair of Logic and the Regius Chair of English Literature at the University of Aberdeen, which was newly formed after the amalgamation of King’s College, Aberdeen and Marischal College by the Scottish Universities Commission of 1858.

AIBO – AIBOs in Education and Academia

AIBOs were used extensively in education. For example, Carnegie Mellon offered an AIBO-centred robotics course covering models of perception, cognition, and action for solving problems.

David Berlinski – Academic career

Berlinski was a research assistant in molecular biology at Columbia University, and was a research fellow at the International Institute for Applied Systems Analysis (IIASA) in Austria and the Institut des Hautes Études Scientifiques (IHES) in France

Dion Forster – Publications: Academic Articles and Papers

* (adapted from Masters Thesis for the Bede Griffiths Trust 2003) – reworked and published as a book in 2007. Please see ‘Christ at the centre…’ below.

Dion Forster – Publications: Academic Articles and Papers

* To bomb or not to bomb? A Christian response to the war on Iraq (an article that was written for, and presented to, the Stellenbosch University Student Christian Association 2002)

Dion Forster – Publications: Academic Articles and Papers

* Spiritual Intelligence, the Ultimate Intelligence (paper presented at the Human Wellness Conference in Stellenbosch 2003)

Dion Forster – Publications: Academic Articles and Papers

* The Same-Sex Debate in the Methodist Church of Southern Africa (paper presented on behalf of the Doctrine Ethics and Worship Commission of the Methodist Church of Southern Africa to the Bishops of the Methodist Church of Southern Africa 2004).

Dion Forster – Publications: Academic Articles and Papers

* Christianity, inclusivity, and homosexuality: An interpretation of responses to the Methodist Church of Southern Africa’s discussion document on same sex relationships. (An elective paper presented for John Wesley College’s 10th Anniversary Conference in September 2004)

Dion Forster – Publications: Academic Articles and Papers

* Three empty promises: Understanding the role of the Church and Theological Education in the Methodist Church of Southern Africa. (presented at Duke Divinity School, Durham, North Carolina, and Garrett Evangelical Seminary, Chicago / Evanston, Illinois – March / April 2005).

Dion Forster – Publications: Academic Articles and Papers

* . (Paper read at the Theological Society of South Africa Annual Conference June 2005).

Dion Forster – Publications: Academic Articles and Papers

* . (Published in ‘Grace and Truth’ – the Journal of Creative reflection of the Catholic Church of South Africa, July 2005).

Dion Forster – Publications: Academic Articles and Papers

* War: A case study in theological reflection (Video presentation at the 19th World Methodist Council in Seoul, South Korea – Theological Education committee).

Dion Forster – Publications: Academic Articles and Papers

* (Paper presented at the South African science and religion Forum – Published in the book The impact of knowledge systems on human development in Africa. du Toit, CW (ed), Pretoria, Research institute for Religion and Theology (University of South Africa) 2007:245-289). ISBN 978-1-86888-454-4.

Dion Forster – Publications: Academic Articles and Papers

* . Pretoria: Doctoral Dissertation, University of South Africa / UNISA.

Dion Forster – Publications: Academic Articles and Papers

* Forster, DA. 2007. Prepared for the Doctrine, Ethics and Worship Commission of the Methodist Church of Southern Africa.

Dion Forster – Publications: Academic Articles and Papers

* More red than green – a response to global warming and the environment from within the Methodist Church of Southern Africa. Forster, DA in The Epworth Review – the Journal of Methodist ecclesiology and mission Vol 35, No 2 (2008).

Dion Forster – Publications: Academic Articles and Papers

* Prophetic witness and social action as holiness in the Methodist Church of Southern Africa’s mission. (Article published in Studia Historiae Ecclesiasticae July 2008:411-434, Vol. XXXIV, No. 1).

Dion Forster – Publications: Academic Articles and Papers

* Hugh Price Hughes annual lecture delivered at , London, March 2009.

Dion Forster – Publications: Academic Articles and Papers

* Forster, DA in Lausanne World Pulse, April 2010. Lausanne Congress on World Evangelization. Wheaton Illinois.

Dion Forster – Publications: Academic Articles and Papers

* The Church has AIDS: Towards a positive theology for an HIV+ Church. Forster, DA in The Epworth Review – the Journal of Methodist ecclesiology and mission Vol 1, No 2, (May, 2010:6-24).

Dion Forster – Publications: Academic Articles and Papers

* Forster, DA in Lausanne World Pulse, June/July 2010. Lausanne Congress on World Evangelization. Wheaton Illinois.

Dion Forster – Publications: Academic Articles and Papers

* Forster, DA (2010) in HTS Teologiese Studies/Theological Studies, 66(1), Art. #731, 12 pages.

Dion Forster – Publications: Academic Articles and Papers

* African relational ontology, individual identity, and Christian theology: An African theological contribution towards an integrated relational ontological identity. Forster, DA in Theology (SPCK) VOL CXIII No 874 July/August (2010:243-253).

Elisabeth Kübler-Ross – Academic career

Kübler-Ross moved to New York in 1958 to work and continue her studies.

Elisabeth Kübler-Ross – Academic career

As she began her psychiatric residency, she was appalled by the hospital treatment of patients in the U.S. who were dying. She began giving a series of lectures featuring terminally ill patients, forcing medical students to face people who were dying.

Elisabeth Kübler-Ross – Academic career

In 1962 she accepted a position at the University of Colorado School of Medicine

Elisabeth Kübler-Ross – Academic career

Her extensive work with the dying led to the book On Death and Dying in 1969. In it, she proposed the now famous Five Stages of Grief as a pattern of adjustment: denial, anger, bargaining, depression, and acceptance. In general, individuals experience most of these stages when faced with their imminent death. The five stages have since been adopted by many as applying to the survivors of a loved one’s death, as well.

Herbert A. Simon – Academic career

From 1939 to 1942, Simon acted as director of a research group at the University of California, Berkeley

Herbert A. Simon – Academic career

In 1949, Simon became a professor of administration and chairman of the Department of Industrial Management at Carnegie Tech (later to become Carnegie Mellon University) (Simon 1991, p. 136). He continued to teach in various departments at Carnegie Mellon, including psychology and computer science, until his death in 2001.

Herbert A. Simon – Academic career

From 1950 to 1955, Simon studied mathematical economics and during this time, together with David Hawkins, discovered and proved the Hawkins–Simon theorem on the “conditions for the existence of positive solution vectors for input-output matrices”.

Herbert A. Simon – Academic career

Simon had a keen interest in the arts. He was a friend of Robert Lepper and Richard Rappaport, and he influenced Lepper’s interest in the impact of the machine on society. Rappaport also painted Simon’s commissioned portrait at Carnegie Mellon University.

Herbert A. Simon – Academic career

In January 2001, Simon underwent surgery at UPMC Presbyterian to remove a cancerous tumor in his abdomen. Although the surgery was successful, Simon later succumbed to the complications that followed.


Exciting Performance News!


Performance

Bloomberg L.P. Company performance

In 2009, Bloomberg L.P. services accounted for a third of the $16 billion global financial data market. At this time, the company had sold 315,000 terminals worldwide. Moreover, the company brought in nearly $7 billion in annual revenue, with 85 percent coming from terminal sales. In 2010, Bloomberg L.P.’s market share stood at 30.3 percent, compared with 25.1 percent in 2005. In 2011, the company had 15,000 employees in 192 locations around the world.

Advanced Encryption Standard Performance

High speed and low RAM requirements were criteria of the AES selection process. Thus AES performs well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.

Advanced Encryption Standard Performance

On a Pentium Pro, AES encryption requires 18 clock cycles per byte, equivalent to a throughput of about 11 MB/s for a 200 MHz processor. On a 1.7 GHz Pentium M throughput is about 60 MB/s.
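The arithmetic behind these figures is simple: throughput is clock rate divided by cycles per byte. A quick sketch (MB here meaning 10^6 bytes, as the quoted figures suggest; the helper name is mine):

```python
def throughput_mb_per_s(clock_hz: float, cycles_per_byte: float) -> float:
    """Convert a cycles-per-byte cost into MB/s at a given clock rate."""
    return clock_hz / cycles_per_byte / 1e6

# 18 cycles/byte on a 200 MHz Pentium Pro -> roughly 11 MB/s
pentium_pro = throughput_mb_per_s(200e6, 18)

# Working backwards, 60 MB/s on a 1.7 GHz Pentium M implies
# roughly 28 cycles per byte.
pentium_m_cpb = 1.7e9 / 60e6
```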

Advanced Encryption Standard Performance

On Intel Core i3/i5/i7 CPUs supporting AES-NI instruction set extensions, throughput can be over 700 MB/s per thread.

Microsoft Office 2007 PerformancePoint Server 2007

Microsoft PerformancePoint Server allows users to monitor, analyze, and plan their business as well as drive alignment, accountability, and actionable insight across the entire organization. It includes features for scorecards, dashboards, reporting, analytics, budgeting and forecasting, among others.

.NET Framework Performance

The garbage collector, which is integrated into the environment, can introduce unanticipated delays of execution over which the developer has little direct control, and it can cause runtime memory size to be larger than expected. “In large applications, the number of objects that the garbage collector needs to deal with can become very large, which means it can take a very long time to visit and rearrange all of them.”

.NET Framework Performance

The .NET Framework currently does not provide support for calling Streaming SIMD Extensions (SSE) via managed code

Explicit Congestion Notification Effects on performance

Since ECN is only effective in combination with an Active Queue Management (AQM) policy, the benefits of ECN depend on the precise AQM being used. A few observations, however, appear to hold across different AQMs.

Explicit Congestion Notification Effects on performance

As expected, ECN reduces the number of packets dropped by a TCP connection, which, by avoiding a retransmission, reduces latency and especially jitter. This effect is most drastic when the TCP connection has a single outstanding segment, when it is able to avoid an RTO timeout; this is often the case for interactive connections (such as remote logins) and transactional protocols (such as HTTP requests, the conversational phase of SMTP, or SQL requests).

Explicit Congestion Notification Effects on performance

Effects of ECN on bulk throughput are less clear, because modern TCP implementations are fairly good at resending dropped segments in a timely manner when the sender’s window is large.

Explicit Congestion Notification Effects on performance

Use of ECN has been found to be detrimental to performance on highly congested networks when using AQM algorithms that never drop packets. Modern AQM implementations avoid this pitfall by dropping rather than marking packets at very high load.
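The mark-versus-drop behaviour can be sketched as a toy queue model (a simplified illustration with invented thresholds, not any specific AQM algorithm): under moderate congestion, ECN-capable packets are marked with Congestion Experienced (CE) instead of being dropped, while at very high load everything is dropped.

```python
from collections import deque

class ToyAQM:
    """Illustrative queue: mark ECN-capable packets under moderate
    congestion, but fall back to dropping at very high load."""

    def __init__(self, mark_threshold=10, drop_threshold=20):
        self.queue = deque()
        self.mark_threshold = mark_threshold
        self.drop_threshold = drop_threshold

    def enqueue(self, packet):
        depth = len(self.queue)
        if depth >= self.drop_threshold:
            return "dropped"            # severe congestion: drop even ECN traffic
        if depth >= self.mark_threshold:
            if packet.get("ect"):       # ECN-capable transport
                packet["ce"] = True     # mark CE instead of dropping
                self.queue.append(packet)
                return "marked"
            return "dropped"            # non-ECN traffic is dropped
        self.queue.append(packet)
        return "enqueued"
```

A real AQM such as RED or CoDel makes probabilistic, delay-based decisions; the hard thresholds here exist only to make the mark/drop distinction visible.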

Burroughs large systems Stack speed and performance

Some of the detractors of the B5000 architecture believed that stack architecture was inherently slow compared to register-based architectures

Burroughs large systems Stack speed and performance

Thus the designers of the current successors to the B5000 systems can optimize in whatever is the latest technique, and programmers do not have to adjust their code for it to run faster – they do not even need to recompile, thus protecting software investment. Some programs have been known to run for years over many processor upgrades. Such a speedup is limited on register-based machines.

Burroughs large systems Stack speed and performance

Another point for speed as promoted by the RISC designers was that processor speed is considerably faster if everything is on a single chip. It was a valid point in the 1970s, when more complex architectures such as the B5000 required too many transistors to fit on a single chip. However, this is not the case today, and every B5000 successor machine now fits on a single chip, along with performance-support techniques such as caches and instruction pipelines.

Burroughs large systems Stack speed and performance

In fact, the A Series line of B5000 successors included the first single chip mainframe, the Micro-A of the late 1980s. This “mainframe” chip (named SCAMP for Single-Chip A-series Mainframe Processor) sat on an Intel-based plug-in PC board.

Comcast Financial performance

The book value of the company nearly doubled from $8.19 a share in 1999 to $15 a share in 2009. Revenues grew sixfold from 1999’s $6 billion to almost $36 billion in 2009. Net profit margin rose from 4.2% in 1999 to 8.4% in 2009, with operating margins improving 31 percent and return on equity doubling to 6.7 percent in the same time span. Between 1999 and 2009, return on capital nearly tripled to 7 percent.

Comcast Financial performance

Comcast reported first quarter 2012 profit increases of 30% due to an increase in high-speed internet customers. In addition to a 7% rate increase on cable services in 2012, Comcast anticipates a double-digit rate increase in 2013.

Java (programming language) Performance

Programs written in Java have a reputation for being slower and requiring more memory than those written in C++

Java (programming language) Performance

Some platforms offer direct hardware support for Java; there are microcontrollers that can run Java in hardware instead of a software Java virtual machine, and ARM based processors can have hardware support for executing Java bytecode through their Jazelle option.

Call stack Performance analysis

Taking samples of the call stack at regular time intervals can be very useful in profiling the performance of programs: if a subroutine’s pointer appears in the call-stack samples many times, it is likely a code bottleneck and should be inspected for performance problems. See Performance analysis and Deep sampling.
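A minimal stack-sampling profiler can be sketched as follows (illustrative only; the interval and helper names are my own). A background thread periodically snapshots the main thread’s stack and counts how often each function appears, so the functions seen most often are the likely bottlenecks.

```python
import sys
import threading
import time
from collections import Counter

def sample_stack(target_thread_id, counts, stop, interval=0.01):
    """Periodically record every function on the target thread's stack."""
    while not stop.is_set():
        frame = sys._current_frames().get(target_thread_id)
        while frame is not None:            # walk the stack top-down
            counts[frame.f_code.co_name] += 1
            frame = frame.f_back
        time.sleep(interval)

def busy_work():
    total = 0
    for i in range(2_000_000):              # deliberately hot loop
        total += i * i
    return total

counts = Counter()
stop = threading.Event()
sampler = threading.Thread(
    target=sample_stack,
    args=(threading.main_thread().ident, counts, stop),
    daemon=True,
)
sampler.start()
busy_work()
stop.set()
sampler.join()
# Functions appearing most often in `counts` are the hottest.
```

Production samplers (e.g. py-spy, perf) use the same idea but sample out-of-process to avoid perturbing the target.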

Computer performance

Computer performance is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used.

Computer performance

Depending on the context, good computer performance may involve one or more of the following:

Computer performance

High throughput (rate of processing work)

Computer performance

Low utilization of computing resource(s)

Computer performance

High availability of the computing system or application

Computer performance Performance metrics

Computer performance metrics include availability, response time, channel capacity, latency, completion time, service time, bandwidth, throughput, relative efficiency, scalability, performance per watt, compression ratio, instruction path length and speedup. CPU benchmarks are available.
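Two of the listed metrics reduce to simple ratios; a sketch with invented sample numbers:

```python
def speedup(old_time_s: float, new_time_s: float) -> float:
    """Speedup: how many times faster the new system completes the work."""
    return old_time_s / new_time_s

def perf_per_watt(ops_per_s: float, watts: float) -> float:
    """Performance per watt: useful work delivered per unit of power."""
    return ops_per_s / watts

# Hypothetical measurements:
s = speedup(120.0, 30.0)        # old run 120 s, new run 30 s -> 4x speedup
ppw = perf_per_watt(2e9, 50.0)  # 2e9 ops/s at 50 W -> 4e7 ops/s per watt
```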

Computer performance Aspect of software quality

Computer software performance, particularly software application response time, is an aspect of software quality that is important in human–computer interactions.

Computer performance Technical and non-technical definitions

The performance of any computer system can be evaluated in measurable, technical terms, using one or more of the metrics listed above. In this way, the performance can be

Computer performance Technical and non-technical definitions

– defined in absolute terms, e.g. for fulfilling a contractual obligation

Computer performance Technical and non-technical definitions

Whilst the above definition relates to a scientific, technical approach, the following definition given by Arnold Allen would be useful for a non-technical audience:

Computer performance Technical and non-technical definitions

The word performance in computer performance means the same thing that performance means in other contexts, that is, it means “How well is the computer doing the work it is supposed to do?”

Computer performance Technical performance metrics

There are a wide variety of technical performance metrics that indirectly affect overall computer performance.

Computer performance Technical performance metrics

Because there are too many programs to test a CPU’s speed on all of them, benchmarks were developed. The most famous benchmarks are the SPECint and SPECfp benchmarks developed by Standard Performance Evaluation Corporation and the ConsumerMark benchmark developed by the Embedded Microprocessor Benchmark Consortium EEMBC.

Computer performance Technical performance metrics

Some important measurements include:

Computer performance Technical performance metrics

Instructions per second – Most consumers pick a computer architecture (normally Intel IA32 architecture) to be able to run a large base of pre-existing, pre-compiled software. Being relatively uninformed on computer benchmarks, some of them pick a particular CPU based on operating frequency (see megahertz myth).

Computer performance Technical performance metrics

FLOPS – The number of floating-point operations per second is often important in selecting computers for scientific computations.

Computer performance Technical performance metrics

Performance per watt – System designers building parallel computers, such as Google, pick CPUs based on their speed per watt of power, because the cost of powering the CPU outweighs the cost of the CPU itself.

Computer performance Technical performance metrics

Some system designers building parallel computers pick CPUs based on the speed per dollar.

Computer performance Technical performance metrics

System designers building real-time computing systems want to guarantee worst-case response. That is easier to do when the CPU has low interrupt latency and a deterministic response, as in a digital signal processor (DSP).

Computer performance Technical performance metrics

Computer programmers who program directly in assembly language want a CPU to support a full-featured instruction set.

Computer performance Technical performance metrics

Low power – For systems with limited power sources (e.g. solar, batteries, human power).

Computer performance Technical performance metrics

Environmental impact – Minimizing the environmental impact of computers during manufacturing, use, and recycling by reducing waste and hazardous materials (see Green computing).

Computer performance Technical performance metrics

Giga-updates per second – a measure of how frequently the RAM can be updated

Computer performance Technical performance metrics

However, sometimes pushing one technical performance metric to an extreme leads to a CPU with worse overall performance, because other important technical performance metrics were sacrificed to get one impressive-looking number—for example, the megahertz myth.

Computer performance Performance Equation

The total amount of time (t) required to execute a particular benchmark program is

t = N × C / f, or equivalently

P = 1/t = f / (N × C), where:

P = 1/t is “the performance” in terms of time-to-execute

N is the number of instructions actually executed (the instruction path length)

f is the clock frequency in cycles per second

C = 1/I is the average cycles per instruction (CPI) for this benchmark

I = 1/C is the average instructions per cycle (IPC) for this benchmark
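A minimal sketch of the performance equation t = N × C / f, using hypothetical benchmark numbers:

```python
def execution_time(n_instructions, cpi, clock_hz):
    """t = N * C / f: seconds to execute a benchmark."""
    return n_instructions * cpi / clock_hz

# Hypothetical benchmark: 2 billion instructions, CPI of 1.25, 2.5 GHz clock.
N, C, f = 2_000_000_000, 1.25, 2.5e9
t = execution_time(N, C, f)
P = 1 / t  # "the performance" in terms of time-to-execute
I = 1 / C  # average instructions per cycle (IPC)
print(t, P, I)  # 1.0 1.0 0.8
```

Halving CPI (e.g. through a better compiler) or doubling the clock frequency each halve t, which is exactly the trade-off space the text describes.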

Computer performance Performance Equation

Even on one machine, a different compiler or the same compiler with different compiler optimization switches can change N and CPI—the benchmark executes faster if the new compiler can improve N or C without making the other worse, but often there is a trade-off between them—is it better, for example, to use a few complicated instructions that take a long time to execute, or to use instructions that execute very quickly, although it takes more of them to execute the benchmark?

Computer performance Performance Equation

For a given instruction set (and therefore fixed N) and semiconductor process, the maximum single-thread performance (1/t) requires a balance between brainiac techniques and speedracer techniques.

Computer monitor Measurements of performance

The performance of a monitor is measured by the following parameters:

Computer monitor Measurements of performance

Luminance is measured in candelas per square meter (cd/m2 also called a Nit).

Computer monitor Measurements of performance

Aspect ratio is the ratio of the horizontal length to the vertical length. Monitors usually have the aspect ratio 4:3, 5:4, 16:10 or 16:9.

Computer monitor Measurements of performance

Viewable image is usually measured diagonally, but the actual widths and heights are more informative since they are not affected by the aspect ratio in the same way. For CRTs, the viewable image is typically 1 in (25 mm) smaller than the tube itself.

Computer monitor Measurements of performance

Display resolution is the number of distinct pixels in each dimension that can be displayed. Maximum resolution is limited by dot pitch.

Computer monitor Measurements of performance

Dot pitch is the distance between subpixels of the same color in millimeters. In general, the smaller the dot pitch, the sharper the picture will appear.

Computer monitor Measurements of performance

Refresh rate is the number of times in a second that a display is illuminated. Maximum refresh rate is limited by response time.

Computer monitor Measurements of performance

Response time is the time a pixel in a monitor takes to go from active (white) to inactive (black) and back to active (white) again, measured in milliseconds. Lower numbers mean faster transitions and therefore fewer visible image artifacts.

Computer monitor Measurements of performance

Contrast ratio is the ratio of the luminosity of the brightest color (white) to that of the darkest color (black) that the monitor is capable of producing.

Computer monitor Measurements of performance

Power consumption is measured in watts.

Computer monitor Measurements of performance

Delta-E: Color accuracy is measured in delta-E; the lower the delta-E, the more accurate the color representation. A delta-E of below 1 is imperceptible to the human eye. Delta-Es of 2 to 4 are considered good and require a sensitive eye to spot the difference.

Computer monitor Measurements of performance

Viewing angle is the maximum angle at which images on the monitor can be viewed, without excessive degradation to the image. It is measured in degrees horizontally and vertically.
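Some of these parameters follow directly from a monitor's resolution and size; a minimal sketch (the 24-inch 1920×1080 panel is a hypothetical example, and pixels per inch is used here as the inverse notion of dot pitch):

```python
import math

def aspect_ratio(width_px, height_px):
    """Reduce a resolution to its aspect ratio, e.g. 1920x1080 -> (16, 9)."""
    g = math.gcd(width_px, height_px)
    return width_px // g, height_px // g

def pixels_per_inch(width_px, height_px, diagonal_in):
    """Pixel density computed from resolution and diagonal viewable size."""
    diagonal_px = math.hypot(width_px, height_px)
    return diagonal_px / diagonal_in

print(aspect_ratio(1920, 1080))                   # (16, 9)
print(round(pixels_per_inch(1920, 1080, 24), 1))  # 91.8
```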

Full text search Performance improvements

The deficiencies of free text searching have been addressed in two ways: By providing users with tools that enable them to express their search questions more precisely, and by developing new search algorithms that improve retrieval precision.

BT Group Financial performance

BT’s financial results have been as follows:

BT Group Financial performance

Year ending | Turnover (£m) | Profit/(loss) before tax (£m) | Net profit/(loss) (£m) | Basic EPS (p)

Belarus Performances

The Belarusian government sponsors annual cultural festivals such as the Slavianski Bazaar in Vitebsk, which showcases Belarusian performers, artists, writers, musicians, and actors. Several state holidays, such as Independence Day and Victory Day, draw big crowds and often include displays such as fireworks and military parades, especially in Vitebsk and Minsk. The government’s Ministry of Culture finances events promoting Belarusian arts and culture both inside and outside the country.

Defragmentation User and performance issues

Improvements in modern hard drives, such as RAM cache, faster platter rotation speeds, command queuing (SCSI TCQ/SATA NCQ), and greater data density, reduce the negative impact of fragmentation on system performance to some degree, though increases in commonly used data quantities offset those benefits.

Defragmentation User and performance issues

When reading data from a conventional electromechanical hard disk drive, the disk controller must first position the head, relatively slowly, to the track where a given fragment resides, and then wait while the disk platter rotates until the fragment reaches the head.

Defragmentation User and performance issues

Since disks based on flash memory have no moving parts, random access of a fragment does not suffer this delay, making defragmentation to optimize access speed unnecessary. Furthermore, since flash memory can be written to only a limited number of times before it fails, defragmentation is actually detrimental (except in the mitigation of catastrophic failure).

Apple DOS Performance improvements and other versions

This was called “blowing a rev” and was a well-understood performance bottleneck in disk systems.

Apple DOS Performance improvements and other versions

While sector 0 was being read and decoded, sector 8 would pass by, so that sector 1, the next sector likely to be needed, would be available without waiting.

Apple DOS Performance improvements and other versions

Unfortunately, the DOS file manager subverted this efficiency by copying bytes read from or written to a file one at a time between the RWTS buffer and main memory, requiring more time and resulting in DOS constantly blowing revs when reading or writing files.

Apple DOS Performance improvements and other versions

This functionality soon appeared in commercial products, such as Pronto-DOS, Diversi-DOS, and David-DOS, along with additional features, but was never used in an official Apple DOS release.

Leadership Performance

To facilitate successful performance it is important to understand and accurately measure leadership performance.

Leadership Performance

For instance, leadership performance may be used to refer to the career success of the individual leader, the performance of the group or organization, or even leader emergence.

Supplier relationship management SRM and supplier performance management

Some confusion may exist over the difference between supplier performance management (SPM) and SRM.

Cloud computing Performance interference and noisy neighbors

This has also led to difficulties in comparing various cloud providers on cost and performance using traditional benchmarks for service and application performance, as the time period and location in which the benchmark is performed can result in widely varied results.

Productivity Production performance

The performance of production measures production’s ability to generate income.

Productivity Production performance

When we want to maximize the production performance we have to maximize the income generated by the production function.

Productivity Production performance

The production performance can be measured as a relative or an absolute income. Expressing performance both in relative (rel.) and absolute (abs.) quantities is helpful for understanding the welfare effects of production. For measurement of the relative production performance, we use the known productivity ratio

Productivity Production performance

Real output / Real input.

Productivity Production performance

The absolute income of performance is obtained by subtracting the real input from the real output as follows:

Productivity Production performance

Real income (abs.) = Real output – Real input

Productivity Production performance

The growth of the real income is the increase of the economic value which can be distributed between the production stakeholders. With the aid of the production model we can perform the relative and absolute accounting in one calculation. Maximizing production performance requires using the absolute measure, i.e. the real income and its derivatives as a criterion of production performance.
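A minimal sketch of the two measures, using hypothetical output and input values:

```python
def productivity_ratio(real_output, real_input):
    """Relative production performance: real output / real input."""
    return real_output / real_input

def real_income(real_output, real_input):
    """Absolute production performance: real output - real input."""
    return real_output - real_input

# Hypothetical accounting period.
real_output, real_input = 120.0, 80.0
print(productivity_ratio(real_output, real_input))  # 1.5
print(real_income(real_output, real_input))         # 40.0
```

Note that a production change can raise the ratio while lowering the absolute income, or vice versa, which is why the text recommends the absolute measure as the criterion.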

Productivity Production performance

The maximum for production performance is the maximum of the real incomes.

Productivity Production performance

The figure above is a somewhat exaggerated depiction, because the whole production function is shown.

Productivity Production performance

Therefore a correct interpretation of a performance change is obtained only by measuring the real income change.

Salesforce.com Sales Performance Accelerator

Salesforce.com is launching a new product called Sales Performance Accelerator. It combines the CRM with the Work.com performance management application as well as customer lead information from Data.com.

Apache HTTP Server Performance

Where compromises in performance need to be made, Apache is designed to reduce latency and increase throughput rather than simply handle more requests, thus ensuring consistent and reliable processing of requests within reasonable time frames.

Apache HTTP Server Performance

This architecture, and the way it was implemented in the Apache 2.4 series, provides for performance equivalent or slightly better than event-based webservers.

C++11 Core language runtime performance enhancements

These language features primarily exist to provide some kind of performance benefit, either of memory or of computational speed.

Proxy server Performance Enhancing Proxies

A proxy that is designed to mitigate specific link related issues or degradations. PEPs (Performance Enhancing Proxies) are typically used to improve TCP performance in the presence of high Round Trip Times (RTTs) and wireless links with high packet loss. They are also frequently used for highly asynchronous links featuring very different upload and download rates.

Dial-up Internet access Performance

Modern dial-up modems typically have a maximum theoretical transfer speed of 56 kbit/s (using the V.90 or V.92 protocol), although in most cases 40–50 kbit/s is the norm. Factors such as phone line noise as well as the quality of the modem itself play a large part in determining connection speeds.

Dial-up Internet access Performance

Some connections may be as low as 20 kbit/s in extremely “noisy” environments, such as in a hotel room where the phone line is shared with many extensions, or in a rural area, many miles from the phone exchange. Other things such as long loops, loading coils, pair gain, electric fences (usually in rural locations), and digital loop carriers can also cripple connections to 20 kbit/s or lower.

Dial-up Internet access Performance

Dial-up connections usually have latency as high as 300 ms or even more; this is longer than for many forms of broadband, such as cable or DSL, but typically less than satellite connections. Longer latency can make online gaming or video conferencing difficult, if not impossible.

Dial-up Internet access Performance

Many modern video games do not even include the option to use dial-up. However, some games such as Everquest, Red Faction, Warcraft 3, Final Fantasy XI, Phantasy Star Online, Guild Wars, Unreal Tournament, Halo: Combat Evolved, Audition, Quake 3: Arena, and Ragnarok Online, are capable of running on 56k dial-up.

Dial-up Internet access Performance

An increasing amount of Internet content such as streaming media will not work at dial-up speeds.

Dial-up Internet access Performance

Analog telephone lines are digitally switched and transported inside a Digital Signal 0 once reaching the telephone company’s equipment. Digital Signal 0 is 64 kbit/s; therefore a 56 kbit/s connection is the highest that will ever be possible with analog phone lines.
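A rough sketch of what these speeds mean for download times (the 80% efficiency factor, modelling line noise and protocol overhead, and the 5 MB payload are assumptions for illustration, not figures from the text):

```python
def transfer_time_s(size_bytes, link_kbit_s=56, efficiency=0.8):
    """Approximate seconds to move a payload over a dial-up link."""
    bits = size_bytes * 8
    effective_bit_s = link_kbit_s * 1000 * efficiency  # ~44.8 kbit/s here
    return bits / effective_bit_s

# A 5 MB file over a nominal 56 kbit/s connection:
seconds = transfer_time_s(5 * 1024 * 1024)
print(round(seconds / 60, 1))  # 15.6 (minutes)
```

The same payload takes well under a minute on even entry-level broadband, which is why streaming media is impractical at dial-up speeds.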

Honeywell Performance Materials and Technologies

Andreas Kramvis is the current President and CEO of the Performance Materials and Technologies division.

General Electric Performance evaluations

In performance evaluations, GE executives focus on one’s ability to balance risk and return and deliver long-term results for shareowners.

Database Performance, security, and availability

Because of the critical importance of database technology to the smooth running of an enterprise, database systems include complex mechanisms to deliver the required performance, security, and availability, and allow database administrators to control the use of these features.

Control chart Performance of control charts

When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred.

Control chart Performance of control charts

It is known that even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits

Control chart Performance of control charts

Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.
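Both the 0.27% false-alarm probability and the corresponding in-control average run length (ARL) follow from the standard normal distribution; a minimal sketch:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Probability that an in-control point falls outside +/- 3-sigma limits.
p_false_alarm = 2.0 * (1.0 - normal_cdf(3.0))
print(round(p_false_alarm * 100, 2))  # 0.27 (percent)

# In-control ARL: expected number of points between false alarms.
arl0 = 1.0 / p_false_alarm
print(round(arl0))  # 370
```

So even a perfectly in-control process triggers a 3-sigma alarm roughly once every 370 points, which is the baseline against which out-of-control ARLs are judged.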

Control chart Performance of control charts

It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases.

Control chart Performance of control charts

Most control charts work best for numeric data under Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g. high-dimensional, mixed numerical and categorical, missing-valued, non-Gaussian, or non-linear data.

Audit – Performance audits

Safety, security, information systems performance, and environmental concerns are increasingly the subject of audits. There are now audit professionals who specialize in security audits and information systems audits. With nonprofit organizations and government agencies, there has been an increasing need for performance audits, examining their success in satisfying mission objectives.

Collaborative method – Performance analysis

Working group: a group where no performance need or opportunity exists that requires a team. Members interact to share information but have specific areas of responsibility and little mutual accountability.

Collaborative method – Performance analysis

Pseudo-team: a group where there could be an existing performance need or opportunity that requires a team but there has not been a focus on collective performance. Interactions between members detract from each individual’s contribution.

Collaborative method – Performance analysis

Potential team: a group where a significant performance need exists and attempts are being made to improve performance. This group typically requires more clarity about purpose, goals or outcomes and needs more discipline.

Collaborative method – Performance analysis

Real team: a group with complementary skills and equal commitment, whose members are mutually accountable.

Collaborative method – Performance analysis

Extraordinary team: a real team that also has a deep commitment for one another’s personal growth and success.

Computer architecture – Performance

Modern computer performance is often described in MIPS per MHz (millions of instructions per second per megahertz of clock speed, in effect instructions per clock cycle).

Computer architecture – Performance

Counting machine language instructions would be misleading because they can do varying amounts of work in different ISAs. The “instruction” in the standard measurements is not a count of the ISA’s actual machine language instructions, but a historical unit of measurement, usually based on the speed of the VAX computer architecture.

Computer architecture – Performance

Historically, many people measured a computer’s speed by the clock rate (usually in MHz or GHz). This refers to the cycles per second of the main clock of the CPU. However, this metric is somewhat misleading, as a machine with a higher clock rate may not necessarily have higher performance. As a result manufacturers have moved away from clock speed as a measure of performance.

Computer architecture – Performance

Other factors influence speed, such as the mix of functional units, bus speeds, available memory, and the type and order of instructions in the programs being run.

Computer architecture – Performance

In a typical home computer, the simplest, most reliable way to speed performance is usually to add random access memory (RAM). More RAM increases the likelihood that needed data or a program is in RAM—so the system is less likely to need to move memory data from the disk. The disk is often ten thousand times slower than RAM because it has mechanical parts that must move to access its data.

Computer architecture – Performance

There are two main types of speed, latency and throughput. Latency is the time between the start of a process and its completion. Throughput is the amount of work done per unit time. Interrupt latency is the guaranteed maximum response time of the system to an electronic event (e.g. when the disk drive finishes moving some data).

Computer architecture – Performance

Performance is affected by a very wide range of design choices — for example, pipelining a processor usually makes latency worse (slower) but makes throughput better. Computers that control machinery usually need low interrupt latencies. These computers operate in a real-time environment and fail if an operation is not completed in a specified amount of time. For example, computer-controlled anti-lock brakes must begin braking within a predictable, short time after the brake pedal is sensed.
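The latency/throughput trade-off of pipelining can be sketched with simple cycle counts (the 5-stage, 1-cycle-per-stage pipeline and the item count are hypothetical):

```python
def unpipelined(items, stage_cycles, n_stages):
    """Each item occupies the whole machine for all stages before the next starts."""
    latency = n_stages * stage_cycles  # per-item latency in cycles
    total = items * latency            # total cycles for the batch
    return latency, total

def pipelined(items, stage_cycles, n_stages):
    """Per-item latency is unchanged, but one item completes per stage time."""
    latency = n_stages * stage_cycles
    total = latency + (items - 1) * stage_cycles
    return latency, total

print(unpipelined(100, 1, 5))  # (5, 500): 100 items, 5 cycles each
print(pipelined(100, 1, 5))    # (5, 104): same latency, ~5x the throughput
```

In a real design, pipeline registers add overhead to each stage, so per-item latency usually gets slightly worse, as the text notes; this idealized model ignores that overhead.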

Computer architecture – Performance

The performance of a computer can be measured using other metrics, depending upon its application domain. A system may be CPU bound (as in numerical calculation), I/O bound (as in a web-serving application) or memory bound (as in video editing). Power consumption has become important in servers and portable devices like laptops.

Computer architecture – Performance

Benchmarking tries to take all these factors into account by measuring the time a computer takes to run through a series of test programs

Central processing unit – Performance

Because of these problems, various standardized tests, often called “benchmarks” for this purpose, such as SPECint, have been developed to attempt to measure the real effective performance in commonly used applications.

Central processing unit – Performance

In practice, however, the performance gain is far less, only about 50%, due to imperfect software algorithms and implementation

Interrupt – Performance issues

Interrupts provide low overhead and good latency at low load, but degrade significantly at high interrupt rates unless care is taken to prevent several pathologies.

Free and open-source graphics device driver – Performance Comparison

A widely known source for performance information is the free3d.org site, which collects 3D performance information—specifically glxgears frame rates—submitted by users. On the basis of what it concedes is an inadequate benchmark, the site currently lists ATI’s Radeon HD 4670 as recommended for “best 3D performance.” Additionally, Phoronix routinely runs benchmarks comparing free driver performance.

Free and open-source graphics device driver – Performance Comparison

A comparison from April 29, 2013 between the FOSS and the proprietary drivers on both AMD and Nvidia hardware was published by Phoronix.

Computer data storage – Performance

Latency: the time it takes to access a particular location in storage. The relevant unit of measurement is typically the nanosecond for primary storage, the millisecond for secondary storage, and the second for tertiary storage. It may make sense to separate read latency and write latency, and in the case of sequential access storage, minimum, maximum and average latency.

Computer data storage – Performance

Throughput: the rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Accessing media sequentially, as opposed to randomly, typically yields maximum throughput.

Computer data storage – Performance

Granularity: the size of the largest “chunk” of data that can be efficiently accessed as a single unit, e.g. without introducing more latency.

Computer data storage – Performance

Reliability: the probability of spontaneous bit value change under various conditions, or the overall failure rate.

Criticism of Linux – Kernel performance

At LinuxCon 2009, Linux creator Linus Torvalds said that the Linux kernel has become “bloated and huge”:

Criticism of Linux – Kernel performance

We’re getting bloated and huge. Yes, it’s a problem … Uh, I’d love to say we have a plan … I mean, sometimes it’s a bit sad that we are definitely not the streamlined, small, hyper-efficient kernel that I envisioned 15 years ago … The kernel is huge and bloated, and our icache footprint is scary. I mean, there is no question about that. And whenever we add a new feature, it only gets worse.

Intel 8086 – Performance

Combined with orthogonalizations of operations versus operand types and addressing modes, as well as other enhancements, this made the performance gain over the 8080 or 8085 fairly significant, despite cases where the older chips may be faster.

Intel 8086 – Performance

Execution times for typical instructions (in clock cycles)

Intel 8086 – Performance

instruction | register-register | register-immediate | register-memory | memory-register | memory-immediate

Intel 8086 – Performance

jump register => 11 ; label => 15 ; condition,label => 16

Intel 8086 – Performance

integer multiply 70~160 (depending on operand data as well as size) including any EA

Intel 8086 – Performance

integer divide 80~190 (depending on operand data as well as size) including any EA

Intel 8086 – Performance

EA = time to compute effective address, ranging from 5 to 12 cycles.

Intel 8086 – Performance

Timings are best case, depending on prefetch status, instruction alignment, and other factors.

Intel 8086 – Performance

As can be seen from these tables, operations on registers and immediates were fast (between 2 and 4 cycles), while memory-operand instructions and jumps were quite slow; jumps took more cycles than on the simple 8080 and 8085, and the 8088 (used in the IBM PC) was additionally hampered by its narrower bus. The reasons why most memory related instructions were slow were threefold:

Intel 8086 – Performance

Loosely coupled fetch and execution units are efficient for instruction prefetch, but not for jumps and random data access (without special measures).

Intel 8086 – Performance

No dedicated address calculation adder was afforded; the microcode routines had to use the main ALU for this (although there was a dedicated segment + offset adder).

Intel 8086 – Performance

The address and data buses were multiplexed, forcing a slightly longer (33~50%) bus cycle than in typical contemporary 8-bit processors.

Intel 8086 – Performance

However, memory access performance was drastically enhanced with Intel’s next generation chips. The 80186 and 80286 both had dedicated address calculation hardware, saving many cycles, and the 80286 also had separate (non-multiplexed) address and data buses.

Linux desktop environments – Performance

The performance of Linux on the desktop has been a controversial topic, with at least one Linux kernel developer, Con Kolivas, accusing the Linux community of favouring performance on servers. He quit Linux development because he was frustrated with this lack of focus on the desktop, and then gave a ‘tell all’ interview on the topic.

Linux desktop environments – Performance

Other sources, such as The Economist, disagree with the assessment that there has not been enough focus on desktop Linux, saying in December 2007:

Linux desktop environments – Performance

…Linux has swiftly become popular in small businesses and the home…That’s largely the doing of Gutsy Gibbon, the code-name for the Ubuntu 7.10 from Canonical. Along with distributions such as Linspire, Mint, Xandros, OpenSUSE and gOS, Ubuntu (and its siblings Kubuntu, Edubuntu and Xubuntu) has smoothed most of Linux’s geeky edges while polishing it for the desktop…It’s now simpler to set up and configure than Windows.

Microkernel – Performance

On most mainstream processors, obtaining a service is inherently more expensive in a microkernel-based system than in a monolithic system.

Microkernel – Performance

L4’s IPC performance is still unbeaten across a range of architectures.

Microkernel – Performance

While these results demonstrate that the poor performance of systems based on first-generation microkernels is not representative of second-generation kernels such as L4, this constitutes no proof that microkernel-based systems can be built with good performance.

Microkernel – Performance

An attempt to build a high-performance multiserver operating system was the IBM Sawmill Linux project.

Microkernel – Performance

It has been shown in the meantime that user-level device drivers can come close to the performance of in-kernel drivers even for such high-throughput, high-interrupt devices as Gigabit Ethernet. This seems to imply that high-performance multi-server systems are possible.

MTV Video Music Award – Performances

1984 Rod Stewart, Madonna, Huey Lewis and the News, David Bowie, Tina Turner, ZZ Top, Ray Parker, Jr.

1985 Eurythmics, David Ruffin & Eddie Kendrick & Hall & Oates, Tears for Fears, John Cougar Mellencamp, Pat Benatar, Sting, Eddie Murphy

1986 Robert Palmer, The Hooters, The Monkees, ‘Til Tuesday, INXS, Van Halen, Mr. Mister, Simply Red, Whitney Houston, Pet Shop Boys, Tina Turner, Genesis

1987 Los Lobos, Bryan Adams, The Bangles, Bon Jovi, Crowded House, Madonna, Whitesnake, Whitney Houston, The Cars, David Bowie, Prince, Cyndi Lauper, Run-D.M.C. (feat. Steven Tyler & Joe Perry)

1988 Rod Stewart, Jody Watley, Aerosmith, Elton John, Depeche Mode, Crowded House, Michael Jackson, Cher, The Fat Boys (feat. Chubby Checker), Guns N’ Roses, INXS

1989 Madonna, Bobby Brown, Def Leppard, Tone-Loc, The Cult, Paula Abdul, Jon Bon Jovi & Richie Sambora, The Cure, Cher, The Rolling Stones, Axl Rose & Tom Petty and the Heartbreakers

1991 Van Halen, C+C Music Factory, Poison, Mariah Carey, EMF, Paula Abdul, Queensrÿche, LL Cool J, Metallica, Don Henley, Guns N’ Roses, Prince and The New Power Generation

1992 The Black Crowes, Bobby Brown, U2 & Dana Carvey, Def Leppard, Nirvana, Elton John, Pearl Jam, Red Hot Chili Peppers, Michael Jackson, Bryan Adams, En Vogue, Eric Clapton, Guns N’ Roses & Elton John

1993 Madonna, Lenny Kravitz (feat. John Paul Jones), Sting, Soul Asylum & Peter Buck & Victoria Williams, Aerosmith, Naughty By Nature, R.E.M., Spin Doctors, Pearl Jam, The Edge, Janet Jackson

1994 Aerosmith, Boyz II Men, The Smashing Pumpkins, The Rolling Stones, Green Day, Beastie Boys, Alexandrov Red Army Ensemble & Leningrad Cowboys, Salt-n-Pepa, Tom Petty and the Heartbreakers, Snoop Doggy Dogg, Stone Temple Pilots, Bruce Springsteen

1996 The Smashing Pumpkins, The Fugees (feat. Nas), Metallica, LL Cool J, Neil Young, Hootie & the Blowfish, Alanis Morissette, Bush, The Cranberries, Oasis, Bone Thugs-N-Harmony, Kiss

1997 Puff Daddy (feat. Faith Evans, 112, Mase & Sting), Jewel, The Prodigy, The Wallflowers (feat. Bruce Springsteen), Lil’ Kim & Da Brat & Missy Elliott & Lisa “Left-Eye” Lopes & Angie Martinez, U2, Beck, Spice Girls, Jamiroquai, Marilyn Manson

1998 Madonna, Pras (feat. Ol’ Dirty Bastard, Mýa, Wyclef Jean & Canibus), Hole, Master P (feat. Silkk Tha Shocker, Mystikal & Mia X), Backstreet Boys, Beastie Boys, Brandy & Monica, Dave Matthews Band, Marilyn Manson, Brian Setzer Orchestra

1999 Kid Rock (feat. Run-DMC, Steven Tyler, Joe Perry & Joe C.), Lauryn Hill, Backstreet Boys, Ricky Martin, Nine Inch Nails, TLC, Fatboy Slim, Jay-Z (feat. DJ Clue & Amil), Britney Spears & ‘N Sync, Eminem & Dr. Dre & Snoop Dogg

2000 Janet Jackson, Rage Against the Machine, Sisqo (feat. Dru Hill), Britney Spears, Eminem, Red Hot Chili Peppers, ‘N Sync, Nelly, Christina Aguilera (feat. Fred Durst), Blink-182

2001 Jennifer Lopez (feat. Ja Rule), Linkin Park & The X-Ecutioners, Alicia Keys, ‘N Sync (feat. Michael Jackson), Daphne Aguilera, Jay-Z, Staind, Missy Elliott (feat. Nelly Furtado, Ludacris & Trina), U2, Britney Spears

2002 Bruce Springsteen & the E Street Band, Pink, Ja Rule & Ashanti & Nas, Shakira, Eminem, P. Diddy (feat. Busta Rhymes, Ginuwine, Pharrell & Usher), Sheryl Crow, The Hives, The Vines, Justin Timberlake (feat. Clipse), Guns N’ Roses

2003 Madonna (feat. Britney Spears, Christina Aguilera & Missy Elliott), Good Charlotte, Christina Aguilera (feat. Redman & Dave Navarro), 50 Cent (feat. Snoop Dogg), Mary J. Blige (feat. Method Man & 50 Cent), Coldplay, Beyoncé (feat. Jay-Z), Metallica

2004 Usher, Jet, Hoobastank, Yellowcard, Kanye West (feat. Chaka Khan & Syleena Johnson), Lil Jon & The East Side Boyz, Ying Yang Twins, Petey Pablo, Terror Squad (feat. Fat Joe), Jessica Simpson, Nelly (feat. Christina Aguilera), Alicia Keys (feat. Lenny Kravitz & Stevie Wonder), The Polyphonic Spree, OutKast

2005 Green Day, Ludacris (feat. Bobby Valentino), MC Hammer, Shakira (feat. Alejandro Sanz), R. Kelly, The Killers, P. Diddy & Snoop Dogg, Don Omar, Tego Calderón, Daddy Yankee, Coldplay, Kanye West (feat. Jamie Foxx), Mariah Carey (feat. Jadakiss & Jermaine Dupri), 50 Cent (feat. Mobb Deep & Tony Yayo), My Chemical Romance, Kelly Clarkson

2006 Justin Timberlake (feat. Timbaland), The Raconteurs, Shakira & Wyclef Jean, Ludacris (feat. Pharrell & Pussycat Dolls), OK Go, The All-American Rejects, Beyoncé, T.I. (feat. Young Dro), Panic! at the Disco, Busta Rhymes, Missy Elliott, Christina Aguilera, Tenacious D, The Killers

2007 Britney Spears, Chris Brown (feat. Rihanna), Linkin Park, Alicia Keys, Timbaland (feat. Nelly Furtado, Sebastian, Keri Hilson & Justin Timberlake)

2008 Rihanna, Jonas Brothers, Lil Wayne (feat. Leona Lewis & T-Pain), Paramore, Pink, T.I. (feat. Rihanna), Christina Aguilera, Kanye West, Katy Perry, Kid Rock (feat. Lil Wayne), The Ting Tings, LL Cool J, Lupe Fiasco

2009 Janet Jackson & This Is It back-up dancers, Katy Perry & Joe Perry, Taylor Swift, Lady Gaga, Green Day, Beyoncé, Muse, Pink, Jay-Z & Alicia Keys

2010 Eminem (feat. Rihanna), Justin Bieber, Usher, Florence and the Machine, Taylor Swift, Drake (feat. Mary J. Blige & Swizz Beatz), B.o.B & Paramore (feat. Bruno Mars), Linkin Park, Kanye West

2011 Lady Gaga (feat. Brian May), Jay-Z & Kanye West, Pitbull (feat. Ne-Yo & Nayer), Adele, Chris Brown, Beyoncé, Young the Giant, Bruno Mars, Lil Wayne

2013 Lady Gaga, Miley Cyrus & Robin Thicke & 2 Chainz & Kendrick Lamar, Kanye West, Justin Timberlake & ‘N Sync, Macklemore & Ryan Lewis (feat. Mary Lambert & Jennifer Hudson), Drake, Bruno Mars, Katy Perry

DEC Alpha – Performance

Perhaps the most obvious trend is that while Intel could always get reasonably close to Alpha in integer performance, in floating-point performance the difference was considerable.

SPEC benchmark performance comparison (using SPECint95 and SPECfp95 results):

System | CPU | MHz | SPECint95 | SPECfp95
1995: Intel Alder system (200 MHz, 256 KB L2) | Pentium Pro | 200 | 8.9 | 6.75
2000: Intel VC820 motherboard | Pentium III | 1000 | 46.8 | 31.9

Emacs – Performance

Modern computers are powerful enough to run GNU Emacs very quickly, although its performance still lags when handling large files on 32-bit systems.

Mach (kernel) – Performance problems

Mach was originally intended to be a replacement for classical monolithic UNIX, and for this reason contained many UNIX-like ideas.

Some of Mach’s more esoteric features were also based on this same IPC mechanism.

Unfortunately, the use of IPC for almost all tasks turned out to have a serious performance impact. Benchmarks on 1997 hardware showed that Mach 3.0-based UNIX single-server implementations were about 50% slower than native UNIX.

Studies showed that the vast majority of this performance hit, 73% by one measure, was due to the overhead of the IPC. And this was measured on a system with a single large server providing the operating system; breaking the operating system down further into smaller servers would only make the problem worse. It appeared that the goal of a collection of servers was simply not possible.

Many attempts were made to improve the performance of Mach and Mach-like microkernels, but by the mid-1990s much of the early intense interest had died. The concept of an operating system based on IPC appeared to be dead, the idea itself flawed.

In fact, further study of the exact nature of the performance problems turned up a number of interesting facts.

When Mach 3 attempted to move most of the operating system into user space, the overhead became higher still: benchmarks between Mach and Ultrix on a MIPS R3000 showed a performance hit as great as 67% on some workloads.

For example, getting the system time involves an IPC call to the user-space server maintaining the system clock.
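
The cost gap can be illustrated with a toy experiment: compare reading the time via a plain in-process call with asking a separate "clock server" process over a pipe. This Python sketch illustrates the general phenomenon only, not Mach itself; all names are invented for the example.

```python
import time
from multiprocessing import Pipe, Process

def clock_server(conn):
    # A user-space "clock server": answers each request with the current time.
    while True:
        if conn.recv() == "quit":
            break
        conn.send(time.time())

def compare_costs(n=200):
    # Direct call: analogous to reading the clock inside the kernel.
    t0 = time.perf_counter()
    for _ in range(n):
        time.time()
    direct = time.perf_counter() - t0

    # IPC round trip: every request crosses a process boundary twice.
    parent_end, child_end = Pipe()
    server = Process(target=clock_server, args=(child_end,))
    server.start()
    t0 = time.perf_counter()
    for _ in range(n):
        parent_end.send("time")
        parent_end.recv()
    ipc = time.perf_counter() - t0
    parent_end.send("quit")
    server.join()
    return direct, ipc
```

On typical hardware the IPC loop is orders of magnitude slower than the direct loop, which is the effect the Mach benchmarks exposed at system-call granularity.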

Instead, they had to use a single one-size-fits-all solution, which added to the performance problems.

Other performance problems were related to Mach’s support for multiprocessor systems. From the mid-1980s to the early 1990s, commodity CPUs grew in performance at a rate of about 60% a year, but the speed of memory access grew at only 7% a year. This meant that the cost of accessing memory grew tremendously over this period, and since Mach was based on mapping memory around between programs, any “cache miss” made IPC calls slow.
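
The compounding effect of those two growth rates is easy to quantify; a small sketch using the 60% and 7% annual figures from the text:

```python
def relative_memory_penalty(years, cpu_growth=0.60, mem_growth=0.07):
    # How much the cost of a memory access grows relative to CPU speed
    # after `years` of compound growth at the stated annual rates.
    return ((1 + cpu_growth) / (1 + mem_growth)) ** years

# Over the decade described above, the relative cost of a cache miss
# grows by roughly a factor of 55: relative_memory_penalty(10) ~ 55.9
```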

Regardless of the advantages of the Mach approach, these sorts of real-world performance hits were simply not acceptable. As other teams found the same sorts of results, the early Mach enthusiasm quickly disappeared. After a short time many in the development community seemed to conclude that the entire concept of using IPC as the basis of an operating system was inherently flawed.

Kernel (computing) – Performance

Monolithic kernels are designed to have all of their code in the same address space (kernel space), which some developers argue is necessary to increase the performance of the system. Some developers also maintain that monolithic systems are extremely efficient if well-written. The monolithic model tends to be more efficient through the use of shared kernel memory, rather than the slower IPC system of microkernel designs, which is typically based on message passing.

Studies that empirically measured the performance of these microkernels did not analyze the reasons for this inefficiency.

In fact, as conjectured in 1995, the reasons for the poor performance of microkernels could equally have been: (1) an actual inefficiency of the whole microkernel approach, (2) the particular concepts implemented in those microkernels, or (3) the particular implementation of those concepts. It therefore remained to be studied whether the solution for building an efficient microkernel was, unlike previous attempts, to apply the correct construction techniques.

On the other hand, the hierarchical protection domains architecture that leads to the design of a monolithic kernel has a significant performance drawback each time there is an interaction between different levels of protection (i.e., when a process has to manipulate a data structure both in ‘user mode’ and in ‘supervisor mode’), since this requires message copying by value.

By the mid-1990s, most researchers had abandoned the belief that careful tuning could reduce this overhead dramatically, but recently newer microkernels optimized for performance, such as L4 and K42, have addressed these problems.

Extract, transform, load – Performance

ETL vendors benchmark their record systems at multiple TB (terabytes) per hour (roughly 1 GB per second) using powerful servers with multiple CPUs, multiple hard drives, multiple gigabit network connections, and plenty of memory. The fastest ETL record is currently held by Syncsort, Vertica and HP at 5.4 TB in under an hour, more than twice as fast as the earlier record held by Microsoft and Unisys.

In real life, the slowest part of an ETL process usually occurs in the database load phase. Databases may perform slowly because they have to take care of concurrency, integrity maintenance, and indices. Thus, for better performance, it may make sense to employ:

the Direct Path Extract method or bulk unload whenever possible (instead of querying the database), to reduce the load on the source system while getting a high-speed extract;

most of the transformation processing outside of the database;

bulk load operations whenever possible.

Still, even using bulk operations, database access is usually the bottleneck in the ETL process. Some common methods used to increase performance are:

Partition tables (and indices): try to keep partitions similar in size (watch for null values that can skew the partitioning).

Do all validation in the ETL layer before the load; disable integrity checking (disable constraint …) in the target database tables during the load.

Disable triggers (disable trigger …) in the target database tables during the load; simulate their effect as a separate step.

Use parallel bulk loads when possible; this works well when the table is partitioned or there are no indices. Note that attempting parallel loads into the same table (or partition) usually causes locks, if not on the data rows then on the indices.

If a requirement exists to do insertions, updates, or deletions, find out which rows should be processed in which way in the ETL layer, and then process these three operations in the database separately. You can often do a bulk load for inserts, but updates and deletes commonly go through an API (using SQL).
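
The split into insert, update, and delete sets can be sketched in a few lines. Here `incoming` and `existing` are hypothetical dicts keyed by the business key; real ETL tools would do this comparison against staging tables.

```python
def classify_rows(incoming, existing):
    """Split incoming rows into inserts, updates and deletes.

    `incoming` and `existing` map a business key to a row.
    (Illustrative sketch only.)
    """
    inserts = {k: v for k, v in incoming.items() if k not in existing}
    updates = {k: v for k, v in incoming.items()
               if k in existing and existing[k] != v}
    deletes = [k for k in existing if k not in incoming]
    return inserts, updates, deletes
```

Each of the three sets can then be sent down its own path: bulk load for the inserts, and the usual SQL API for the updates and deletes.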

Whether to do certain operations in the database or outside may involve a trade-off. For example, removing duplicates using DISTINCT may be slow in the database; thus, it makes sense to do it outside. On the other hand, if using DISTINCT significantly (say, a hundredfold) decreases the number of rows to be extracted, then it makes sense to remove duplicates as early as possible, in the database, before unloading the data.

A common source of problems in ETL is a large number of dependencies among ETL jobs. For example, job “B” cannot start while job “A” is not finished. Better performance can usually be achieved by visualizing all processes on a graph and trying to reduce that graph, making maximum use of parallelism and making “chains” of consecutive processing as short as possible. Again, partitioning big tables and their indices can really help.
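
Reducing such a dependency graph to parallel batches is a topological-sort problem; a minimal sketch using the Python standard library (the job names are invented for illustration):

```python
from graphlib import TopologicalSorter  # Python 3.9+

# ETL job dependencies: each job maps to the set of jobs it waits for.
deps = {
    "load_dim": {"extract"},
    "load_fact": {"extract", "load_dim"},
    "report": {"load_fact"},
}

ts = TopologicalSorter(deps)
ts.prepare()
batches = []
while ts.is_active():
    ready = list(ts.get_ready())   # jobs with no unmet dependencies
    batches.append(sorted(ready))  # everything in one batch can run in parallel
    ts.done(*ready)
```

Each batch could be dispatched to a pool of workers; the chain here is unavoidably sequential, which is exactly the kind of structure one tries to shorten.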

Another common issue occurs when the data is spread among several databases and processing is done in those databases sequentially. Sometimes database replication may be involved as a method of copying data between databases, and this can significantly slow down the whole process. The common solution is to reduce the processing graph to only three layers: sources, a central ETL layer, and targets.

This allows processing to take maximum advantage of parallel processing. For example, if you need to load data into two databases, you can run the loads in parallel (instead of loading into the first and then replicating into the second).

Sometimes processing must take place sequentially. For example, dimensional (reference) data is needed before one can get and validate the rows for the main “fact” tables.

Surrogate key – Performance

Surrogate keys tend to be a compact data type, such as a four-byte integer. This allows the database to query the single key column faster than it could multiple columns. Furthermore, a non-redundant distribution of keys causes the resulting B-tree index to be completely balanced. Surrogate keys are also less expensive to join on (fewer columns to compare) than compound keys.
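
A minimal sketch of the pattern using SQLite (table and column names are invented): the dimension gets a compact integer surrogate key, and the fact table joins on that single column instead of on the compound natural key.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Dimension with a compact integer surrogate key; the natural
    -- (compound) key survives only as ordinary attributes.
    CREATE TABLE customer (
        customer_sk INTEGER PRIMARY KEY,   -- surrogate key
        source_system TEXT,
        source_id TEXT
    );
    CREATE TABLE sale (
        sale_id INTEGER PRIMARY KEY,
        customer_sk INTEGER REFERENCES customer(customer_sk),
        amount REAL
    );
""")
con.execute("INSERT INTO customer VALUES (1, 'crm', 'C-001')")
con.execute("INSERT INTO sale VALUES (10, 1, 99.5)")

# The join compares a single integer column instead of two text columns.
row = con.execute("""
    SELECT c.source_id, s.amount
    FROM sale s JOIN customer c USING (customer_sk)
""").fetchone()
```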

Design by contract – Performance implications

Contract conditions should never be violated during execution of a bug-free program. Contracts are therefore typically only checked in debug mode during software development. Later at release, the contract checks are disabled to maximize performance.

In many programming languages, contracts are implemented with assert statements. Asserts are compiled away in release mode by default in C/C++, and similarly deactivated in C#/Java, which effectively eliminates the run-time cost of contracts in release builds.
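
A minimal illustration in Python: `assert` statements express pre- and postconditions during development, and running the interpreter with `-O` strips them, mirroring a release build. The `withdraw` function is a made-up example.

```python
def withdraw(balance, amount):
    # Preconditions: checked only while asserts are enabled (debug mode).
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: insufficient funds"
    new_balance = balance - amount
    # Postcondition: the guarantee the caller can rely on.
    assert new_balance == balance - amount
    return new_balance
```

Running `python -O script.py` compiles the asserts away, so a release run pays no contract-checking cost, just as the `NDEBUG` convention does in C/C++.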

Performance

Performance measurement is the process of collecting, analyzing and/or reporting information regarding the performance of an individual, group, organization, system or component.

The means of expressing appreciation can vary by culture. Chinese performers will clap with the audience at the end of a performance; the return applause signals “thank you” to the audience. In Japan, folk performing-arts performances commonly attract individuals who take photographs, sometimes coming up on stage within inches of the performers’ faces.

Sometimes the dividing line between performer and the audience may become blurred, as in the example of “participatory theatre”, where audience members get involved in the production.

Theatrical performances can take place daily or at some other regular interval. Performances can take place at designated performance spaces (such as a theatre or concert hall), or in a non-conventional space, such as a subway station, on the street, or in someone’s home.

Performance – Performance genres

Examples of performance genres include:

Music performance (a concert or a recital) may take place indoors in a concert hall or outdoors in a field, and may require the audience to remain very quiet, or encourage them to sing and dance along with the music.

A performance may also describe the way in which an actor performs. In a solo capacity, it may also refer to a mime artist, comedian, conjurer, or other entertainer.

Performance – Live performance event support overview

Live performance events have a long history of using visual scenery, lighting, and costume, and a shorter history of using visual projection and sound amplification and reinforcement.

Fast Infoset – Performance

Because Fast Infosets are compressed as part of the XML generation process, they are much faster than using Zip-style compression algorithms on an XML stream, although they can produce slightly larger files.

SAX-type parsing performance of Fast Infoset is also much faster than parsing performance of XML 1.0, even without any Zip-style compression. Typical increases in parsing speed observed for the reference Java implementation are a factor of 10 compared to Java Xerces, and a factor of 4 compared to the Piccolo driver (one of the fastest Java-based XML parsers).

Flashlight – Performance standards

The United States Army former standard MIL-F-3747E described the performance standard for plastic flashlights using two or three D cell dry batteries, in either straight or angle form, and in standard, explosion-proof, heat-resistant, traffic direction, and inspection types. The standard described only incandescent lamp flashlights and was withdrawn in 1996.

In the United States, ANSI in 2009 published the FL1 flashlight basic performance standard.

The FL1 standard requires measurements reported on the packaging to be made with the type of batteries packaged with the flashlight, or with an identified type of battery.

The working distance is defined as the distance at which the maximum light falling on a surface (illuminance) would fall to 0.25 lux.
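
That definition is the inverse-square law solved for distance: with a peak beam intensity I in candela, the illuminance at distance d is E = I / d², so the rated beam distance is the d at which E reaches 0.25 lux. A quick sketch:

```python
import math

def fl1_beam_distance(peak_candela):
    # FL1 "beam distance": the range at which peak illuminance falls to
    # 0.25 lux, assuming the inverse-square law E = I / d**2.
    return math.sqrt(peak_candela / 0.25)

# e.g. a light with a 10,000 cd peak is rated for a 200 m beam distance.
```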

Run time is measured using the supplied or specified batteries, letting the light run until the intensity of the beam has dropped to 10% of its value 30 seconds after switching on.

Impact resistance is measured by dropping the flashlight in six different orientations and observing that it still functions and has no large cracks or breaks; the height used in the test is reported.

The consumer must decide how well the ANSI test conditions match their requirements, but all flashlights tested to the FL1 standard can be compared on a uniform basis.

ANSI standard FL1 does not specify measurement of the beam width angle, but the candela intensity and total lumen ratings can be used by the consumer to assess the beam characteristics.

Emirates (airline) – Financial and operational performance

In the financial year 2011–12, Emirates generated revenues of around AED 62 billion, an increase of approximately 15% over the previous year’s revenues of AED 54 billion.

As of March 2012, Emirates did not use fuel price hedging. Fuel was 45% of total costs and was expected to come to $1.7 billion in the year ending 31 March 2012.

In November 2013, Emirates announced its half-year profits, showing a good performance despite high fuel prices and global economic pressure. For the first six months of the fiscal year, revenues reached AED 42.3 billion, an increase of 13% from 2012.

The airline was the seventh-largest airline in the world in terms of international passengers carried, and the largest in the world in terms of scheduled international passenger-kilometres flown. It is also the seventh-largest in terms of scheduled freight tonne-kilometres flown (sixth in scheduled international freight tonne-kilometres flown).

Year ended | Passengers flown (thousands) | Cargo carried (thousands) | Turnover (AED m) | Expenditure (AED m) | Net profit(+)/loss(-) (AED m)

Online advertising – Other performance-based compensation

CPA (Cost Per Action or Cost Per Acquisition) or PPP (Pay Per Performance) advertising means the advertiser pays for the number of users who perform a desired activity, such as completing a purchase or filling out a registration form. Performance-based compensation can also incorporate revenue sharing, where publishers earn a percentage of the advertiser’s profits made as a result of the ad. Performance-based compensation shifts the risk of failed advertising onto publishers.

Heat sink – Methods to determine performance

This section will discuss the aforementioned methods for the determination of the heat sink thermal performance.

Storage virtualization – Performance and scalability

In some implementations the performance of the physical storage can actually be improved, mainly due to caching.

Due to the nature of virtualization, the mapping of logical to physical addresses requires some processing power and lookup tables. Therefore, every implementation adds some small amount of latency.

In addition to response time concerns, throughput has to be considered. The bandwidth into and out of the meta-data lookup software directly impacts the available system bandwidth. In asymmetric implementations, where the meta-data lookup occurs before the information is read or written, bandwidth is less of a concern, as the meta-data are a tiny fraction of the actual I/O size. In-band, symmetric flow-through designs are directly limited by their processing power and connectivity bandwidth.
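
These latency and caching considerations come down to one extra lookup per I/O. A toy sketch of a logical-to-physical map with a cache in front of it (the structure is invented purely for illustration):

```python
class VirtualVolume:
    """Toy logical-to-physical block mapping with a small cache.

    Illustrates why virtualization adds a lookup per I/O and why
    caching the map can hide most of that latency. (Sketch only.)
    """
    def __init__(self, mapping):
        self.mapping = mapping   # logical block -> (device, physical block)
        self.cache = {}
        self.cache_hits = 0
        self.lookups = 0

    def resolve(self, logical_block):
        self.lookups += 1
        if logical_block in self.cache:
            self.cache_hits += 1
            return self.cache[logical_block]
        physical = self.mapping[logical_block]   # the "meta-data lookup"
        self.cache[logical_block] = physical
        return physical
```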

Most implementations provide some form of scale-out model, where the inclusion of additional software or device instances provides increased scalability and potentially increased bandwidth. The performance and scalability characteristics are directly influenced by the chosen implementation.

Hardware random number generator – Performance test

Hardware random number generators should be constantly monitored for proper operation. RFC 4086, FIPS Pub 140-2 and NIST Special Publication 800-90b include tests which can be used for this. Also see the documentation for the New Zealand cryptographic software library cryptlib.

Since many practical designs rely on a hardware source as an input, it will be useful at least to check that the source is still operating.

Industrial and organizational psychology – Performance appraisal/management

Performance management may also include documenting and tracking performance information for organization-level evaluation purposes.

Additionally, the I–O psychologist may consult with the organization on ways to use the performance appraisal information for broader performance management initiatives.

Industrial and organizational psychology – Job performance

Job performance is about behaviors that are within the control of the employee and not about results (effectiveness), the costs involved in achieving results (productivity), the results that can be achieved in a period of time (efficiency), or the value an organization places on a given level of performance, effectiveness, productivity or efficiency (utility).

Here, in-role performance was reflected through how well “employees met their performance expectations and performed well at the tasks that made up the employees’ job.” The extra-role category comprised dimensions such as how well the employee assists others with their work for the benefit of the group, whether the employee voices new ideas for projects or changes to procedure, and whether the employee attends functions that help the group.

These factors include errors in job measurement techniques, acceptance and justification of poor performance, and lack of importance of individual performance.

The interplay between these factors shows that an employee may, for example, have a low level of declarative knowledge but still achieve a high level of performance if the employee has high levels of procedural knowledge and motivation.

Further, an expanding area of research into the determinants of job performance includes emotional intelligence.

Conscientiousness – Academic and workplace performance

Furthermore, conscientiousness is the only personality trait that correlates with performance across all categories of jobs.

Moore’s law – Transistor count versus computing performance

The exponential processor transistor growth predicted by Moore does not always translate into exponentially greater practical CPU performance.

Another source of improved performance is microarchitecture techniques exploiting the growth of the available transistor count. These increases are empirically described by Pollack’s rule, which states that performance increases due to microarchitecture techniques scale with the square root of the number of transistors (or the area) of a processor.
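
Pollack's rule is simple enough to state as code; a sketch:

```python
import math

def pollack_speedup(transistor_ratio):
    # Pollack's rule: single-core performance gained from microarchitecture
    # improvements scales roughly as the square root of the increase in
    # transistor count (or die area) devoted to the core.
    return math.sqrt(transistor_ratio)

# Quadrupling a core's transistor budget buys only about 2x performance,
# which is one reason designers turned to multiple cores instead.
```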

Viewed even more broadly, the speed of a system is often limited by factors other than processor speed, such as internal bandwidth and storage speed, and one can judge a system’s overall performance based on factors other than speed, like cost efficiency or electrical efficiency.

Group cohesiveness – Group Performance

Group performance, like exclusive entry, increases the value of group membership to its members and influences members to identify more strongly with the team and to want to be actively associated with it.

Group cohesiveness – Cohesion and Performance

In general, cohesion defined in all these ways was positively related with performance.

There is some evidence that cohesion may be more strongly related to performance for groups that have highly interdependent roles than for groups in which members are independent.

With regard to group productivity, attraction and group pride may not be enough; task commitment is necessary in order to be productive. Furthermore, groups with high performance goals were extremely productive.

Expectancy theory – Expectancy: Effort → Performance (E→P)

Control is one’s perceived control over performance.

Expectancy theory – Instrumentality: Performance → Outcome (P→O)

Instrumentality is the belief that a person will receive a reward if the performance expectation is met. This reward may come in the form of a pay increase, promotion, recognition or sense of accomplishment. Instrumentality is low when the reward is the same for all performances given.

Instrumentality is increased when formalized policies associate rewards to performance.
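
Vroom's expectancy model is often summarized numerically as motivational force = expectancy × instrumentality × valence; a sketch (the scales are illustrative):

```python
def motivational_force(expectancy, instrumentality, valence):
    # Vroom's multiplicative model: if any factor is zero, motivation
    # collapses to zero. Expectancy and instrumentality are judged on
    # a 0..1 scale; valence reflects how desirable the outcome is.
    return expectancy * instrumentality * valence

# Uniform rewards regardless of performance push instrumentality toward
# zero, and with it the motivational force.
```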

Gbridge – Performance

Gbridge claims to establish a direct link between computers, even behind NAT, by some form of UDP hole punching. When a direct link is impossible, it relays the encrypted data through a gbridge server, which has an impact on network performance. It is also noted that AutoSync traffic is never relayed.

The LiveBrowse feature works reasonably well for picture-heavy folders and for MP3 online play over standard DSL. FLV online play is sometimes a little choppy, because the bitrate of most FLV files is very close to the uplink speed limit of a standard DSL line (300 kbit/s).

ISO 14000 – Act – take action to improve performance of EMS based on results

After the checking stage, a management review is conducted to ensure that the objectives of the EMS are being met (and to what extent), that communications are being appropriately managed, and to evaluate changing circumstances, such as legal requirements, in order to make recommendations for further improvement of the system (Standards Australia/Standards New Zealand 2004).

Head-mounted display – Performance parameters

Ability to show stereoscopic imagery

Interpupillary distance (IPD): the distance between the two eyes, measured at the pupils, which is important in designing head-mounted displays.

Field of view (FOV): humans have an FOV of around 180°, but most HMDs offer considerably less than this.

Resolution: HMDs usually specify either the total number of pixels or the number of pixels per degree.

Binocular overlap: the area that is common to both eyes.

Distant focus (‘collimation’): optical techniques may be used to present the images at a distant focus, which seems to improve the realism of images that in the real world would be at a distance.

On-board processing and operating system: some HMD vendors offer on-board operating systems such as Android, allowing applications to run locally on the HMD and eliminating the need to be tethered to an external device to generate video. These are sometimes referred to as Smart Goggles.

Ferroelectric RAM – Performance

DRAM performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing). In general, this ends up being defined by the capability of the control transistors, the capacitance of the lines carrying power to the cells, and the heat that power generates.

Ferroelectric RAM – Performance

FeRAM is based on the physical movement of atoms in response to an external field, which happens to be extremely fast, settling in about 1 ns

Ferroelectric RAM – Performance

In comparison to flash, the advantages are much more obvious. Whereas the read operation is likely to be similar in performance, the charge pump used for writing requires a considerable time to “build up” current, a process that FeRAM does not need. Flash memories commonly need a millisecond or more to complete a write, whereas current FeRAMs may complete a write in less than 150 ns.

Ferroelectric RAM – Performance

The theoretical performance of FeRAM is not entirely clear. Existing 350 nm devices have read times on the order of 50-60 ns. Although this is slow compared to modern DRAMs, which can be found with read times on the order of 2 ns, common 350 nm DRAMs operated with a read time of about 35 ns, so FeRAM performance appears to be comparable given the same fabrication technology.

Magnetoresistive random-access memory – Performance

DRAM performance is limited by the rate at which the charge stored in the cells can be drained (for reading) or stored (for writing)

Magnetoresistive random-access memory – Performance

SRAM's large cell size makes it expensive, which is why it is used only for small amounts of high-performance memory, notably the CPU cache in almost all modern CPU designs.

Magnetoresistive random-access memory – Performance

Although MRAM is not quite as fast as SRAM, it is close enough to be interesting even in this role. Given its much higher density, a CPU designer may be inclined to use MRAM to offer a much larger but somewhat slower cache, rather than a smaller but faster one. It remains to be seen how this trade-off will play out in the future.
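The larger-but-slower-cache trade-off can be sketched with the standard average-access-time formula. The hit rates and latencies below are purely illustrative assumptions, not measured SRAM or MRAM figures:

```python
# Sketch of the cache trade-off described above: a small, fast SRAM
# cache vs a larger, slower MRAM cache in front of the same main memory.
# All latencies and hit rates are hypothetical illustration values.

def avg_access_time(hit_rate: float, cache_ns: float, miss_penalty_ns: float) -> float:
    """Average access time = cache latency + miss_rate * miss penalty."""
    return cache_ns + (1 - hit_rate) * miss_penalty_ns

# Small, fast SRAM cache: 1 ns access, 90% hit rate.
sram = avg_access_time(0.90, 1.0, 50.0)
# Larger, slower MRAM cache: 3 ns access, but 97% hits from extra capacity.
mram = avg_access_time(0.97, 3.0, 50.0)

print(f"SRAM avg: {sram:.1f} ns, MRAM avg: {mram:.1f} ns")  # 6.0 vs 4.5
```

Under these assumed numbers the denser cache wins despite its slower cells, which is exactly the trade-off a CPU designer would weigh.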

Satellite Internet access – 2013 FCC report cites big jump in satellite performance

In its report released in February 2013, the Federal Communications Commission noted significant advances in satellite Internet performance. The FCC’s Measuring Broadband America report also ranked the major ISPs by how close they came to delivering on advertised speeds. In this category, satellite Internet topped the list, with 90% of subscribers seeing speeds of 140% of the advertised rate or better.
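The FCC metric is simply measured speed as a percentage of advertised speed. A trivial sketch, with hypothetical plan numbers:

```python
# Measured speed as a percentage of advertised speed, the metric used
# in the FCC ranking above. Example figures are hypothetical.

def pct_of_advertised(measured_mbps: float, advertised_mbps: float) -> float:
    return 100 * measured_mbps / advertised_mbps

# A plan advertised at 12 Mbps that measures 16.8 Mbps:
print(f"{pct_of_advertised(16.8, 12.0):.0f}% of advertised speed")
```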

Adaptive performance

In previous literature, Pulakos and colleagues established eight dimensions of adaptive performance

Adaptive performance – Dimensions

Handling emergencies and crisis situations: making quick decisions when faced with an emergency

Adaptive performance – Dimensions

Handling stress in the workforce: remaining composed and focused on the task at hand when dealing with high-demand tasks

Adaptive performance – Dimensions

Creative problem solving: thinking beyond established boundaries, and innovatively, to solve a problem

Adaptive performance – Dimensions

Dealing with uncertain and unpredictable work situations: remaining productive despite the occurrence of unknown situations

Adaptive performance – Dimensions

Learning and manipulating new technology, tasks, and procedures: approaching new methods and technological constructs in order to accomplish a work task

Adaptive performance – Dimensions

Demonstrating cultural adaptability: being respectful and considerate of different cultural backgrounds

Adaptive performance – Dimensions

Demonstrating physically oriented adaptability: physically adjusting oneself to better fit the surrounding environment

Adaptive performance – Measurement

Therefore, there is a difference between the I-ADAPT-M and the JAI, which measures adaptive performance as behaviors

Adaptive performance – Work stress and adaptive performance

Not only can work stress predict adaptive performance to a considerable extent; there is also considerable overlap between adaptive performance and stress coping.

Adaptive performance – Stress appraisal

Challenging rather than threatening appraisals would lead to higher levels of self-efficacy, and thus benefit individuals’ adaptive performance.

Adaptive performance – Stress coping

Therefore, adaptive performance is more likely to contain such behaviors in stressful situations.

Adaptive performance – Definition of team adaptive performance

Team adaptive performance also has different antecedents compared with individual adaptive performance.

Adaptive performance – Predictors of team adaptive performance

Team learning climate also displays a significant, positive relationship with team adaptive performance.

Adaptive performance – Leadership and adaptive performance

Adaptive performance in leadership is valued by employers because an employee who displays those two characteristics tends to exemplify and motivate adaptive behavior within other individuals in the workforce.

Adaptive performance – Transformational leadership and adaptive performance

This particular leadership style has also been shown to motivate increased performance and adaptability in employees

Adaptive performance – Leadership and adaptive decision making

When a leader displays adaptive performance in decision making, they show awareness of the situation, leading to new actions and strategies that reestablish fit and effectiveness

Software testing – Software performance testing

Performance testing is generally executed to determine how a system or sub-system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Software testing – Software performance testing

Load testing is primarily concerned with testing that the system can continue to operate under a specific load, whether that be large quantities of data or a large number of users

Software testing – Software performance testing

There is little agreement on what the specific goals of performance testing are. The terms load testing, performance testing, scalability testing, and volume testing, are often used interchangeably.

Software testing – Software performance testing

Real-time software systems have strict timing constraints. To test if timing constraints are met, real-time testing is used.
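The core of such a timing test can be sketched as: run the operation, measure elapsed time, and assert it meets the deadline. The `respond` function and the 0.5 s constraint below are hypothetical stand-ins for a real system under test:

```python
# Minimal timing-constraint check in the spirit of real-time testing:
# run an operation under test and assert it completes within a deadline.
import time

def respond(n: int) -> int:
    """Hypothetical stand-in for the system under test."""
    return sum(range(n))

DEADLINE_S = 0.5  # hypothetical timing constraint

start = time.perf_counter()
result = respond(100_000)
elapsed = time.perf_counter() - start

assert elapsed < DEADLINE_S, f"timing constraint violated: {elapsed:.3f}s"
print(f"completed in {elapsed * 1000:.2f} ms, within the {DEADLINE_S}s deadline")
```

Real performance harnesses repeat such measurements many times and report percentiles, since a single run says little about stability under load.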

Standard RAID levels – Performance

Note that these are best case performance scenarios with optimal access patterns.

Standard RAID levels – Performance (speed)

RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer’s storage architecture – in software, in firmware, or using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID 5 system with one fewer drive (the same number of data drives).
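The capacity side of that comparison is simple arithmetic: RAID 6 reserves two drives' worth of space for parity, RAID 5 one. A short sketch:

```python
# Usable-capacity arithmetic for the RAID levels discussed above.
# RAID 6 stores two parity blocks per stripe, RAID 5 stores one.

def raid6_usable(drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 6 array: two drives' worth goes to parity."""
    assert drives >= 4, "RAID 6 needs at least 4 drives"
    return (drives - 2) * drive_tb

def raid5_usable(drives: int, drive_tb: float) -> float:
    """Usable capacity of a RAID 5 array: one drive's worth goes to parity."""
    assert drives >= 3, "RAID 5 needs at least 3 drives"
    return (drives - 1) * drive_tb

# An 8-drive RAID 6 of 4 TB disks vs a 7-drive RAID 5 of the same disks:
print(raid6_usable(8, 4.0))  # 24.0 TB usable
print(raid5_usable(7, 4.0))  # 24.0 TB -- same data-drive count, as noted above
```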

Comparison of programming paradigms – Performance comparison

Purely in terms of total instruction path length, a program coded in an imperative style, without using any subroutines at all, would have the lowest count. However, the binary size of such a program might be larger than the same program coded using subroutines (as in functional and procedural programming) and would reference more “non-local” physical instructions that may increase cache misses and increase instruction fetch overhead in modern processors.

Comparison of programming paradigms – Performance comparison

The paradigms that use subroutines extensively (including functional, procedural and object-oriented) and do not also use significant inlining (via compiler optimizations) will, consequently, use a greater percentage of total resources on the subroutine linkages themselves
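The linkage cost is easy to observe directly. The sketch below times the same arithmetic written inline versus behind a function call; it illustrates the overhead in CPython specifically, and the exact numbers will vary by machine and interpreter:

```python
# Rough illustration of subroutine-linkage overhead: identical arithmetic
# written inline vs. wrapped in a function call, timed with timeit.
import timeit

def add(a, b):
    return a + b

inline_t = timeit.timeit("x = 1 + 2", number=1_000_000)
call_t = timeit.timeit("x = add(1, 2)", globals=globals(), number=1_000_000)

# The call version is typically slower in CPython because of the
# call/return machinery -- the "linkage" cost the text refers to.
print(f"inline: {inline_t:.3f}s, function call: {call_t:.3f}s")
```

Compilers for languages like C largely remove this gap by inlining small functions, which is exactly the optimization the paragraph above mentions.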

Huawei – Recent performance

In April 2011, Huawei announced an earnings increase of 30% in 2010, driven by significant growth in overseas markets, with net profit rising to RMB23.76 billion (US$3.64 billion; £2.23 billion) from RMB18.27 billion in 2009

Huawei – Recent performance

Huawei’s revenues in 2010 accounted for 15.7% of the $78.56 billion global carrier-network-infrastructure market, putting the company second behind the 19.6% share of Telefon AB L.M. Ericsson, according to market-research firm Gartner.

Huawei – Recent performance

Huawei is targeting revenue of $150 million from its enterprise business solutions in India over the next 12 months. It denied using Chinese subsidies to gain global market share after recently being accused of unfair competition by US lawmakers and EU officials.

Artificial brain

Artificial brain (or artificial mind) is a term commonly used in the media to describe research that aims to develop software and hardware with cognitive abilities similar to those of the animal or human brain. Research investigating “artificial brains” plays three important roles in science: an ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience; a thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, in theory, to create a machine that has all the capabilities of a human being; and a serious long-term project to create machines with strong AI, capable of general intelligent action (Artificial General Intelligence), i.e. as intelligent as a human being.

An example of the first objective is the project reported by Aston University in Birmingham, England, where researchers are using biological cells to create “neurospheres” (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer’s disease, motor neurone disease and Parkinson’s disease.

The second objective is a reply to arguments such as John Searle’s Chinese room argument, Hubert Dreyfus’s critique of AI, and Roger Penrose’s argument in The Emperor’s New Mind. These critics argued that there are aspects of human consciousness or expertise that cannot be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper “Computing Machinery and Intelligence”.

The third objective is generally called artificial general intelligence by researchers, though Kurzweil prefers the more memorable term strong AI. In his book The Singularity Is Near he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on the grounds of computer power continuing its exponential growth trend) that this could be done by 2025. Henry Markram, director of the Blue Brain project (which is attempting brain emulation), made a similar claim, targeting 2020, at the Oxford TED conference in 2009.

Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non-linear phase coherence/decoherence principles; the analogy has been made to quantum processes through the core synaptic algorithm, which has strong similarities to the QM wave equation. EvBrain is a form of evolutionary software that can evolve “brainlike” neural networks, such as the network immediately behind the retina.

In November 2008, IBM received a $4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne, and is based on the premise that it is possible to artificially link the neurons “in the computer” by placing thirty million synapses in their proper three-dimensional position. In March 2008, the Blue Brain project was reported to be progressing faster than expected: “Consciousness is just a massive amount of information being exchanged by trillions of brain cells.” Some proponents of strong AI speculate that computers in connection with Blue Brain and Soul Catcher may exceed human intellectual capacity by around 2015, and that it is likely that we will be able to download the human brain at some time around 2050.

There are good reasons to believe that, regardless of implementation strategy, the predictions of realising artificial brains in the near future are optimistic. In particular, brains (including the human brain) and cognition are not currently well understood, and the scale of computation required is unknown. In addition, there seem to be power constraints: the brain consumes about 20 W of power, whereas supercomputers may use as much as 1 MW, roughly a factor of 100,000 more (note: the Landauer limit is 3.5×10^20 operations per second per watt at room temperature).

Some critics of brain simulation believe that it is simpler to create general intelligent action directly without imitating nature. Some commentators have used the analogy that early attempts to construct flying machines modeled them after birds, but that modern aircraft do not look like birds. A computational argument shows that, if we had a formal definition of general AI, the corresponding program could be found by enumerating all possible programs and then testing each of them to see whether it matches the definition; no appropriate definition currently exists.
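The processing-power estimates referred to above can be illustrated with a crude back-of-the-envelope calculation. The neuron and synapse counts below are rough textbook values, and the one-operation-per-synaptic-event assumption is deliberately simplistic:

```python
# Back-of-the-envelope brain-emulation estimate in the spirit of the
# Kurzweil / Sandberg-Bostrom figures mentioned above. All inputs are
# rough textbook values, not claims about any particular project.

NEURONS = 8.6e10            # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 1e4   # ~10,000 synapses per neuron
FIRING_HZ = 10              # rough average firing rate
OPS_PER_SYNAPSE_EVENT = 1   # one "operation" per synaptic event (very crude)

ops_per_second = NEURONS * SYNAPSES_PER_NEURON * FIRING_HZ * OPS_PER_SYNAPSE_EVENT
print(f"~{ops_per_second:.1e} synaptic operations per second")  # ~8.6e+15
```

Estimates of this kind span many orders of magnitude depending on the level of biological detail assumed, which is precisely why the scale of computation required remains unknown.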

Artificial brain – Approaches to brain simulation

Estimates of how much processing power is needed to emulate a human brain at various levels (from Ray Kurzweil, and from Anders Sandberg and Nick Bostrom), along with the fastest supercomputer from TOP500, mapped by year.

Artificial brain – Approaches to brain simulation

Although direct brain emulation using artificial neural networks on a high-performance computing engine is a common approach, there are other approaches. An alternative artificial brain implementation could be based on Holographic Neural Technology (HNeT) non-linear phase coherence/decoherence principles. The analogy has been made to quantum processes through the core synaptic algorithm, which has strong similarities to the QM wave equation.

Artificial brain – Approaches to brain simulation

EvBrain is a form of evolutionary software that can evolve “brainlike” neural networks, such as the network immediately behind the retina.

Artificial brain – Approaches to brain simulation

In November 2008, IBM received a $4.9 million grant from the Pentagon for research into creating intelligent computers. The Blue Brain project is being conducted with the assistance of IBM in Lausanne. The project is based on the premise that it is possible to artificially link the neurons “in the computer” by placing thirty million synapses in their proper three-dimensional position.

Artificial brain – Approaches to brain simulation


Visual Basic – Performance and other issues

Earlier versions of Visual Basic (prior to version 5) compiled the code to P-Code only. The P-Code is interpreted by the language runtime. The benefits of P-Code include portability and smaller binary file sizes, but it usually slows down the execution, since having a runtime adds an additional layer of interpretation. However, small amounts of code and algorithms can be constructed to run faster than compiled native code.

Visual Basic applications require the Microsoft Visual Basic runtime MSVBVMxx.DLL, where xx is the relevant version number, either 50 or 60. MSVBVM60.dll comes as standard with Windows in all editions after Windows 98, while MSVBVM50.dll comes with all editions after Windows 95. A Windows 95 machine, however, requires the installer to include whichever DLL the program needs.

Visual Basic 5 and 6 can compile code to either native or P-Code but in either case the runtime is still required for built in functions and forms management.

Criticisms levelled at Visual Basic editions prior to VB.NET include:

Versioning problems associated with various runtime DLLs, known as DLL hell

Poor support for object-oriented programming

Inability to create multi-threaded applications, without resorting to Windows API calls

Variant types carry a greater performance and storage overhead than the types of strongly typed programming languages

Biometrics – Performance

The following are used as performance metrics for biometric systems:

false acceptance rate or false match rate (FAR or FMR): the probability that the system incorrectly matches the input pattern to a non-matching template in the database. It measures the percentage of invalid inputs that are incorrectly accepted. On a similarity scale, if a person is in fact an impostor but their matching score is higher than the threshold, they are treated as genuine, which increases the FAR; performance therefore also depends on the choice of threshold value.

false rejection rate or false non-match rate (FRR or FNMR): the probability that the system fails to detect a match between the input pattern and a matching template in the database. It measures the percent of valid inputs which are incorrectly rejected.

This more linear graph illuminates the differences for higher performances (rarer errors).

equal error rate or crossover error rate (EER or CER): the rate at which both accept and reject errors are equal. The value of the EER can be easily obtained from the ROC curve. The EER is a quick way to compare the accuracy of devices with different ROC curves. In general, the device with the lowest EER is most accurate.

failure to enroll rate (FTE or FER): the rate at which attempts to create a template from an input are unsuccessful. This is most commonly caused by low-quality inputs.

failure to capture rate (FTC): Within automatic systems, the probability that the system fails to detect a biometric input when presented correctly.

template capacity: the maximum number of sets of data which can be stored in the system.
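These rates can be made concrete with a short sketch: given hypothetical genuine and impostor similarity scores (invented purely for illustration), sweeping the decision threshold trades FAR against FRR, and the crossover point approximates the EER.

```python
# Sketch: estimate FAR/FRR across thresholds and locate the equal error rate.
# The score lists are made up for illustration; higher score = better match.
genuine  = [0.91, 0.85, 0.78, 0.88, 0.95, 0.70, 0.82, 0.90]   # same-person comparisons
impostor = [0.30, 0.45, 0.12, 0.55, 0.25, 0.60, 0.40, 0.20]   # different-person comparisons

def far(threshold):
    """False acceptance rate: fraction of impostor scores at or above the threshold."""
    return sum(s >= threshold for s in impostor) / len(impostor)

def frr(threshold):
    """False rejection rate: fraction of genuine scores below the threshold."""
    return sum(s < threshold for s in genuine) / len(genuine)

# Scan thresholds; the EER is where FAR and FRR are (approximately) equal.
thresholds = [t / 100 for t in range(0, 101)]
eer_t = min(thresholds, key=lambda t: abs(far(t) - frr(t)))
print(f"threshold={eer_t:.2f}  FAR={far(eer_t):.3f}  FRR={frr(eer_t):.3f}")
```

Raising the threshold lowers FAR at the cost of FRR, which is why a single number such as the EER is convenient for comparing devices.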

Garbage collection (computer science) – Performance implications

Tracing garbage collectors require some implicit runtime overhead that may be beyond the control of the programmer, and can sometimes lead to performance problems. For example, commonly used stop-the-world garbage collectors, which pause program execution at arbitrary times, may make garbage collection inappropriate for some embedded systems, high-performance server software, and applications with real-time needs.

Manual heap allocation has costs of its own: each allocation requires a search for a best-fit or first-fit block of sufficient size.

Memory allocation in a garbage collected language may be implemented using heap allocation behind the scenes (rather than simply incrementing a pointer), so the performance advantages listed above don’t necessarily apply in this case
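The free-list search that manual heap allocation performs can be sketched with a toy first-fit allocator (purely illustrative; no real runtime uses this code, and real allocators also coalesce adjacent free blocks):

```python
# Toy first-fit allocator over a simulated heap, illustrating the
# "search for a block of sufficient size" that manual allocation performs.
class FirstFitHeap:
    def __init__(self, size):
        # Free list of (offset, length) blocks; starts as one big block.
        self.free = [(0, size)]

    def alloc(self, n):
        """Scan the free list for the first block of sufficient size."""
        for i, (off, length) in enumerate(self.free):
            if length >= n:
                if length > n:
                    # Split: hand out the front, keep the remainder free.
                    self.free[i] = (off + n, length - n)
                else:
                    del self.free[i]
                return off
        raise MemoryError("no block of sufficient size")

    def free_block(self, off, n):
        self.free.append((off, n))
        # Keep the free list ordered by address (a real allocator
        # would also coalesce neighbouring free blocks here).
        self.free.sort()

h = FirstFitHeap(1024)
a = h.alloc(100)    # offset 0
b = h.alloc(200)    # offset 100
h.free_block(a, 100)
c = h.alloc(50)     # first fit reuses the freed block at offset 0
```

By contrast, a bump-pointer allocator in a compacting collector turns the whole `alloc` loop into a single pointer increment, which is the performance advantage the text refers to.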

The overhead of write barriers is more likely to be noticeable in an imperative-style program which frequently writes pointers into existing data structures than in a functional-style program which constructs data only once and never changes them.

Generational collection techniques are used with both stop-the-world and incremental collectors to increase performance; the trade-off is that some garbage is not detected as such for longer than normal.

Dow Chemical Company – Performance plastics

Performance plastics make up 25% of Dow’s sales, with many products designed for the automotive and construction industries. The plastics include polyolefins such as polyethylene and polypropylene, as well as polystyrene used to produce Styrofoam insulating material. Dow manufactures epoxy resin intermediates including bisphenol A and epichlorohydrin. Saran resins and films are based on polyvinylidene chloride (PVDC)

Dow Chemical Company – Performance chemicals

The Performance Chemicals (17% of sales) segment produces chemicals and materials for water purification, pharmaceuticals, paper coatings, paints and advanced electronics

Halogen lamp – Effect of voltage on performance

Tungsten halogen lamps behave in a similar manner to other incandescent lamps when run on a different voltage

Halogen lamps are manufactured with enough halogen to match the rate of tungsten evaporation at their design voltage

Sport psychology – Preperformance routines

This includes pregame routines, warm up routines, and actions an athlete will regularly do, mentally and physically, before they execute the performance

Audi A4 – Performance

Petrol engines:

  Engine, transmission                 0–100 km/h (s)   Top speed                            CO2 (g/km)
  2.0 TFSI, 8-speed Multitronic CVT    8.2              236 km/h (147 mph)                   167 (Aus/NZ/ZA)
  3.2 FSI quattro, 6-speed Manual      6.0              250 km/h (155 mph) (elec. limited)   213
  3.2 FSI quattro, 6-speed Tiptronic   6.1              250 km/h (155 mph) (elec. limited)   215

Diesel engines (all common rail (CR) Turbocharged Direct Injection (TDI)):

  2.0 TDI quattro, 6-speed Manual      8.3              226 km/h (140 mph)                   149
  3.0 TDI quattro, 6-speed Tiptronic   6.3              250 km/h (155 mph) (elec. limited)   182

Krytron – Performance

This design, dating from the late 1940s, is still capable of pulse-power performance that even the most advanced semiconductors (even IGBTs) cannot easily match. Krytrons and sprytrons can switch high-current, high-voltage pulses with very fast switching times and a constant, low, low-jitter delay between application of the trigger pulse and switch-on.

A given krytron tube will give very consistent performance to identical trigger pulses (low jitter)

Switching performance is largely independent of the environment (temperature, acceleration, vibration, etc.). The formation of the keepalive glow discharge is however more sensitive, which necessitates the use of a radioactive source to aid its ignition.

Krytrons have a limited lifetime: depending on type, typically from tens of thousands to tens of millions of switching operations, and sometimes only a few hundred.

Hydrogen-filled thyratrons may be used as a replacement in some applications.

Scramjet – Vehicle performance

The performance of a launch system is complex and depends greatly on its weight. Normally craft are designed to maximise range, orbital radius or payload mass fraction for a given engine and fuel. This results in trade-offs between the efficiency of the engine (takeoff fuel weight) and the complexity of the engine (takeoff dry weight). In this accounting:

the empty mass fraction represents the weight of the superstructure, tankage and engine;

the fuel mass fraction represents the weight of fuel, oxidiser and any other materials consumed during the launch;

the initial mass ratio is the inverse of the payload mass fraction, and represents how much payload the vehicle can deliver to a destination.

A scramjet increases the mass of the engine over a rocket, and decreases the mass of the fuel.

Additionally, the drag of the new configuration must be considered. The drag of the total configuration can be considered as the sum of the vehicle drag and the engine installation drag. The installation drag traditionally results from the pylons and the coupled flow due to the engine jet, and is a function of the throttle setting. For an engine strongly integrated into the aerodynamic body, it may be more convenient to think of the installation drag as the difference in drag from a known base configuration.

The overall engine efficiency can be represented as a value between 0 and 1, defined in terms of the specific impulse of the engine, the acceleration due to gravity at ground level and the fuel heat of reaction.

Specific impulse is often used as the unit of efficiency for rockets, since for a rocket there is a direct relation between specific impulse, specific fuel consumption and exhaust velocity. This direct relation is not generally present for airbreathing engines, so specific impulse is less used in the literature; note that for an airbreathing engine, both of these quantities are functions of velocity.

The specific impulse of a rocket engine is independent of velocity, and common values are between 200 and 600 seconds (450 s for the Space Shuttle main engines). The specific impulse of a scramjet varies with velocity, reducing at higher speeds, starting at about 1200 s, although values in the literature vary.

For the simple case of a single-stage vehicle, the fuel mass fraction can be expressed directly; for level atmospheric flight from air launch (missile flight), the calculation in terms of the range takes the form of the Breguet range formula. This extremely simple formulation, used for the purposes of discussion, rests on simplifying assumptions, but its conclusions hold generally for all engines.
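For reference, the Breguet range formula in its standard textbook form for level cruise (this is not reproduced from the excerpt's lost equations; here $V$ is cruise speed, $I_{sp}$ the engine's specific impulse in seconds, $L/D$ the lift-to-drag ratio, and $m_i$, $m_f$ the initial and final masses):

```latex
R = V \, I_{sp} \, \frac{L}{D} \, \ln\frac{m_i}{m_f}
```

The logarithm term is fixed by the fuel mass fraction, so for a given mission the product of speed, specific impulse and aerodynamic efficiency determines the achievable range, which is why a scramjet's falling $I_{sp}$ at high speed matters so much.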

JPEG 2000 – Superior compression performance

At high bit rates, where artifacts become nearly imperceptible, JPEG 2000 has a small machine-measured fidelity advantage over JPEG. At lower bit rates (e.g., less than 0.25 bits/pixel for grayscale images), JPEG 2000 has a significant advantage over certain modes of JPEG: artifacts are less visible and there is almost no blocking. The compression gains over JPEG are attributed to the use of the DWT and a more sophisticated entropy-encoding scheme.

JPEG 2000 – Performance

Compared to the previous JPEG standard, JPEG 2000 delivers a typical compression gain in the range of 20%, depending on the image characteristics

Perl – Comparative performance

The Computer Language Benchmarks Game, a project hosted by Alioth, compares the performance of implementations of typical programming problems in several programming languages. The submitted Perl implementations typically perform toward the high end of the memory-usage spectrum and give varied speed results. Perl’s performance in the benchmarks game is typical for interpreted languages.

Large Perl programs start more slowly than similar programs in compiled languages because perl has to compile the source every time it runs

A number of tools have been introduced to improve this situation. The first such tool was Apache’s mod_perl, which sought to address one of the most-common reasons that small Perl programs were invoked rapidly: CGI Web development. ActivePerl, via Microsoft ISAPI, provides similar performance improvements.

Once Perl code is compiled, there is additional overhead during the execution phase that typically isn’t present for programs written in compiled languages such as C or C++. Examples of such overhead include bytecode interpretation, reference-counting memory management, and dynamic type-checking.

Laptop – Performance

The upper limits of performance of laptops remain much lower than the highest-end desktops (especially “workstation class” machines with two processor sockets), and “bleeding-edge” features usually appear first in desktops and only then, as the underlying technology matures, are adapted to laptops.

For Internet browsing and typical office applications, where the computer spends the majority of its time waiting for the next user input, even relatively low-end laptops (such as netbooks) can be fast enough for some users. As of mid-2010, at the lowest end, the cheapest netbooks (between US$200 and US$300) remain more expensive than the lowest-end desktop computers (around US$200), though only when the desktops are priced without a monitor. Once an inexpensive monitor is added, the prices are comparable.

Most higher-end laptops are sufficiently powerful for high-resolution movie playback, some 3D gaming and video editing and encoding

Some manufacturers work around this performance problem by using desktop CPUs for laptops.

Progress in artificial intelligence – Performance evaluation

The broad classes of outcome for an AI test are:

par-human: performs similarly to most humans

Peter Chen – Computer performance modeling

In his early career, he was active in R&D activities in computer system performance. He was the program chair of an ACM SIGMETRICS conference. He developed a computer performance model for a major computer vendor. His innovative research results were adopted in commercial computer performance tuning and capacity planning tools.

Shared leadership – Team effectiveness/performance

Similarly, other studies have explored the extent to which shared leadership can predict a team’s effectiveness or performance, and have found it to be a significant predictor, and often a better predictor than vertical leadership.

Thus, they theorized, having more leaders is not the only factor that matters to team performance; rather, leaders must recognize other leaders as such in order for them to contribute positively to team effectiveness.

Cordless telephone – Performance

Manufacturers usually advertise that higher frequency systems improve audio quality and range. Higher frequencies actually have worse propagation in the ideal case, as shown by the basic Friis transmission equation, and path loss tends to increase at higher frequencies as well. More important influences on quality and range are signal strength, antenna quality, the method of modulation used, and interference, which varies locally.
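The propagation claim can be illustrated with the free-space path-loss form of the Friis relation; a brief sketch using the bands discussed in this article, at the commonly advertised 30 m range:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB, from the Friis transmission equation:
    FSPL = 20*log10(4 * pi * d * f / c)."""
    c = 3.0e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

# Same 30 m range, different cordless-phone bands:
for f in (900e6, 2.4e9, 5.8e9):
    print(f"{f/1e9:.1f} GHz: {fspl_db(30, f):.1f} dB")
# Loss grows by 20*log10(f2/f1) dB between bands, so 5.8 GHz loses
# about 16 dB more than 900 MHz over the same distance.
```

This is the ideal-case effect the text refers to; in practice antenna quality, modulation and local interference dominate the difference between products.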

“Plain old telephone service” (POTS) landlines are designed to transfer audio with a quality that is just enough for the parties to understand each other

A noticeable amount of constant background noise (This is not interference from outside sources, but noise within the cordless telephone system.)

Frequency response not being the full frequency response available in a wired landline telephone

Most manufacturers claim a range of about 30 m (100 ft) for their 2.4 GHz and 5.8 GHz systems, but inexpensive models often fall short of this claim.

However, the higher frequency often brings advantages

The recently allocated 1.9 GHz band is reserved for use by phones that use the DECT standard, which should avoid interference issues that are increasingly being seen in the unlicensed 900 MHz, 2.4 GHz, and 5.8 GHz bands.

Many cordless phones in the early 21st century are digital. Digital technology has helped provide clear sound and limit eavesdropping. Many cordless phones have one main base station and can add up to three or four additional bases. This allows for multiple voice paths that allow three-way conference calls between the bases. This technology also allows multiple handsets to be used at the same time and up to two handsets can have an outside conversation.

Speech recognition – High-performance fighter aircraft

Substantial efforts have been devoted in the last decade to the test and evaluation of speech recognition in fighter aircraft

Working with Swedish pilots flying in the JAS-39 Gripen cockpit, Englund (2004) found recognition deteriorated with increasing G-loads

The Eurofighter Typhoon currently in service with the UK RAF employs a speaker-dependent system, i.e

Speaker independent systems are also being developed and are in testing for the F35 Lightning II (JSF) and the Alenia Aermacchi M-346 Master lead-in fighter trainer. These systems have produced word accuracies in excess of 98%.

Speech recognition – Performance

The performance of speech recognition systems is usually evaluated in terms of accuracy and speed. Accuracy is usually rated with word error rate (WER), whereas speed is measured with the real time factor. Other measures of accuracy include Single Word Error Rate (SWER) and Command Success Rate (CSR).
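Word error rate is conventionally computed from a word-level edit-distance alignment against a reference transcript; a minimal sketch (the example sentences are invented):

```python
def wer(reference, hypothesis):
    """Word error rate via word-level Levenshtein distance:
    (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit-distance table.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # one deletion in 6 words
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is one reason it is reported alongside speed and other accuracy measures.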

However, speech recognition (by a machine) is a very complex problem. Vocalizations vary in accent, pronunciation, articulation, roughness, nasality, pitch, volume, and speed. Speech is also distorted by background noise, echoes, and electrical characteristics. The accuracy of speech recognition varies with the following:

Vocabulary size and confusability

Isolated, discontinuous, or continuous speech

Task and language constraints

Robot Interaction Language (ROILA) is a constructed language created to address the problems associated with speech interaction using natural languages. ROILA is built around two important goals: first, it should be learnable by the human user; and second, it should be optimized for efficient recognition by a robot.

Kerosene lamp – Performance

Wick-type lamps have the lowest light output, and pressurized lamps have higher output; the range is from 20 to 100 lumens. A kerosene lamp producing 37 lumens for 4 hours per day will consume about 3 litres of kerosene per month.

XSLT – Performance

This gives substantial performance benefits in online publishing applications, where the same transformation is applied many times per second to different source documents

Early XSLT processors had very few optimizations

Channel (communications) – Channel performance measures

These are examples of commonly used channel capacity and performance measures:

Symbol rate in baud, pulses/s or symbols/s

Digital bandwidth bit/s measures: gross bit rate (signalling rate), net bit rate (information rate), channel capacity, and maximum throughput

Channel utilization

Signal-to-noise ratio measures: signal-to-interference ratio, Eb/No, carrier-to-interference ratio in decibel
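These measures are related: the Shannon–Hartley theorem gives the channel capacity from bandwidth and signal-to-noise ratio. A small sketch with arbitrary example numbers:

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_db):
    """Shannon-Hartley channel capacity: C = B * log2(1 + SNR),
    with SNR converted from decibels to a linear ratio."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Example: a 3.1 kHz voice channel at 30 dB SNR.
c = shannon_capacity_bps(3100, 30)
print(f"{c:.0f} bit/s")  # roughly 30.9 kbit/s
```

Capacity is an upper bound; the net bit rate and maximum throughput actually achieved sit below it and depend on the coding and modulation in use.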

Forward error correction – Concatenated FEC codes for improved performance

Classical (algebraic) block codes and convolutional codes are frequently combined in concatenated coding schemes in which a short constraint-length Viterbi-decoded convolutional code does most of the work and a block code (usually Reed-Solomon) with larger symbol size and block length “mops up” any errors made by the convolutional decoder

Concatenated codes have been standard practice in satellite and deep space communications since Voyager 2 first used the technique in its 1986 encounter with Uranus. The Galileo craft used iterative concatenated codes to compensate for the very high error rate conditions caused by having a failed antenna.
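The division of labour in a concatenated scheme can be sketched with deliberately simple stand-ins: a 3-repetition inner code playing the convolutional code's role, and an outer parity check standing in for Reed-Solomon. Neither is the real scheme; this only shows the layering, with the inner decoder correcting channel errors and the outer code checking what remains.

```python
# Toy concatenated code: outer parity bit wrapped in an inner 3-repetition code.
def inner_encode(bits):
    return [b for b in bits for _ in range(3)]        # repeat each bit 3x

def inner_decode(bits):
    # Majority vote per triple corrects any single flipped bit within it.
    return [1 if sum(bits[i:i+3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def outer_encode(bits):
    return bits + [sum(bits) % 2]                     # append even-parity bit

def outer_check(bits):
    data, parity = bits[:-1], bits[-1]
    return data, (sum(data) % 2) == parity            # detect residual errors

msg = [1, 0, 1, 1, 0, 0, 1, 0]
tx = inner_encode(outer_encode(msg))

tx[5] ^= 1                       # channel flips one transmitted bit
data, ok = outer_check(inner_decode(tx))
print(data == msg, ok)           # inner code corrected it; parity confirms
```

In the real systems described above the roles are the same but far more powerful: the Viterbi-decoded convolutional inner code corrects most channel errors, and the Reed-Solomon outer code mops up the bursts the inner decoder leaves behind.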

Performance-based advertising

Performance-based advertising is a form of advertising in which the purchaser pays only when there are measurable results. Performance-based advertising is becoming more common with the spread of electronic media, notably the Internet, where it is possible to measure user actions resulting from advertisement.

Performance-based advertising – Pricing models

There are four common pricing models used in the online performance advertising market.

CPM (Cost-per-Mille, or Cost-per-Thousand) Pricing Models charge advertisers for impressions, i.e. the number of times people view an advertisement. Display advertising is commonly sold on a CPM pricing model. The problem with CPM advertising is that advertisers are charged even if the target audience does not click on the advertisement.

CPC (Cost-per-Click) advertising overcomes this problem by charging advertisers only when the consumer clicks on the advertisement. However, due to increased competition, search keywords have become very expensive. A 2007 Doubleclick Performics Search trends Report shows that there were nearly six times as many keywords with a cost per click (CPC) of more than $1 in January 2007 than the prior year. The cost per keyword increased by 33% and the cost per click rose by as much as 55%.

In recent times, there has been a rapid increase in online lead generation – banner and direct response advertising that works off a CPL pricing model. In a Cost-per-Lead pricing model, advertisers pay only for qualified leads – irrespective of the clicks or impressions that went into generating the lead. CPL advertising is also commonly referred to as online lead generation.

Cost per Lead (CPL) pricing models are the most advertiser friendly. A recent IBM research study found that two-thirds of senior marketers expect 20 percent of ad revenue to move away from impression-based sales, in favor of action-based models within three years. CPL models allow advertisers to pay only for qualified leads as opposed to clicks or impressions and are at the pinnacle of the online advertising ROI hierarchy.

In CPA advertising, advertisers pay for a specific action such as a credit card transaction (also called CPO, Cost-per-Order).
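To make the four models concrete, the same hypothetical campaign can be priced under each of them (all rates and counts below are invented for illustration):

```python
# Cost of one hypothetical campaign under each pricing model.
impressions, clicks, leads, orders = 100_000, 1_500, 120, 30

cpm_cost = impressions / 1000 * 2.50   # $2.50 per thousand impressions
cpc_cost = clicks * 0.80               # $0.80 per click
cpl_cost = leads * 12.00               # $12 per qualified lead
cpa_cost = orders * 45.00              # $45 per completed sale

for name, cost in [("CPM", cpm_cost), ("CPC", cpc_cost),
                   ("CPL", cpl_cost), ("CPA", cpa_cost)]:
    print(f"{name}: ${cost:,.2f}")
```

The further down the funnel the billable event sits, the fewer events there are and the more each one costs; what changes between models is who bears the risk of the audience not acting.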

Advertisers need to be careful when choosing between CPL and CPA pricing models.

In CPL campaigns, advertisers pay for an interested lead – i.e. the contact information of a person interested in the advertiser’s product or service. CPL campaigns are suitable for brand marketers and direct response marketers looking to engage consumers at multiple touch-points – by building a newsletter list, community site, reward program or member acquisition program.

In CPA campaigns, the advertiser typically pays for a completed sale involving a credit-card transaction. CPA is all about "now": it focuses on driving consumers to buy at that exact moment. If a visitor to the website doesn't buy anything, there's no easy way to re-market to them.

1. CPL campaigns are advertiser-centric. The advertiser remains in control of their brand, selecting trusted and contextually relevant publishers to run their offers. On the other hand, CPA and affiliate marketing campaigns are publisher-centric. Advertisers cede control over where their brand will appear, as publishers browse offers and pick which to run on their websites. Advertisers generally do not know where their offer is running.

2. CPL campaigns are usually high-volume and light-weight. In CPL campaigns, consumers submit only basic contact information; the transaction can be as simple as an email address. CPA campaigns, on the other hand, are usually low-volume and complex. Typically, the consumer has to submit a credit-card number and other detailed information.

CPL advertising is more appropriate for advertisers looking to deploy acquisition campaigns by re-marketing to end consumers through e-newsletters, community sites, reward programs, loyalty programs and other engagement vehicles.

Performance-based advertising – Economic benefits

Many advertisers have limited budgets and may not understand the most effective method of advertising. With performance-based advertising plans, they avoid the risk of paying large amounts for advertisements that are ineffective. They pay only for results.

The advertising agency, distributor or publisher assumes the risk, and is therefore motivated to ensure that the advertisement is well-targeted, making best use of the available inventory of advertising space. Electronic media publishers may choose advertisements based on location, time of day, day of week, demographics and performance history, ensuring that they maximize revenue earned from each advertising slot.

The close attention to targeting is intended to minimize the number of irrelevant advertisements presented to consumers. They see advertisements for products and services that are likely to interest them. Although consumers often say that advertisements are irritating, in many situations they find advertisements useful if they are relevant.

Performance-based advertising – Metrics

Various types of measurable action may be used in charging for performance-based advertising:

Many Internet sites charge for advertising on a "CPM" (cost per thousand) or cost-per-impression basis: the advertiser pays for each thousand times the advertisement is displayed. Some would argue that this is not performance-based advertising, since there is no measurement of the user response.

Internet sites often also offer advertising on a “PPC” (pay per click) basis. Google’s AdWords product and equivalent products from Yahoo!, Microsoft and others support PPC advertising plans.

A small but growing number of sites are starting to offer plans on a “Pay per call” basis. The user can click a button to place a VoIP call, or to request a call from the advertiser. If the user requests a call, presumably they are highly likely to make a purchase.
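To compare these pricing models on a common footing, publishers often normalize each offer to its expected revenue per thousand impressions (eCPM). A minimal sketch in integer cents; the rates, click counts, and lead rates below are illustrative assumptions, not figures from the text:

```python
def ecpm_cents_from_cpm(cpm_cents):
    # CPM is already a price per thousand impressions.
    return cpm_cents

def ecpm_cents_from_cpc(cpc_cents, clicks_per_1000_impressions):
    # Expected revenue from 1,000 impressions under pay-per-click.
    return cpc_cents * clicks_per_1000_impressions

def ecpm_cents_from_cpl(cpl_cents, clicks_per_1000_impressions, leads_per_100_clicks):
    # A lead requires a click and then a completed lead form.
    return cpl_cents * clicks_per_1000_impressions * leads_per_100_clicks // 100

# Illustrative: $2.00 CPM; $0.50 CPC at 10 clicks per 1,000 impressions;
# $5.00 CPL at 10 clicks per 1,000 impressions and 20 leads per 100 clicks.
print(ecpm_cents_from_cpm(200))           # 200 cents -> $2.00 eCPM
print(ecpm_cents_from_cpc(50, 10))        # 500 cents -> $5.00 eCPM
print(ecpm_cents_from_cpl(500, 10, 20))   # 1000 cents -> $10.00 eCPM
```

Under these assumed numbers the CPL offer is worth the most per impression; with different click and lead rates the ranking can reverse, which is why publishers track both rates closely.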

Finally, there is considerable research into methods of linking the user’s actions to the eventual purchase: the ideal form of performance measurement.

Some Internet sites are markets, bringing together buyers and sellers. eBay is a prominent example of a market operating on an auction basis. Other market sites let the vendors set their price. In either model, the market mediates sales and takes a commission – a defined percentage of the sale value. The market is motivated to give a more prominent position to vendors who achieve high sales value. Markets may be seen as a form of performance-based advertising.

The use of mobile coupons also enables a whole new set of metrics for identifying campaign effects. Several providers of mobile coupon technology make it possible to deliver a unique coupon or barcode to each individual and, at the same time, identify the person downloading it. This makes it possible to follow individuals through the whole process, from download to when and where the coupon is redeemed.

Performance-based advertising – Media

Although the Internet introduced the concept of performance-based advertising, it is now spreading into other media.

The mobile telephone is increasingly used as a web browsing device, and can support both pay-per-click and pay-per-call plans.

Directory assistance providers are starting to introduce advertising, particularly with “Free DA” services such as the Jingle Networks 1-800-FREE-411, the AT&T 1-800-YELLOWPAGES and the Google 1-800-GOOG-411. The advertiser pays when a caller listens to their advertisement, the equivalent of Internet CPM advertising, when they ask for additional information, or when they place a call.

IPTV promises to eventually combine features of cable television and the Internet. Viewers may see advertisements in a sidebar that are relevant to the show they are watching. They may click on an advertisement to obtain more details, and this action can be measured and used to charge the advertiser.

It is even possible to directly measure the performance of print advertising. The publisher prints a special telephone number in the advertisement, used nowhere else. When a consumer places a call to that number, the call event is recorded and the call is routed to the regular number. The call could only have been generated because of the print advertisement.

Performance-based advertising – Pricing

A publisher may charge defined prices for performance-based advertising, so much per click or call, but it is common for prices to be set through some form of “bidding” or auction arrangement. The advertiser states how much they are willing to pay for a user action, and the publisher provides feedback on how much other advertisers have offered. The actual amount paid may be lower than the amount bid, for example 1 cent more than the next highest bidder.

A “bidding” plan does not guarantee that the highest bidder will always be presented in the most prominent advertising slot, or will gain the most user actions. The publisher will want to earn the maximum revenue from each advertising slot, and may decide (based on actual results) that a lower bidder is likely to bring more revenue than a higher bidder – they will pay less but be selected more often.
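The trade-off described here can be sketched as ranking ads by expected revenue per impression, i.e. bid times estimated click-through rate; the ads, bids, and rates below are hypothetical:

```python
# Rank ads by expected revenue per impression: bid * estimated CTR.
# An ad bidding less can still win the slot if users click it more often.
ads = [
    {"name": "A", "bid": 1.00, "est_ctr": 0.010},  # expected 0.010 per impression
    {"name": "B", "bid": 0.60, "est_ctr": 0.025},  # expected 0.015 per impression
]

def expected_revenue(ad):
    return ad["bid"] * ad["est_ctr"]

winner = max(ads, key=expected_revenue)
print(winner["name"])  # prints B: the lower bidder wins on expected revenue
```

This is the same logic major search engines apply when combining bids with quality or relevance scores: the slot goes to the highest expected revenue, not the highest raw bid.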

In a competitive market, with many advertisers and many publications, defined prices and bid-based prices are likely to converge on the generally accepted value of an advertising action. This presumably reflects the expected sale value and the profit that will result from the sale. An item like a hotel room or airplane seat that loses all value if not sold may be priced at a higher ratio of sale value than an item like a bag of sand or box of nails that will retain its value over time.

A number of companies provide products or services to help optimize the bidding process, including deciding which keywords the advertiser should bid on and which sites will give best performance.

Performance-based advertising – Issues

There is the potential for fraud in performance-based advertising.

The publication may report inflated performance results, although a reputable publication would be unlikely to take the risk of being exposed by an audit.

A competitor may arrange for automatically generated clicks on an advertisement.

Since the user’s actions are being measured, there are serious concerns of loss of privacy.

Dellarocas (2010) discusses a number of ways in which performance-based advertising mechanisms can be enhanced to restore efficient pricing.

mdadm – Increasing RAID ReSync Performance

In order to increase the resync speed, we can use a bitmap, which mdadm will use to mark which areas may be out-of-sync. Add the bitmap with the grow option like below:
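The command itself did not survive in this copy. A sketch of the usual form, assuming the array is /dev/md2 (the device named later in this section):

```shell
# Add an internal write-intent bitmap to the array.
# Adjust /dev/md2 to match your array device.
mdadm --grow /dev/md2 --bitmap=internal
```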

Note: mdadm v2.6.9 (10 March 2009) on CentOS 5.5 requires this to be run on a stable, "clean" array. If the array is rebuilding, the following error will be displayed:

md: couldn’t update array info. -16

Then verify that the bitmap was added to the md2 device.
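The verification command is likewise missing from this copy; either of the following standard checks would show the bitmap, assuming the array is /dev/md2:

```shell
# The bitmap should appear in the array detail output and in /proc/mdstat.
mdadm --detail /dev/md2 | grep -i bitmap
cat /proc/mdstat
```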

You can also raise the kernel's rebuild speed limits by editing /proc/sys/dev/raid/speed_limit_min and /proc/sys/dev/raid/speed_limit_max.

These settings can also be changed with the sysctl utility.
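A sketch of the sysctl equivalent; the limit values below are illustrative, not recommendations (units are KiB/s per device):

```shell
# Raise the minimum and maximum rebuild rates (KiB/s per device).
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000
```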

Symantec – Application Performance Management business

On January 17, 2008, Symantec announced that it was spinning off its Application Performance Management (APM) business and the i3 product line to Vector Capital. Precise Software Solutions took over development, product management, marketing, and sales for the APM business, launching as an independent company on September 17, 2008.

Information retrieval – Performance and correctness measures

Many different measures for evaluating the performance of information retrieval systems have been proposed. The measures require a collection of documents and a query. All common measures described here assume a ground-truth notion of relevancy: every document is known to be either relevant or non-relevant to a particular query. In practice, queries may be ill-posed and there may be different shades of relevancy.

NXP Semiconductors – Focus on high-performance mixed signal and standard products

Current president and CEO Rick Clemmer took over from Frans van Houten on January 1, 2009. Clemmer has emphasized the importance of “high performance mixed signal” products as a key focus area for NXP. As of 2011, “standard products” including components such as small signal, power and integrated discretes accounted for 30 percent of NXP’s business.

On July 26, 2010, NXP announced that it had acquired Jennic based in Sheffield, UK, which now operates as part of its Smart Home and Energy product line, offering wireless connectivity solutions based on ZigBee and JenNet-IP.

On August 6, 2010, NXP announced its IPO on NASDAQ, offering 34,000,000 shares priced at $14 each.

In December 2010, NXP announced that it would sell its Sound Solutions business to Knowles Electronics, part of Dover Corporation, for $855 million in cash. The acquisition was completed as of July 5, 2011.

In April 2012, NXP announced its intent to acquire electronic design consultancy Catena to work on automotive applications, to capitalize on growing demand for engine emissions reduction and car-to-infrastructure, car-to-car, and car-to-driver communication.

In July 2012, NXP sold its high-speed data converter assets to Integrated Device Technology.

In 2012, revenue for NXP’s Identification business unit was $986 million, up 41% from 2011, in part due to growing sales of NFC chips and secure elements.

On January 4, 2013, NXP and Cisco announced their investment in Cohda Wireless, an Australian company focused on car-to-car and car-to-infrastructure communications.

In January 2013, NXP announced 700-900 redundancies worldwide in an effort to cut costs related to “support services”.

In May 2013, NXP announced that it had acquired Code Red Technologies, a provider of embedded software development tools such as the LPCXpresso IDE and Red Suite.

Nested RAID levels – Performance (speed)

According to manufacturer specifications and official independent benchmarks, in most cases RAID 10 provides better throughput and latency than all other RAID levels except RAID 0 (which wins in throughput).

It is the preferable RAID level for I/O-intensive applications such as database, email, and web servers, as well as for any other use requiring high disk performance.

Discrete event simulation – Lab test performance improvement ideas

Many systems-improvement ideas are built on sound principles and proven methodologies (Lean, Six Sigma, TQM, etc.), yet fail to improve the overall system. A simulation model allows the user to understand and test a performance-improvement idea in the context of the overall system.

Marketing operations – Marketing Performance Measurement

Marketing Performance Measurement should be a logical extension of the Planning and Budgeting exercise that happens before each fiscal year. The goals that are set should be measurable and personal. Every person in the Marketing organization should know what they have to do to help the function, and the company, achieve its goals. Some companies use Management By Objectives (MBOs) to incent employees to meet goals. Other companies simply use the Human Resources Performance Management process.

Quarterly Operations Reviews represent another good way to monitor Marketing’s progress towards its annual goals. At a Quarterly Operations Review, a CMO typically has direct reports present on achievements relative to the goals that were set. This is a good opportunity to update goals based on information gained during the quarter that has just ended. It is also a good way for Marketing leaders to stay abreast of their peers’ efforts to increase collaboration and eliminate redundant efforts.

Performance management

Performance management (PM) includes activities which ensure that goals are consistently being met in an effective and efficient manner. Performance management can focus on the performance of an organization, a department, an employee, or even the processes to build a product or service, as well as many other areas.

PM is also known as a process by which organizations align their resources, systems and employees to strategic objectives and priorities.

Performance management, as referenced on this page, is a broad term coined by Dr. Aubrey Daniels in the late 1970s to describe a technology (i.e., science embedded in application methods) for managing both behavior and results, the two critical elements of what is known as performance.

Performance management – Application

Armstrong and Baron (1998) defined it as a “strategic and integrated approach to increase the effectiveness of companies by improving the performance of the people who work in them and by developing the capabilities of teams and individual contributors.”

It may be possible to get all employees to reconcile personal goals with organizational goals and increase productivity and profitability of an organization using this process. It can be applied by organizations or a single department or section inside an organization, as well as an individual person. The performance process is appropriately named the self-propelled performance process (SPPP).

First, a commitment analysis must be done where a job mission statement is drawn up for each job. The job mission statement is a job definition in terms of purpose, customers, product and scope. The aim with this analysis is to determine the continuous key objectives and performance standards for each job position.

Following the commitment analysis is the work analysis of a particular job in terms of the reporting structure and job description. If a job description is not available, then a systems analysis can be done to draw up a job description. The aim with this analysis is to determine the continuous critical objectives and performance standards for each job.

Performance management – Benefits

Managing employee or system performance and aligning their objectives facilitates the effective delivery of strategic and operational goals. There is a clear and immediate correlation between using performance management programs or software and improved business and organizational results.

For employee performance management, using integrated software rather than a spreadsheet-based recording system may deliver a significant return on investment through a range of direct and indirect sales benefits, operational efficiency benefits, and the unlocking of latent potential in every employee's work day (i.e., the time they spend not actually doing their job). Benefits may include:

Reduced costs across the organization

Decreased time to implement strategic or operational changes, because changes are communicated through a new set of goals

Incentive plans optimized for specific goals and over-achievement, not just business as usual

Improved employee engagement, because everyone understands how they directly contribute to the organization's high-level goals

Higher confidence in the bonus payment process

Professional development programs better aligned with business-level goals

Easier auditing and compliance with legislative requirements

Simplified communication of strategic goals and scenario planning

Well-documented and clearly communicated process documentation

Performance management – Organizational Development

In organizational development (OD), performance can be thought of as Actual Results vs Desired Results. Any discrepancy, where Actual is less than Desired, could constitute the performance improvement zone. Performance management and improvement can be thought of as a cycle:

Performance coaching where a manager intervenes to give feedback and adjust performance

Performance appraisal where individual performance is formally documented and feedback delivered

A performance problem is any gap between Desired Results and Actual Results. Performance improvement is any effort targeted at closing the gap between Actual Results and Desired Results.

Other organizational development definitions are slightly different. The U.S. Office of Personnel Management (OPM) indicates that Performance Management consists of a system or process whereby:

Performance is rated or measured and the ratings summarized

Performance management – Implementation

Erica Olsen notes that "Many businesses, even those with well-made plans, fail to implement their strategy. Their problem lies in ineffectively managing their employees once their plan is in place. Sure, they've conducted surveys, collected data, gone on management retreats to decide on their organization's direction – even purchased expensive software to manage their process – but somewhere their plan fails."

Performance management – Long-cycle Performance Management

Long-cycle Performance Management is usually done annually, semi-annually, or quarterly. From an implementation standpoint, this is the area that has traditionally received the most attention, for historical reasons: most performance management techniques and styles predate the use of computers.

Performance management – Short-cycle Performance Management

Short-cycle Performance Management (which overlaps with the principles of Agile Software Development) is usually done on a weekly, biweekly, or monthly basis. From an implementation standpoint, this sort of management is industry-specific.

Performance management – Micro Performance Management

Micro Performance Management is generally done on a minute-by-minute, hourly, or daily basis.

Performance management – Further reading

Business Intelligence and Performance Management: Theory, Systems, and Industrial Applications, P. Rausch, A. Sheta, A. Ayesh (Eds.), Springer Verlag U.K., 2013, ISBN 978-1-4471-4865-4.

Performance Management: Changing Behavior That Drives Organizational Effectiveness, 4th ed., Aubrey C. Daniels. Performance Management Publications, 2006. ISBN 0-937100-08-0

Performance Management: Integrating Strategy Execution, Methodologies, Risk, and Analytics, Gary Cokins. John Wiley & Sons, 2009. ISBN 978-0-470-44998-1

Journal of Organizational Behavior Management, Routledge/Taylor & Francis Group. Published quarterly.

Handbook of Organizational Performance, Thomas C. Mawhinney, William K. Redmon & Carl Merle Johnson. Routledge, 2001.

Bringing Out the Best in People, Aubrey C. Daniels. McGraw-Hill, 2nd ed., 1999. ISBN 978-0071351454

Improving Performance: How to Manage the White Space in the Organization Chart, Geary A. Rummler & Alan P. Brache. Jossey-Bass, 2nd ed., 1995.

Human Competence: Engineering Worthy Performance, Thomas F. Gilbert. Pfeiffer, 1996.

The Values-Based Safety Process: Improving Your Safety Culture with Behavior-Based Safety, Terry E. McSween. John Wiley & Sons, 1995.

Performance-based Instruction: Linking Training to Business Results, Dale Brethower & Karolyn Smalley. Pfeiffer, 1998.

Handbook of Applied Behavior Analysis, John Austin & James E. Carr. Context Press, 2000.

Mergers and acquisitions – Improving financial performance

The dominant rationale used to explain M&A activity is that acquiring firms seek improved financial performance. The following motives are considered to improve financial performance:

Economy of scale: This refers to the fact that the combined company can often reduce its fixed costs by removing duplicate departments or operations, lowering the costs of the company relative to the same revenue stream, thus increasing profit margins.

Economy of scope: This refers to the efficiencies primarily associated with demand-side changes, such as increasing or decreasing the scope of marketing and distribution, of different types of products.

Increased revenue or market share: This assumes that the buyer will be absorbing a major competitor and thus increase its market power (by capturing increased market share) to set prices.

Cross-selling: For example, a bank buying a stock broker could then sell its banking products to the stock broker’s customers, while the broker can sign up the bank’s customers for brokerage accounts. Or, a manufacturer can acquire and sell complementary products.

Synergy: For example, managerial economies such as the increased opportunity of managerial specialization. Another example is purchasing economies due to increased order size and associated bulk-buying discounts.

Taxation: A profitable company can buy a loss maker to use the target’s loss as their advantage by reducing their tax liability. In the United States and many other countries, rules are in place to limit the ability of profitable companies to “shop” for loss making companies, limiting the tax motive of an acquiring company.

Geographical or other diversification: This is designed to smooth the earnings results of a company, which over the long term smooths the stock price, giving conservative investors more confidence in investing in the company. However, this does not always deliver value to shareholders.

Resource transfer: resources are unevenly distributed across firms (Barney, 1991) and the interaction of target and acquiring firm resources can create value through either overcoming information asymmetry or by combining scarce resources.

Vertical integration: Vertical integration occurs when an upstream and a downstream firm merge (or one acquires the other).

Hiring: Some companies use acquisitions as an alternative to the normal hiring process. This is especially common when the target is a small private company or is in the startup phase. In this case, the acquiring company simply hires ("acqui-hires") the staff of the target private company, thereby acquiring its talent (if that is its main asset and appeal). The target private company simply dissolves, and few legal issues are involved.

Absorption of similar businesses under single management: For example, when two mutual funds under the same management, the United Money Market Fund and the United Growth and Income Fund, held similar portfolios, management absorbed the United Money Market Fund into the United Growth and Income Fund.

Software bug – Performance bugs

Excessively high computational complexity of an algorithm.
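As an illustration of this class of bug, the two hypothetical functions below compute the same result, but the first performs O(n) membership tests against a list (O(n²) overall) while the second uses a set for constant-time lookups:

```python
def unique_slow(items):
    # O(n^2): each `in` test scans the list built so far.
    seen = []
    for x in items:
        if x not in seen:
            seen.append(x)
    return seen

def unique_fast(items):
    # O(n): set membership tests take (amortized) constant time.
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(unique_fast([3, 1, 3, 2, 1]))  # [3, 1, 2]
```

Both functions are correct, which is exactly what makes performance bugs easy to miss: the slow version only reveals itself once the input grows large.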

Burn down chart – Measuring performance

Actual work line is above the ideal work line: there is more work left than originally predicted, and the project is behind schedule.

Actual work line is below the ideal work line: there is less work left than originally predicted, and the project is ahead of schedule.

The above table is only one way of interpreting the shape of the burn down chart. There are others.
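The comparison above can be sketched in code, assuming the ideal work line burns down linearly from the initial total to zero (the sprint numbers are illustrative):

```python
def ideal_remaining(total_work, day, total_days):
    # Ideal line: linear burn from total_work at day 0 to zero at total_days.
    return total_work * (1 - day / total_days)

def status(actual_remaining, total_work, day, total_days):
    ideal = ideal_remaining(total_work, day, total_days)
    if actual_remaining > ideal:
        return "behind schedule"
    if actual_remaining < ideal:
        return "ahead of schedule"
    return "on schedule"

# Day 5 of a 10-day sprint that started with 100 units of work:
# the ideal line says 50 units should remain; 60 actually remain.
print(status(60, 100, 5, 10))  # behind schedule
```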

Earned value management – Simple implementations (emphasizing only technical performance)

The first step is to define the work.

The second step is to assign a value, called planned value (PV), to each activity.

The third step is to define "earning rules" for each activity.

In fact, waiting to update EV only once per month (simply because that is when cost data are available) only detracts from a primary benefit of using EVM, which is to create a technical performance scoreboard for the project team.

If these three home construction projects were measured with the same PV valuations, the relative schedule performance of the projects can be easily compared.

Earned value management – Intermediate implementations (integrating technical and schedule performance)

A second layer of EVM skill can be very helpful in managing the schedule performance of these "intermediate" projects.

However, EVM schedule performance, as illustrated in Figure 2, provides an additional indicator, one that can be communicated in a single chart.

Although such intermediate implementations do not require units of currency (e.g., dollars), it is common practice to use budgeted dollars as the scale for PV and EV. It is also common practice to track labor hours in parallel with currency. The following EVM formulas are for schedule management, and do not require accumulation of actual cost (AC). This is important because it is common in small and intermediate size projects for true costs to be unknown or unavailable.
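The standard schedule formulas in question are SV = EV − PV and SPI = EV / PV; as the text notes, neither requires actual cost. A sketch with illustrative numbers:

```python
def schedule_variance(ev, pv):
    # SV = EV - PV; positive means ahead of schedule.
    return ev - pv

def schedule_performance_index(ev, pv):
    # SPI = EV / PV; greater than 1 means ahead of schedule.
    return ev / pv

# Illustrative: 45 units of value earned against 50 planned to date.
print(schedule_variance(45, 50))           # -5 (behind schedule)
print(schedule_performance_index(45, 50))  # 0.9
```

Note that at project completion EV equals total PV by definition, which is why SV converges to 0 regardless of how late the project finishes; this is the known limitation that earned schedule addresses.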

SV greater than 0 is good (ahead of schedule). The SV will be 0 at project completion because then all of the planned values will have been earned.

However, schedule variance (SV) as measured by EVM is only indicative. To know whether a project is really behind or ahead of schedule (for on-time completion), the project manager has to perform critical path analysis based on the precedence and inter-dependencies of the project activities.

SPI greater than 1 is good (ahead of schedule).

See also earned schedule for a description of known limitations in SV and SPI formulas and an emerging practice for correcting these limitations.

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

To measure cost performance, planned value (or BCWS, Budgeted Cost of Work Scheduled) and earned value (or BCWP, Budgeted Cost of Work Performed) must be in units of currency (the same units in which actual costs are measured). In large implementations, the planned value curve is commonly called a Performance Measurement Baseline (PMB) and may be arranged in control accounts, summary-level planning packages, planning packages and work packages.

In the United States, the primary standard for full-featured EVM systems is the ANSI/EIA-748A standard, published in May 1998 and reaffirmed in August 2002. The standard defines 32 criteria for full-featured EVM system compliance. As of the year 2007, a draft of ANSI/EIA-748B, a revision to the original is available from ANSI. Other countries have established similar standards.

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

In addition to using BCWS and BCWP, prior to 1998 implementations often use the term Actual Cost of Work Performed (ACWP) instead of AC. Additional acronyms and formulas include:

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

Budget at completion (BAC): The total planned value (PV or BCWS) at the end of the project. If a project has a Management Reserve (MR), it is typically not included in the BAC, and respectively, in the Performance Measurement Baseline.

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

CV greater than 0 is good (under budget).

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

CPI greater than 1 is good (under budget):

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

Having a CPI that is very high (in some cases, very high is only 1.2) may mean that the plan was too conservative, and thus a very high number may in fact not be good, as the CPI is being measured against a poor baseline. Management or the customer may be upset with the planners as an overly conservative baseline ties up available funds for other purposes, and the baseline is also used for manpower planning.

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

The TCPI provides a projection of the anticipated performance required to achieve either the BAC or the EAC. TCPI indicates the future required cost efficiency needed to achieve a target BAC (Budget At Complete) or EAC (Estimate At Complete). Any significant difference between CPI, the cost performance to date, and the TCPI, the cost performance needed to meet the BAC or the EAC, should be accounted for by management in their forecast of the final cost.

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

For the TCPI based on BAC (describing the performance required to meet the original BAC budgeted total):

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

or for the TCPI based on EAC (describing the performance required to meet a new, revised budget total EAC):

Earned value management – Advanced implementations (integrating cost, schedule and technical performance)

The IEAC is a metric that projects total cost from the cost performance achieved to date. It can be compared with the EAC, which is the manager's projection.
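The TCPI and IEAC can be sketched as follows. The TCPI is the standard "work remaining over funds remaining" ratio against either target, and IEAC = BAC / CPI is one simple, common form of the independent estimate; the figures below are invented:

```python
def tcpi(bac, ev, ac, target):
    """To-complete performance index: budgeted work remaining divided by
    funds remaining against the chosen target (pass BAC or EAC as target)."""
    return (bac - ev) / (target - ac)

def ieac(bac, cpi):
    """Independent EAC (one simple form): projected total cost if the
    current cost efficiency (CPI) continues to the end."""
    return bac / cpi

# Illustrative project: BAC = 1000, EV = 400, AC = 500, so CPI = 0.8
print(tcpi(1000, 400, 500, target=1000))  # 1.2 efficiency needed to finish on BAC
print(ieac(1000, 400 / 500))              # 1250.0 projected total cost
```

The gap between the CPI to date (0.8) and the TCPI (1.2) in this example is exactly the kind of discrepancy the text says management should account for in its final-cost forecast.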

Earned value management – Schedule Performance

The use of SPI in EVM is rather limited for forecasting schedule performance problems, because it depends on the completion of earned value on the Critical Time Path (CTP).

Because Agile EVM is used in a complex environment, any earned value is more likely to be on the CTP. The latest estimate for the number of fixed time intervals can be calculated in Agile EVM as:

Initial Duration in number of fixed time intervals / SPI; or

Latest Estimate in total number of Story Points / Velocity.
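The two Agile EVM estimates above translate directly into code. A brief sketch; the sprint counts, story points and velocity are invented values:

```python
def intervals_latest_estimate(initial_intervals, spi):
    """Latest duration estimate in fixed time intervals (e.g. sprints):
    initial duration divided by the schedule performance index."""
    return initial_intervals / spi

def intervals_from_velocity(total_story_points, velocity):
    """Latest duration estimate as total story points over velocity
    (story points completed per interval)."""
    return total_story_points / velocity

print(intervals_latest_estimate(10, 0.8))   # 12.5 sprints at the current pace
print(intervals_from_velocity(300, 25))     # 12.0 sprints
```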

Performance engineering

Performance engineering, within systems engineering, encompasses the set of roles, skills, activities, practices, tools, and deliverables applied at every phase of the systems development life cycle to ensure that a solution will be designed, implemented, and operationally supported to meet the non-functional performance requirements defined for it.

As such, the term is typically used to describe the processes, people and technologies required to effectively test non-functional requirements, ensure adherence to service levels and optimize application performance prior to deployment.

The term encompasses more than just the software and supporting infrastructure, and as such it is preferable from a macro view. Adherence to the non-functional requirements is also validated post-deployment by monitoring the production systems. This is part of IT service management (see also ITIL).

Performance engineering has become a separate discipline at a number of large corporations, with tasking separate from but parallel to systems engineering. It is pervasive, involving people from multiple organizational units, but predominantly within the information technology organization.

Performance engineering – Performance engineering objectives

Increase business revenue by ensuring the system can process transactions within the requisite timeframe

Eliminate system failure requiring scrapping and writing off the system development effort due to performance objective failure

Eliminate late system deployment due to performance issues

Eliminate avoidable system rework due to performance issues

Avoid additional and unnecessary hardware acquisition costs

Reduce increased software maintenance costs due to performance problems in production

Reduce additional operational overhead for handling system issues due to performance problems

Performance engineering – Performance engineering approach

Because this discipline is applied within multiple methodologies, the following activities will occur within differently specified phases. However, if the phases of the rational unified process (RUP) are used as a framework, then the activities will occur as follows:

Performance engineering – Inception

During this first conceptual phase of a program or project, critical business processes are identified. Typically they are classified as critical based upon revenue value, cost savings, or other assigned business value. This classification is done by the business unit, not the IT organization.

High level risks that may impact system performance are identified and described at this time. An example might be known performance risks for a particular vendor system.

Finally, performance activities, roles, and deliverables are identified for the Elaboration phase. Activities and resource loading are incorporated into the Elaboration phase project plans.

Performance engineering – Elaboration

During this defining phase, the critical business processes are decomposed to critical use cases. Such use cases will be decomposed further, as needed, to single page (screen) transitions. These are the use cases that will be subjected to script driven performance testing.

The type of requirements that relate to performance engineering are the non-functional requirements (NFRs). While a functional requirement states what business operations are to be performed, a performance-related non-functional requirement states how fast that business operation performs under defined circumstances.

The concept of “defined circumstances” is vital. For example:

Invalid – the system should respond to user input within 10 seconds.

Valid – for use case ABC the system will respond to a valid user entry within 5 seconds for a median load of 250 active users and 2000 logged-in users 95% of the time; or within 10 seconds for a peak load of 500 active users and 4000 logged-in users 90% of the time.

Testers may build a reliable performance test for the second example, but not for the invalid example.
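One reason the second example is testable is that it can be encoded as a mechanical pass/fail check. A minimal sketch of such a check; the function name and the sample response times are invented, and only the 5-second/95% median-load clause of the example NFR is modeled:

```python
def meets_nfr(samples_s, limit_s, required_fraction):
    """True if at least the required fraction of response-time samples
    (in seconds) complete within the limit."""
    within = sum(1 for s in samples_s if s <= limit_s)
    return within / len(samples_s) >= required_fraction

# Hypothetical measured response times at median load, in seconds
median_load_samples = [3.1, 4.8, 2.2, 4.9, 6.0, 3.7, 4.1, 2.9, 4.4, 3.3]

# The NFR clause: 95% of responses within 5 seconds at median load
print(meets_nfr(median_load_samples, limit_s=5.0, required_fraction=0.95))
# -> False: only 9 of 10 samples (90%) fall within the 5 s limit
```

The invalid requirement ("within 10 seconds") cannot be encoded this way because it specifies neither the load conditions nor the fraction of requests that must comply.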

Each critical use case must have an associated NFR. If, for a given use case, no existing NFR is applicable, a new NFR specific to that use case must be created.

Non-functional requirements are not limited to use cases.

The system volumetrics documented in the NFR documentation will be used as inputs for both load testing and stress testing of the system during the performance test. Computer scientists have used a variety of approaches, e.g., queueing theory, to develop performance evaluation models.

At this point it is suggested that performance modeling be performed, using the use case information as input. This may be done in a performance lab, using prototypes and mockups of the “to be” system; with a vendor-provided modeling tool; or even with a spreadsheet workbook, where each use case is modeled in a single sheet and a summary sheet provides high level information for all of the use cases.

It is recommended that Unified Modeling Language sequence diagrams be generated at the physical tier level for each use case. The physical tiers are represented by the vertical object columns, and the message communication between the tiers by the horizontal arrows. Timing information should be associated with each horizontal arrow; this should correlate with the performance model.

Some performance engineering activities related to performance testing should be executed in this phase. They include validating a performance test strategy, developing a performance test plan, determining the sizing of test data sets, developing a performance test data plan, and identifying performance test scenarios.

For any system of significant impact, a monitoring plan and a monitoring design are developed in this phase. Performance engineering applies a subset of activities related to performance monitoring, both for the performance test environment and for the production environment.

The risk document generated in the previous phase is revisited here. A risk mitigation plan is determined for each identified performance risk, and time, cost, and responsibility are determined and documented.

Finally, performance activities, roles, and deliverables are identified for the Construction phase. Activities and resource loading are incorporated into the Construction phase project plans. These will be elaborated for each iteration.

Performance engineering – Construction

Early in this phase a number of performance tool related activities are required. These include:

Identify key development team members as subject matter experts for the selected tools

Specify a profiling tool for the development/component unit test environment

Specify an automated unit (component) performance test tool for the development/component unit test environment; this is used when no GUI yet exists to drive the components under development

Specify an automated tool for driving server-side units (components) in the development/component unit test environment

Specify an automated multi-user capable script-driven end-to-end tool for the development/component unit test environment; this is used to execute screen-driven use cases

Identify a database test data load tool for the development/component unit test environment; this is required to ensure that the database optimizer chooses correct execution paths and to enable reinitializing and reloading the database as needed

Presentations and training must be given to development team members on the selected tools

A member of the performance engineering practice and the development technical team leads should work together to identify performance-oriented best practices for the development team. Ideally the development organization should already have a body of best practices, but often these do not include or emphasize those best practices that impact system performance.

The concept of application instrumentation should be introduced here with the participation of the IT monitoring organization. Several vendor monitoring systems have performance capabilities; these normally operate at the operating system, network, and server levels, e.g. CPU utilization, memory utilization, disk I/O, and, for J2EE servers, JVM performance including garbage collection. But this type of monitoring does not permit the tracking of use case level performance.

Then, as the performance test team starts to gather data, they should commence tuning the environment more specifically for the system to be deployed. The data gathered, and the analyses, will be fed back to the group that does performance tuning. However, if for some reason (perhaps because proper performance engineering working practices were not applied) there are tests that cannot be tuned into compliance, then it will be necessary to return portions of the system to development for refactoring.

For example: suppose we can parallelize 70% of a module and run it on 4 CPUs instead of 1. If s is the fraction of a calculation that is sequential, and (1 − s) is the fraction that can be parallelized, then the maximum speedup that can be achieved by using P processors is given by Amdahl’s Law: speedup = 1 / (s + (1 − s) / P).

In this example we would get: 1/(.3+(1-.3)/4)=2.105. So for quadrupling the processing power we only doubled the performance (from 1 to 2.105). And we are now well on the way to diminishing returns. If we go on to double the computing power again from 4 to 8 processors we get 1/(.3+(1-.3)/8)=2.581. So now by doubling the processing power again we only got a performance improvement of about one fifth (from 2.105 to 2.581).
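The worked example above can be reproduced with a few lines of code, which also makes the diminishing returns easy to explore for other processor counts:

```python
def amdahl_speedup(sequential_fraction, processors):
    """Maximum speedup per Amdahl's Law: 1 / (s + (1 - s) / P),
    where s is the sequential fraction and P the processor count."""
    s = sequential_fraction
    return 1 / (s + (1 - s) / processors)

# The example from the text: 30% sequential, 70% parallelizable
print(round(amdahl_speedup(0.3, 4), 3))  # 2.105
print(round(amdahl_speedup(0.3, 8), 3))  # 2.581
```

Note that as P grows without bound the speedup approaches 1/s, here 1/0.3 ≈ 3.33, so no amount of extra hardware recovers more than that.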

Performance engineering – Transition

During this final phase the system is deployed to the production environment. A number of preparatory steps are required. These include:

Configuring the operating systems, network, servers (application, web, database, load balancer, etc.), and any message queueing software according to the base checklists and the optimizations identified in the performance test environment

Ensuring all performance monitoring software is deployed and configured

Running statistics on the database after the production data load is completed

Once the new system is deployed, ongoing operations pick up performance activities, including:

Validating that weekly and monthly performance reports indicate that critical use cases perform within the specified non-functional requirement criteria

Where use cases fall outside of NFR criteria, submitting defects

Identifying projected trends from monthly and quarterly reports and, on a quarterly basis, executing capacity planning management activities

Performance engineering – Service management

In the operational domain (post production deployment) performance engineering focuses primarily within three areas: service level management, capacity management, and problem management.

Performance engineering – Service level management

In the service level management area, performance engineering is concerned with service level agreements and the associated systems monitoring that serves to validate service level compliance, detect problems, and identify trends

Performance engineering – Capacity management

Capacity management is charged with ensuring that additional capacity (additional CPUs, more memory, new database indexing, et cetera) is added before the point at which performance trends would breach the specified range, so that the trend lines are reset and the system remains within the specified performance range.

Performance engineering – Problem management

Within the problem management domain, the performance engineering practices are focused on resolving the root cause of performance related problems. These typically involve system tuning, changing operating system or device parameters, or even refactoring the application software to resolve poor performance due to poor design or bad coding practices.

Performance engineering – Monitoring

To ensure that there is proper feedback validating that the system meets the NFR specified performance metrics, any major system needs a monitoring subsystem. The planning, design, installation, configuration, and control of the monitoring subsystem is specified by an appropriately defined Monitoring Process. The benefits are as follows:

It is possible to establish service level agreements at the use case level.

It is possible to turn on and turn off monitoring at periodic points or to support problem resolution.

It enables the generation of regular reports.

It enables the ability to track trends over time – such as the impact of increasing user loads and growing data sets on use case level performance.

The trend analysis component of this should not be underestimated. This functionality, properly implemented, will enable predicting when a given application, undergoing gradually increasing user loads and growing data sets, will exceed the specified non-functional performance requirements for a given use case. This permits proper management budgeting for, acquisition of, and deployment of the resources required to keep the system running within the parameters of the non-functional performance requirements.
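As a sketch of this kind of trend prediction, the snippet below fits a least-squares line to a series of weekly 95th-percentile response times and projects the week in which the trend would cross the NFR limit. The function name, the weekly figures and the 5-second limit are all invented for illustration:

```python
def weeks_until_breach(samples, limit):
    """Fit a least-squares line to (week_index, value) pairs and return the
    week index at which the fitted line reaches the limit, or None if the
    trend is flat or decreasing."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # no upward trend, no projected breach
    intercept = mean_y - slope * mean_x
    return (limit - intercept) / slope

# Hypothetical weekly p95 response times (seconds), gradually rising
p95_seconds = [3.0, 3.2, 3.3, 3.5, 3.6, 3.8]
print(weeks_until_breach(p95_seconds, limit=5.0))  # roughly week 13
```

A production implementation would of course use more data and a more robust model, but the principle of budgeting capacity ahead of the projected crossing point is the same.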

Performance engineering – Further reading

Practical Performance Analyst – Performance Engineering Community & Body Of Knowledge

A Performance Process Maturity Model

Exploring UML for Performance Engineering

Introduction to Modeling Based Performance Engineering

Performance and Scalability of Distributed Software Architectures

The Vicious Cycle of Computer Systems Performance and IT Operational Costs

Gathering Performance Requirements

Hyper-V – Degraded performance for Windows XP VMs

Windows XP frequently accesses the CPU’s APIC task-priority register (TPR) when the interrupt request level changes, causing performance degradation when it runs as a guest on Hyper-V. Microsoft has fixed this problem in Windows Server 2003 and later.

Intel added TPR virtualization (FlexPriority) to VT-x from Intel Core 2 stepping E onwards to alleviate this problem. AMD has a similar feature in AMD-V but uses a new register for the purpose, which means the guest has to use different instructions to access it. AMD provides a driver called “AMD-V Optimization Driver”, installed in the guest, to do this.

Performance metric

In project management, performance metrics are used to assess the health of the project and consist of measuring seven criteria: safety, time, cost, resources, scope, quality, and actions.

Developing performance metrics usually follows a process of:

Establishing critical processes/customer requirements

Identifying specific, quantifiable outputs of work

Establishing targets against which results can be scored

A criticism of performance metrics is that when the value of information is computed using mathematical methods, it shows that even performance metrics professionals choose measures that have little value. This is referred to as the “measurement inversion”. For example, metrics seem to emphasize what organizations find immediately measurable — even if those are low value — and tend to ignore high value measurements simply because they seem harder to measure (whether they are or not).

To correct for the measurement inversion, other methods, like applied information economics, introduce a “value of information analysis” step in the process so that metrics focus on high-value measures. Organizations where this has been applied find that they define completely different metrics than they otherwise would have and, often, fewer metrics.

There are a variety of ways in which organizations may react to results: they may trigger specific activity relating to performance (i.e., an improvement plan) or use the data merely for statistical information. Often closely tied in with outputs, performance metrics should usually encourage improvement, effectiveness and appropriate levels of control.

Performance metrics are often linked in with corporate strategy and are often derived in order to measure performance against a critical success factor.

VMware ESX – Performance limitations

In terms of performance, virtualization imposes a cost in the additional work the CPU has to perform to virtualize the underlying hardware. Instructions that perform this extra work, and other activities that require virtualization, tend to lie in Operating System calls. In an unmodified Operating System, OS calls introduce the greatest portion of virtualization “overhead”.

Paravirtualization or other virtualization techniques may help with these issues. VMware developed the Virtual Machine Interface for this purpose, and selected Operating Systems currently support this. A comparison between full virtualization and paravirtualization for the ESX Server shows that in some cases paravirtualization is much faster.

Server Message Block – WAN performance issues

Microsoft has explained that the performance issues arise primarily because SMB 1.0 is a block-level rather than a streaming protocol that was originally designed for small LANs: its block size is limited to 64K, SMB signing creates additional overhead, and the TCP window size is not optimized for WAN links.

Procurement – Procurement performance

The report includes the main procurement performance and operational benchmarks that procurement leaders use to gauge the success of their organizations

HP Application Lifecycle Management – HP Performance Center

HP Performance Center software is an enterprise-class performance testing platform and framework. The solution is used by IT departments to standardize, centralize and conduct performance testing. HP Performance Center finds software code flaws across the lifecycle of applications. Built on HP LoadRunner software, HP Performance Center supports developer testing and integrates with HP Application Lifecycle Management.

Business transaction management – Relationship to application performance management

BTM is sometimes categorized as a form of application performance management (APM) or monitoring.

History of Apple Inc. – Corporate performance

Under leadership of John Sculley, Apple issued its first corporate stock dividend on May 11, 1987. A month later on June 16, Apple stock split for the first time in a 2:1 split. Apple kept a quarterly dividend with about 0.3% yield until November 21, 1995. Between March 1988 and January 1989, Apple undertook five acquisitions, including software companies Network Innovations, Styleware, Nashoba Systems, and Coral Software, as well as satellite communications company Orion Network Systems.

Apple continued to sell both lines of its computers, the Apple II and the Macintosh. A few months after introducing the Mac, Apple released a compact version of the Apple II called the Apple IIc. And in 1986 Apple introduced the Apple IIgs, an Apple II positioned as something of a hybrid product with a mouse-driven, Mac-like operating environment. Even with the release of the first Macintosh, Apple II computers remained the main source of income for Apple for years.

Windows Live OneCare – Performance

Windows Live OneCare Performance Plus is the component that performs monthly PC tune-up related tasks, such as:

Disk cleanup and defragmentation.

A full virus scan using the anti-virus component in the suite.

User notification if files are in need of backing up.

Check for Windows updates by using the Microsoft Update service.

Norton 360 – Performance and protection capabilities

Many other reputable sources like Dennis Technology Labs confirm the performance and effectiveness of Norton 2011 and 2012 lines.

Firewall (construction) – Performance based design

Firewalls used in different applications may require different design and performance specifications.

Performance based design takes into account the potential conditions during a fire. Understanding thermal limitations of materials is essential to using the correct material for the application.

ZoneAlarm Z100G – Performance

Firewall Throughput – 70 Mbit/s

VPN Throughput – 5 Mbit/s (AES)

Concurrent Firewall Connections – 4,000

Real-time computing – Real-time and high-performance

The most important requirement of a real-time system is therefore predictability, not performance.

High-performance is indicative of the amount of processing that is performed in a given amount of time, while real-time is the ability to get done with the processing to yield a useful output in the available time.

Algorithmic efficiency – Benchmarking: measuring performance

Benchmarks can be used by customers when comparing various products from alternative suppliers to estimate which product will best suit their specific requirements in terms of functionality and performance

Some benchmarks provide opportunities for producing an analysis comparing the relative speed of various compiled and interpreted languages; for example, The Computer Language Benchmarks Game compares the performance of implementations of typical programming problems in several programming languages.

(Even creating “do it yourself” benchmarks, using a variety of user-specified criteria, to get at least some appreciation of the relative performance of different programming languages is quite simple, as Christopher W. Cowell-Shah’s “Nine Language Performance Round-up” demonstrates by example.)

Software performance testing

In software engineering, performance testing is in general testing performed to determine how a system performs in terms of responsiveness and stability under a particular workload. It can also serve to investigate, measure, validate or verify other quality attributes of the system, such as scalability, reliability and resource usage.

Performance testing is a subset of performance engineering, an emerging computer science practice which strives to build performance into the implementation, design and architecture of a system.

Software performance testing – Load testing

Load testing is the simplest form of performance testing.

Software performance testing – Stress testing

Stress testing is normally used to understand the upper limits of capacity within the system. This kind of test is done to determine the system’s robustness in terms of extreme load and helps application administrators to determine if the system will perform sufficiently if the current load goes well above the expected maximum.

Software performance testing – Soak testing

Also important, but often overlooked, is performance degradation.

Software performance testing – Spike testing

Spike testing is done by suddenly increasing the number of users, or the load they generate, by a very large amount and observing the behaviour of the system. The goal is to determine whether performance will suffer, the system will fail, or it will be able to handle dramatic changes in load.

Software performance testing – Configuration testing

Rather than testing for performance from the perspective of load, tests are created to determine the effects of configuration changes to the system’s components on the system’s performance and behaviour. A common example would be experimenting with different methods of load-balancing.

Software performance testing – Isolation testing

Isolation testing is not unique to performance testing, but involves repeating a test execution that resulted in a system problem. It is often used to isolate and confirm the fault domain.

Software performance testing – Setting performance goals

Performance testing can serve different purposes.

It can also measure which parts of the system or workload cause the system to perform badly.

Many performance tests are undertaken without due consideration to the setting of realistic performance goals. The first question from a business perspective should always be: “why are we performance testing?”. These considerations are part of the business case for the testing. Performance goals will differ depending on the system’s technology and purpose; however, they should always include some of the following:

Software performance testing – Server response time

This refers to the time taken for one system node to respond to the request of another. A simple example would be an HTTP ‘GET’ request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the system.

Software performance testing – Render response time

Render response time is difficult for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognizing a period of time where there is no activity ‘on the wire’. To measure render response time, it is generally necessary to include functional test scripts as part of the performance test scenario, a feature not offered by many load testing tools.

Software performance testing – Performance specifications

It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details.

Without such instrumentation, one might have to have someone watching Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).

Performance testing can be performed across the web, and even conducted from different parts of the country, since it is known that the response times of the internet itself vary regionally.

It is always helpful to have a statement of the likely peak numbers of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th-percentile response time, then an injector configuration could be used to test whether the proposed system meets that specification.
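A 95th-percentile goal can be checked directly against measured samples. The snippet below is a minimal sketch; the sample response times and the 200 ms goal are illustrative assumptions:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Illustrative measured response times (ms) and an assumed 200 ms goal.
response_times_ms = [120, 95, 180, 210, 130, 110, 160, 140, 300, 125,
                     135, 150, 115, 170, 145, 105, 190, 155, 165, 100]
goal_ms = 200
p95 = percentile(response_times_ms, 95)
print(f"95th percentile: {p95} ms -> {'PASS' if p95 <= goal_ms else 'FAIL'}")
```

A percentile goal is usually preferable to a mean, because a handful of very slow responses can hide behind an acceptable average.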

Software performance testing – Questions to ask

Performance specifications should ask the following questions, at a minimum:

In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?

For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?

What does the target system (hardware) look like (specify all server and network appliance configurations)?

What is the Application Workload Mix of each system component? (for example: 20% log-in, 40% search, 30% item select, 10% checkout).

What is the System Workload Mix? [Multiple workloads may be simulated in a single performance test] (for example: 30% Workload A, 20% Workload B, 50% Workload C).
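An injector typically realizes such a mix by weighted random selection of transactions or workloads. A minimal Python sketch using the illustrative percentages above (the transaction names are placeholders, not a real tool’s configuration):

```python
import random

# Illustrative application workload mix: transaction -> share in percent.
workload_mix = {"log-in": 20, "search": 40, "item select": 30, "checkout": 10}

random.seed(42)  # fixed seed so the example is reproducible
transactions = random.choices(
    population=list(workload_mix), weights=list(workload_mix.values()), k=10_000
)

# The observed shares should approximate the configured mix.
for name in workload_mix:
    share = transactions.count(name) / len(transactions) * 100
    print(f"{name:12s} {share:5.1f}%")
```

The same weighted-choice idea applies one level up for the system workload mix, selecting among whole workloads rather than individual transactions.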

What are the time requirements for any/all back-end batch processes (specify peak vs. nominal)?

Software performance testing – Pre-requisites for Performance Testing

A stable build of the system, which must resemble the production environment as closely as possible.

The performance testing environment should be isolated from other environments, such as user acceptance testing (UAT) or development: otherwise the results may not be consistent. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.

Software performance testing – Test conditions

In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in actual practice. The reason is that the workloads of production systems have a random nature, and while the test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability – except in the simplest systems.

Due to the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as “noise”) in their performance test environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.

Software performance testing – Timing

Performance test environment acquisition and preparation is often a lengthy and time-consuming process.

Software performance testing – Tools

In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response times.
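As one concrete example, Python’s standard-library profiler can show which function dominates runtime; the workload functions below are stand-ins for real application code:

```python
import cProfile
import io
import pstats

def slow_part():
    # Deliberately heavy: should dominate the profile.
    return sum(i * i for i in range(200_000))

def fast_part():
    return sum(range(1_000))

def workload():
    for _ in range(5):
        slow_part()
        fast_part()

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Rank functions by cumulative time and show the top few.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In the printed statistics, `slow_part` should appear near the top of the cumulative-time ranking, which is exactly the diagnostic signal a profiler provides.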

Software performance testing – Technology

The test result shows how the performance varies with the load, given as the number of users versus response time.

Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded – does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

It is therefore much faster and cheaper than performance testing, though it requires thorough understanding of the hardware platforms.

Software performance testing – Tasks to undertake

Tasks to perform such a test would include:

Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof)

Gather or elicit performance requirements (specifications) from users and/or business analysts

Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones

Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.)

Specify test data needed and charter effort (often overlooked, but often the death of a valid performance test)

Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies

Develop detailed performance test project plan, including all dependencies and associated time-lines

Install and configure injectors/controller

Configure the test environment (ideally identical hardware to the production platform), router configuration, quiet network (we don’t want results upset by other users), deployment of server instrumentation, database test sets developed, etc.

Execute tests – probably repeatedly (iteratively) – in order to see whether any unaccounted-for factor might affect the results
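One simple way to spot such factors is to compare run-to-run variability across iterations of the same test. A sketch, with a stand-in workload and an illustrative 10% threshold:

```python
import statistics
import time

def test_run():
    """One iteration of the test; the loop below is a stand-in workload."""
    start = time.perf_counter()
    sum(i * i for i in range(100_000))
    return time.perf_counter() - start

# Repeat the identical test and measure the spread of the results.
durations = [test_run() for _ in range(10)]
cv = statistics.stdev(durations) / statistics.mean(durations)
print(f"coefficient of variation across runs: {cv:.1%}")
if cv > 0.10:  # illustrative threshold, not a standard value
    print("high variability - investigate environmental interference")
```

A low coefficient of variation suggests the environment is quiet; a high one points to noisy neighbours, caching effects, or other unaccounted-for factors.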

Analyze the results – either pass/fail, or investigation of critical path and recommendation of corrective action

Software performance testing – Performance testing web applications

Activity 1. Identify the Test Environment.

Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate what combination of configuration settings will result in the most desirable performance characteristics.

Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

Activity 7. Analyze Results, Tune, and Retest. Analyze, consolidate, and share results data. Make a tuning change and retest, noting whether performance improved or degraded. Each improvement tends to be smaller than the previous one. When do you stop? When you reach a CPU bottleneck, the choices then are either to improve the code or to add more CPU.

Web testing – Web application performance tool

By doing so, the tool is useful for checking for bottlenecks and performance leakage in the website or web application being tested.

A WAPT faces various challenges during testing and should be able to conduct tests for:

Windows application compatibility where required

WAPT allows a user to specify how virtual users are involved in the testing environment: an increasing, constant, or periodic user load. Increasing the user load step by step is called a RAMP, where virtual users are increased from 0 to hundreds. A constant user load maintains the specified user load at all times. A periodic user load increases and decreases the user load from time to time.
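These three load shapes can be sketched as simple schedule functions mapping elapsed time to a virtual-user count; the durations and user counts here are illustrative assumptions, not WAPT parameters:

```python
import math

def ramp(t, total=300, peak=100):
    """RAMP: increase users step by step from 0 to peak over `total` seconds."""
    return min(peak, int(peak * t / total))

def constant(t, users=100):
    """Constant load: maintain the specified user load at all times."""
    return users

def periodic(t, base=50, amplitude=30, period=120):
    """Periodic load: raise and lower the user load from time to time."""
    return int(base + amplitude * math.sin(2 * math.pi * t / period))

# Sample each schedule at a few points in the test timeline (seconds).
for t in (0, 60, 150, 300):
    print(f"t={t:3d}s  ramp={ramp(t):3d}  constant={constant(t):3d}  "
          f"periodic={periodic(t):3d}")
```

A load generator would evaluate such a schedule each tick and start or stop virtual users to match the target count.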

For More Information, Visit:

store.theartofservice.com/the-performance-toolkit.html
