This program unpacks PlayStation 3 theme files (.p3t) so that you can touch up an existing theme to your liking or pull a particular wallpaper out of it (many themes include several). But remember: if you use content from another theme and release it, be sure to give credit!
Download p3textractor.zip from above. Extract the files to a folder with a program such as WinZip or WinRAR. Now there are multiple ways to extract the theme.
The first way is to simply open the p3t file with p3textractor.exe. If you don't know how to do this, right-click the p3t file and select Open With. Alternatively, open the p3t file and it will ask you to select a program to open it with. Click Browse and find p3textractor.exe where you previously extracted it. It will open CMD and extract the theme to extracted.[filename]. After that, any future p3t files will extract automatically when you open them.
The second way is very simple. Just drag the p3t file to p3textractor.exe. It will open CMD and extract the theme to extracted.[filename].
For the third way, first put the p3t file you want to extract into the same folder as p3textractor.exe. Open CMD and browse to the folder containing p3textractor.exe. Enter the following: p3textractor filename.p3t [destination path]. Replace filename with the name of the p3t file, and replace [destination path] with the name of the folder you want the files extracted to. A destination path is not required; by default it will extract to extracted.filename.
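As a concrete example (the file and folder names below are placeholders, not anything shipped with the program), suppose a theme called example.p3t sits next to p3textractor.exe in C:\p3t. The following CMD session would extract it into a folder named mytheme:

    cd C:\p3t
    p3textractor example.p3t mytheme

Leaving off mytheme would instead extract to extracted.example, as described above.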
Simplicity is the state or quality of being simple. Something easy to understand or explain seems simple, in contrast to something complicated. Alternatively, as Herbert A. Simon suggests, something is simple or complex depending on the way we choose to describe it.[1] In some uses, the label "simplicity" can imply beauty, purity, or clarity. In other cases, the term may suggest a lack of nuance or complexity relative to what is required.
The concept of simplicity is related to the field of epistemology and philosophy of science (e.g., in Occam's razor). Religions also reflect on simplicity with concepts such as divine simplicity. In human lifestyles, simplicity can denote freedom from excessive possessions or distractions, such as having a simple living style. In some cases, the term may have negative connotations, as when referring to someone as a simpleton.
There is a widespread philosophical presumption that simplicity is a theoretical virtue. This presumption that simpler theories are preferable appears in many guises. Often it remains implicit; sometimes it is invoked as a primitive, self-evident proposition; other times it is elevated to the status of a 'Principle' and labeled as such (for example, the 'Principle of Parsimony').[2]
According to Occam's razor, all other things being equal, the simplest theory is most likely true. In other words, simplicity is a meta-scientific criterion by which scientists evaluate competing theories.
A distinction is often made between two senses of simplicity: syntactic simplicity (the number and complexity of hypotheses) and ontological simplicity (the number and complexity of things postulated). These two aspects of simplicity are often referred to as elegance and parsimony respectively.[3]
John von Neumann defines simplicity as an important esthetic criterion of scientific models:
[...] (scientific model) must satisfy certain esthetic criteria - that is, in relation to how much it describes, it must be rather simple. I think it is worth while insisting on these vague terms - for instance, on the use of word rather. One cannot tell exactly how "simple" simple is. [...] Simplicity is largely a matter of historical background, of previous conditioning, of antecedents, of customary procedures, and it is very much a function of what is explained by it.[4]
The recognition that too much complexity can have a negative effect on business performance was highlighted in research undertaken in 2011 by Simon Collinson of the Warwick Business School and the Simplicity Partnership, which found that managers who are orientated towards finding ways of making business "simpler and more straightforward" can have a beneficial impact on their organisation.
Most organizations contain some amount of complexity that is not performance enhancing, but drains value out of the company. Collinson concluded that this type of 'bad complexity' reduced profitability (EBITDA) by more than 10%.[5]
Collinson identified a role for "simplicity-minded managers", managers who were "predisposed towards simplicity", and identified a set of characteristics related to the role, namely "ruthless prioritisation", the ability to say "no", willingness to iterate, to reduce communication to the essential points of a message and the ability to engage a team.[5] His report, the Global Simplicity Index 2011, was the first ever study to calculate the cost of complexity in the world's largest organisations.[6]
The Global Simplicity Index identified that complexity occurs in five key areas of an organisation: people, processes, organisational design, strategy, and products and services.[7] The research is repeated and published annually as the "global brands report".[8]: 3 The 2022 report incorporates a "brand simplicity score" and an "industry simplicity score".[9]
Research by Ioannis Evmoiridis at Tilburg University found that earnings reported by "high simplicity firms" are higher than among other businesses, and that such firms "exhibit[ed] a superior performance during the period 2010 - 2015", whilst requiring lower average capital expenditure and lower leverage.[8]: 18
"Receive with simplicity everything that happens to you." —Rashi (French rabbi, 11th century), citation at the beginning of the film A Serious Man (2009), Coen Brothers
^Baker, Alan (2022), "Simplicity", in Zalta, Edward N. (ed.), The Stanford Encyclopedia of Philosophy (Summer 2022 ed.), Metaphysics Research Lab, Stanford University, retrieved 2023-04-05
^Baker, Alan (2010-02-25). "Simplicity". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy (Fall 2013 ed.). Retrieved 2015-04-26. A distinction is often made between two fundamentally distinct senses of simplicity: syntactic simplicity (roughly, the number and complexity of hypotheses), and ontological simplicity (roughly, the number and complexity of things postulated). [...] These two facets of simplicity are often referred to as elegance and parsimony respectively. [...] The terms 'parsimony' and 'simplicity' are used virtually interchangeably in much of the philosophical literature.
^ von Neumann, John (1955). "Method in the Physical Sciences". In Leary, Lewis (ed.). The Unity of Knowledge. Garden City, N.Y.
Sarkar, S. (ed.) (2002). The Philosophy of Science: An Encyclopedia. London: Routledge. "simplicity".
Schmölders, Claudia (1974). Simplizität, Naivetät, Einfalt – Studien zur ästhetischen Terminologie in Frankreich und in Deutschland, 1674–1771 (PDF, 37 MB) (in German).
Wilson, Robert A.; Keil, Frank C., eds. (1999). The MIT Encyclopedia of the Cognitive Sciences. Cambridge, Massachusetts: The MIT Press. "parsimony and simplicity", pp. 627–629.
If Not God, Then What? (2007) by Joshua Fost, p. 93
The Campeonato Nacional de Liga de Primera División,[a] commonly known as the Primera División[b] or La Liga[c][2] and officially as LaLiga EA Sports[d][3] since 2023 for sponsorship reasons, is the top men's professional football division of the Spanish football league system. It is controlled by the Liga Nacional de Fútbol Profesional and is contested by 20 teams over a 38-matchday period.
Since its inception, 62 teams have competed in La Liga. Nine teams have been crowned champions, with Barcelona winning the inaugural La Liga and Real Madrid winning the title a record 36 times. Real Madrid are also the most recent winners, having won the 2023–24 edition. During the 1940s Valencia, Atlético Madrid and Barcelona emerged as the strongest clubs, winning several titles. Real Madrid and Barcelona dominated the championship in the 1950s, each winning four La Liga titles during the decade. During the 1960s and 1970s, Real Madrid dominated La Liga, winning fourteen titles, with Atlético Madrid winning four.[4] During the 1980s and 1990s Real Madrid were prominent in La Liga, but the Basque clubs of Athletic Club and Real Sociedad had their share of success, each winning two Liga titles. From the 1990s onward, Barcelona have dominated La Liga, winning seventeen titles to date.[5] Although Real Madrid has been prominent, winning eleven titles, La Liga has also seen other champions, including Valencia and Deportivo La Coruña.
According to UEFA's league coefficient rankings, La Liga was the top league in Europe in each of the seven years from 2013 to 2019 (calculated using accumulated figures from five preceding seasons) and led Europe for 22 of the 60 ranked years up to 2019, more than any other country. It also produced the continent's top-rated club more times (22) than any other league in that period, more than double that of second-placed Serie A (Italy), including the top club in 10 of the 11 seasons between 2009 and 2019; each of these pinnacles was achieved by either Barcelona or Real Madrid. La Liga clubs have won the most UEFA Champions League (20), UEFA Europa League (14), UEFA Super Cup (16) and FIFA Club World Cup (8) titles, and its players have accumulated the highest number of Ballon d'Or awards (24), The Best FIFA Men's Player awards (19)[e] and UEFA Men's Player of the Year awards (12).[f]
La Liga is one of the most popular professional sports leagues globally, with an average attendance of 26,933 for league matches in the 2018–19 season.[6] This is the eighth-highest of any domestic professional sports league in the world and the third-highest of any professional association football league in the world, behind fellow big five leagues Bundesliga and Premier League, and above Serie A and Ligue 1.[7][8] La Liga is also the seventh wealthiest professional sports league in the world by revenue, after the NFL, MLB, the NBA, the Premier League, the NHL, and the Bundesliga.[9]
From 2008 to 2016, it was sponsored by Banco Bilbao Vizcaya Argentaria and known as Liga BBVA. Then, from 2016 to 2023, it was sponsored by Banco Santander and known as LaLiga Santander. Since 2023, it has been sponsored by Electronic Arts and is known as LaLiga EA Sports.
The competition format follows the usual double round-robin format. During the course of a season, which lasts from August to May, each club plays every other club twice, once at home and once away, for 38 matches. Teams receive three points for a win, one point for a draw, and no points for a loss. Teams are ranked by total points, with the highest-ranked club crowned champion at the end of the season.
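As a worked illustration of the points system just described, here is a minimal Python sketch (not an official LaLiga tool; the club names and scores are made up) that turns a list of match results into a ranked points table under the three-points-for-a-win rule:

    from collections import defaultdict

    def league_table(results):
        """results: iterable of (home_team, away_team, home_goals, away_goals)."""
        points = defaultdict(int)
        for home, away, hg, ag in results:
            if hg > ag:
                points[home] += 3        # three points for a win
            elif hg < ag:
                points[away] += 3
            else:
                points[home] += 1        # one point each for a draw
                points[away] += 1
        # Rank by total points; real tie-breakers (head-to-head record,
        # goal difference) are omitted for brevity.
        return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

    sample = [("Club A", "Club B", 2, 1),
              ("Club B", "Club C", 1, 1),
              ("Club C", "Club A", 0, 3)]
    for club, pts in league_table(sample):
        print(club, pts)

Running the sketch prints Club A first with six points, with Clubs B and C on one point each.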
A system of promotion and relegation exists between the Primera División and the Segunda División. The three lowest-placed teams in La Liga are relegated to the Segunda División, and the top two teams from the Segunda División are promoted to La Liga, with an additional club promoted after a series of play-offs involving the third-, fourth-, fifth- and sixth-placed clubs.
The top four teams in La Liga qualify for the subsequent season's UEFA Champions League group stage. The winners of the UEFA Champions League and UEFA Europa League also qualify for the subsequent season's UEFA Champions League group stage.
The fifth-placed team in La Liga and the winner of the Copa del Rey also qualify for the subsequent season's UEFA Europa League group stage. However, if the Copa del Rey winner also finished in the top five places in La Liga, this place reverts to the team that finished sixth in La Liga. Furthermore, the sixth-placed team (or the seventh-placed team, if the sixth-placed team already qualifies via the Copa del Rey) qualifies for the subsequent season's UEFA Conference League play-off round.[12]
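The allocation described in the two paragraphs above can be summarised in a short Python sketch. It is an illustration only: it ignores the extra berths earned by the Champions League and Europa League winners, and the club names passed in are placeholders.

    def european_places(standing, copa_winner):
        """standing: clubs in final league order (index 0 = champion);
        copa_winner: the Copa del Rey winner."""
        champions_league = standing[:4]        # top four: Champions League group stage
        europa_league = [standing[4]]          # fifth place: Europa League
        if copa_winner in standing[:5]:
            europa_league.append(standing[5])  # cup winner already in the top five: spot reverts to sixth
            conference = standing[6]           # so the Conference League play-off spot moves to seventh
        elif copa_winner == standing[5]:
            europa_league.append(copa_winner)  # sixth place qualifies as cup winner
            conference = standing[6]           # Conference League spot again moves to seventh
        else:
            europa_league.append(copa_winner)  # cup winner from outside the top six
            conference = standing[5]           # sixth place: Conference League play-off
        return champions_league, europa_league, conference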
The number of places allocated to Spanish clubs in UEFA competitions is dependent upon the position a country holds in the UEFA country coefficients, which are calculated based upon the performance of teams in UEFA competitions in the previous five years. As of the end of the 2023–24 season, the ranking of Spain (and de facto La Liga) is second.[13]
In April 1928, José María Acha, a director at Arenas de Getxo, first proposed the idea of a national league in Spain. After much debate about the size of the league and who would take part, the Real Federación Española de Fútbol eventually agreed on the ten teams who would form the first Primera División in 1929. Arenas, Barcelona, Real Madrid, Athletic Club, Real Sociedad and Real Unión were all selected as previous winners of the Copa del Rey. Atlético Madrid, Espanyol and Europa qualified as Copa del Rey runners-up and Racing de Santander qualified through a knockout competition. Only three of the founding clubs (Real Madrid, Barcelona, and Athletic Club) have never been relegated from the Primera División.
In 1937, the teams in the Republican area of Spain, with the notable exception of the two Madrid clubs, competed in the Mediterranean League, and Barcelona emerged as champions. Seventy years later, on 28 September 2007, Barcelona asked the Royal Spanish Football Federation (Spanish acronym RFEF) to recognise that title as a Liga title. This action was taken after the RFEF was asked to recognise Levante's Copa de la España Libre win as equivalent to a Copa del Rey trophy. Nevertheless, the governing body of Spanish football has not yet made an outright decision.
1940s: Atlético Madrid, Barcelona and Valencia emerge
[Chart: results of the five champions during the post-war years, distinguishing La Liga champions, Copa del Generalísimo winners, and La Liga/Copa del Generalísimo doubles]
When the Primera División resumed after the Spanish Civil War, it was Atlético Aviación (nowadays Atlético Madrid), Valencia, and Barcelona that emerged as the strongest clubs. Atlético were only awarded a place during the 1939–40 season as a replacement for Real Oviedo, whose ground had been damaged during the war. The club subsequently won its first Liga title and retained it in 1941. While other clubs lost players to exile, execution, and as casualties of the war, the Atlético team was reinforced by a merger. The young, pre-war squad of Valencia had also remained intact and in the post-war years matured into champions, gaining three Liga titles in 1942, 1944, and 1947. They were also runners-up in 1948 and 1949.
Athletic Bilbao was one of the clubs most affected by the war, since many of its players (sympathizers of the Republican faction) went into exile in Latin America and very few returned. But thanks to a search for young talent, they managed to form the well-known second historic squad, made up of Rafael Iriondo, Venancio Pérez, José Luis Panizo, Agustín Gaínza and the legendary scorer Telmo Zarra (top scorer in La Liga history, among other records). They won a La Liga and Copa del Generalísimo double in 1943 and won the Cup again in 1944, 1945 and 1950, in addition to a Copa Eva Duarte (official predecessor of the Supercopa). Sevilla also enjoyed a brief golden era, finishing as runners-up in 1940 and 1942 before winning their only title to date in 1946.
Meanwhile, on the other side of Spain, Barcelona began to emerge as a force under the legendary Josep Samitier. A Spanish footballer for both Barcelona and Real Madrid, Samitier cemented his legacy with Barcelona. During his playing career with Barcelona he scored 133 goals and won the inaugural La Liga title and five Copa del Rey titles. In 1944, Samitier returned to Barcelona as a coach and guided them to their second La Liga title in 1945. Under Samitier and legendary players César Rodríguez, Josep Escolà, Estanislau Basora and Mariano Gonzalvo, Barcelona dominated La Liga in the late 1940s,[14] winning back-to-back La Liga titles in 1948 and 1949. The 1940s proved to be a successful decade for Barcelona, with three La Liga titles and one Copa del Rey, but the 1950s proved to be a decade of dominance, not just for Barcelona, but for Real Madrid as well.
1950s: FC Barcelona and Real Madrid dominate
Although Atlético Madrid, previously known as Atlético Aviación, were champions in 1950 and 1951 under mastermind Helenio Herrera, the 1950s continued the success FC Barcelona had enjoyed during the late 1940s.
During this decade, FC Barcelona's first golden era emerged under coach Ferdinand Daučík, winning back-to-back La Liga and Copa del Rey doubles in 1951–52 and 1952–53. In 1952, FC Barcelona made history yet again by winning five distinct trophies in one year. This team, composed of László Kubala, Mariano Gonzalvo, César Rodríguez Álvarez, and Joan Segarra, won La Liga, the Copa del Rey, the Copa Eva Duarte (predecessor of the Spanish Super Cup), the Latin Cup and the Copa Martini & Rossi. Their success in winning five trophies in one year earned them the name 'L'equip de les cinc Copes',[15] or The Team of the Five Cups.
In the latter part of the 1950s, coached by Helenio Herrera and featuring Luis Suárez, Barcelona again won back-to-back La Liga titles, in 1959 and 1960. In 1959, Barcelona also completed another La Liga/Copa del Rey double, their third double of the decade.
The 1950s also saw the beginning of the Real Madrid dominance. During the 1930s through the 1950s there were strict limits imposed on foreign players. In most cases, clubs could have only three foreign players in their squads, meaning that at least eight local players had to play in every game. During the 1950s, however, these rules were circumvented by Real Madrid who naturalised Alfredo Di Stéfano and Ferenc Puskás.[citation needed] Di Stéfano, Puskás, Raymond Kopa and Francisco Gento formed the nucleus of the Real Madrid team that dominated the second half of the 1950s. Real Madrid won their third La Liga in 1954 — their first since 1933 — and retained their title in 1955. In 1956, Athletic Club won their sixth La Liga title, but Real Madrid won La Liga again in 1957 and 1958.
All in all, Barcelona and Real Madrid won four La Liga titles each in the 1950s, with Atlético Madrid winning two and Athletic Club winning one during this decade.
Real Madrid dominated La Liga between 1960 and 1980, being crowned champions 14 times.[16] Real Madrid won five La Liga titles in a row from 1961 to 1965 as well as winning three doubles between 1960 and 1980. During the 1960s and 1970s, only Atlético Madrid offered Real Madrid any serious challenge. Atlético Madrid were crowned La Liga champions four times, in 1966, 1970, 1973, and 1977, and also finished second in 1961, 1963, and 1965. In 1971, Valencia won their fourth La Liga title under Alfredo Di Stéfano, and the Johan Cruyff-inspired Barcelona won their ninth La Liga in 1974.
1980s: Real Madrid dominate but the Basque clubs disrupt their monopoly
Real Madrid's monopoly in La Liga was interrupted significantly in the 1980s. Although Real Madrid won another five La Liga titles in a row from 1986 to 1990[17] under the brilliance of Emilio Butragueño and Hugo Sánchez, the Basque clubs of Real Sociedad and Athletic Bilbao also dominated the 1980s.[18] Real Sociedad won their first La Liga titles in 1981 and 1982; Luis Arconada, Roberto López Ufarte and Txiki Begiristain stood out from this team. Later, Athletic Bilbao managed to win two consecutive La Liga titles in 1983 and 1984, also achieving their fifth La Liga and Copa del Rey double in 1984; the stars Andoni Zubizarreta, Santi Urkiaga, Andoni Goikoetxea, Dani, Manuel Sarabia and Estanislao Argote made this success possible. For its part, Barcelona won their tenth La Liga title in 1985 under coach Terry Venables, their first La Liga win since 1974.
Johan Cruyff returned to Barcelona as manager in 1988, and assembled the legendary Dream Team.[19] When Cruyff took control of his Barcelona side, they had won only two La Liga titles in the previous 20 years. Cruyff decided to build a team composed of international stars and La Masia graduates in order to restore Barcelona to their former glory days.
Neon was discovered in 1898 alongside krypton and xenon, identified as one of the three remaining rare inert elements in dry air after the removal of nitrogen, oxygen, argon, and carbon dioxide. Its discovery was marked by the distinctive bright red emission spectrum it exhibited, leading to its immediate recognition as a new element. The name neon originates from the Greek word νέον, a neuter singular form of νέος (neos), meaning 'new'. Neon is a chemically inert gas, with no known uncharged neon compounds. Existing neon compounds are primarily ionic molecules or fragile molecules held together by van der Waals forces.
Most neon in the cosmos was synthesized by the nuclear fusion of oxygen and helium within stars, through the alpha-capture process. Despite its abundant presence in the universe and Solar System—ranking fifth in cosmic abundance following hydrogen, helium, oxygen, and carbon—neon is comparatively scarce on Earth. It constitutes about 18.2 ppm of Earth's atmospheric volume and a lesser fraction of the Earth's crust. Neon's high volatility and its inability to form compounds that would anchor it to solids explain its limited presence on Earth and the inner terrestrial planets; that same volatility facilitated its escape from planetesimals under the warmth of the nascent Sun in the early Solar System.
In American English, the word is used almost exclusively in its literal sense to describe something that is covered in blood; when used as an intensifier, it is seen by American audiences as a stereotypical marker of a British- or Irish-English speaker, without any significant obscene or profane connotations. Canadian English usage is similar to American English, but use as an expletive adverb may be considered slightly vulgar depending on the circumstances.
Use of the adjective bloody as a profane intensifier predates the 18th century. Its ultimate origin is unclear, and several hypotheses have been suggested. It may be a direct loan of Dutch bloote (modern spelling blote), meaning entire, complete or pure, which was suggested by Ker (1837) to have been "transformed into bloody, in the consequently absurd phrases of bloody good, bloody bad, bloody thief, bloody angry, etc., where it simply implies completely, entirely, purely, very, truly, and has no relation to either blood or murder, except by corruption of the word."[2]
The word "blood" in Dutch and German is used as part of minced oaths, in abbreviation of expressions referring to "God's blood", i.e. the Passion or the Eucharist. Ernest Weekley (1921) relates English usage to imitation of purely intensive use of Dutch bloed and German Blut in the early modern period.
A popularly reported theory suggested euphemistic derivation from the phrase by Our Lady. The contracted form by'r Lady is common in Shakespeare's plays around the turn of the 17th century, and Jonathan Swift about 100 years later writes both "it grows by'r Lady cold" and "it was bloody hot walking to-day"[3] suggesting that bloody and by'r Lady had become exchangeable generic intensifiers.
However, Eric Partridge (1933) describes the supposed derivation of bloody as a further contraction of by'r lady as "phonetically implausible".
According to Rawson's Dictionary of Euphemisms (1995), attempts to derive bloody from minced oaths for "by our lady" or "God's blood" are based on the attempt to explain the word's extraordinary shock power in the 18th to 19th centuries, but they disregard the fact that the earliest records of the word as an intensifier in the 17th to early 18th century do not reflect any taboo or profanity. It seems more likely, according to Rawson, that the taboo against the word arose secondarily, perhaps because of an association with menstruation.[4]
The Oxford English Dictionary prefers the theory that it arose from aristocratic rowdies known as "bloods", hence "bloody drunk" means "drunk as a blood".[5]
Until at least the early 18th century, the word was used innocuously. It was used as an intensifier without apparent implication of profanity by 18th-century authors such as Henry Fielding and Jonathan Swift ("It was bloody hot walking today" in 1713) and Samuel Richardson ("He is bloody passionate" in 1742).
After about 1750 the word assumed more profane connotations. Johnson (1755) already calls it "very vulgar", and the original Oxford English Dictionary article of 1888 comments the word is "now constantly in the mouths of the lowest classes, but by respectable people considered 'a horrid word', on a par with obscene or profane language".[6]
On the opening night of George Bernard Shaw's comedy Pygmalion in 1914, Mrs Patrick Campbell, in the role of Eliza Doolittle, created a sensation with the line "Walk! Not bloody likely!" and this led to a fad for using "Pygmalion" itself as a pseudo-oath, as in "Not Pygmalion likely".[7][8]
Bloody has always been a very common part of Australian speech and has not been considered profane there for some time. The word was dubbed "the Australian adjective" by The Bulletin on 18 August 1894. One Australian performer, Kevin Bloody Wilson, has even made it his middle name. Also in Australia, the word bloody is frequently used as a verbal hyphen, or infix, correctly called tmesis, as in "fanbloodytastic". In the 1940s an Australian divorce court judge held that "the word bloody is so common in modern parlance that it is not regarded as swearing". Meanwhile, Neville Chamberlain's government was fining Britons for using the word in public.[citation needed] In 2007 an Australian advertising campaign, So where the bloody hell are you?, was banned on UK television and billboards as the term was still considered an expletive.
The word as an expletive is seldom used in the United States of America. In the US the term is usually used when the intention is to mimic an Englishman. Because it is not perceived as profane in American English, "bloody" is not censored when used in American television and film; for example, in the 1961 film The Guns of Navarone, the actor Richard Harris at one point says: "You can't even see the bloody cave, let alone the bloody guns. And anyway, we haven't got a bloody bomb big enough to smash that bloody rock ..." – but bloody was replaced with ruddy for British audiences of the time.[citation needed]
The term bloody as an intensifier is now overall fairly rare in Canada, though still more common than in the United States.[citation needed] It is more commonly spoken in the Atlantic provinces, particularly Newfoundland.[9] It may be considered mildly vulgar depending on the circumstances.[citation needed]
In Singapore, the word bloody is commonly used as a mild expletive in colloquial Singapore English. The roots of this expletive derive from the influence and informal language that British officers used in their dealings with and training of soldiers in the Singapore Volunteer Corps and in the early days of the Singapore Armed Forces. When more Singaporeans were promoted to officers within the Armed Forces, most new local officers applied training methods similar to those of their former British officers from when they were cadets or trainees themselves. This includes some aspects of British Army lingo, like "bloody (something)". When the newly elected Singapore government implemented compulsory conscription, all able-bodied 18-year-old Singaporean males had to undergo training within the Armed Forces. When National Servicemen completed their service term, some brought the many expletives they had picked up during their service into the civilian world, and the word thus became part of the common culture in the city state.
The word "bloody" also managed to spread up north in neighbouring Malaysia, to where the influence of Singapore English has spread. The use of "bloody" as a substitute for more explicit language increased with the popularity of British and Australian films and television shows aired on local television programmes. The term bloody in Singapore may not be considered explicit, but its usage is frowned upon in formal settings.
The term is frequently used as an intensifier in colloquial South African English, in both explicit and non-explicit ways. It has also spread to Afrikaans as "bloedige" and is popular amongst many citizens in the country. It is also used by minors and is not considered offensive.
Many substitutions were devised to convey the essence of the oath, but with less offence; these included bleeding, bleaking, cruddy, smuddy, blinking, blooming, bally, woundy, flaming and ruddy.
Publications such as newspapers, police reports, and so on may print b⸺y instead of the full profanity.[10]
A spoken language equivalent is blankety or, less frequently, blanked or blanky; the spoken words are all variations of blank, which, as a verbal representation of a dash, is used as a euphemism for a variety of "bad" words.[10]
Use of bloody as an adverbial or generic intensifier is to be distinguished from its fixed use in the expressions "bloody murder" and "bloody hell". In "bloody murder", it has the original sense of an adjective used literally. The King James Version of the Bible frequently uses bloody as an adjective in reference to bloodshed or violent crime, as in "bloody crimes" (Ezekiel 22:2), "Woe to the bloody city" (Ezekiel 24:6, Nahum 3:1), "bloody men" (26:9, Psalms 59:2, 139:19), etc. The expression "bloody murder" goes back to at least Elizabethan English, as in Shakespeare's Titus Andronicus (c. 1591), "bloody murder or detested rape".
The expression "scream bloody murder" (in the figurative or desemanticised sense of "to loudly object to something" attested since c. 1860)[11] is now considered American English, while in British English, the euphemistic "blue murder" had replaced "bloody murder" during the period of "bloody" being considered taboo.[12]
The expression "bloody hell" is now used as a general expression of surprise or as a general intensifier; e.g. "bloody hell" being used repeatedly in Harry Potter and the Philosopher's Stone (2001, PG Rating). In March 2006 Australia's national tourism commission, Tourism Australia, launched an advertising campaign targeted at potential visitors in several English-speaking countries. The ad sparked controversy because of its ending (in which a cheerful, bikini-clad spokeswoman delivers the ad's call-to-action by saying "...so where the bloody hell are you?"). In the UK the BACC required that a modified version of the ad be shown in the United Kingdom, without the word "bloody".[13] In May 2006 the UK's Advertising Standards Authority ruled that the word bloody was not an inappropriate marketing tool and the original version of the ad was permitted to air. In Canada, the ad's use of "bloody hell" also created controversy.[14][15]
The longer "bloody hell-hounds" appears to have been at least printable in early 19th century Britain.[16] "Bloody hell's flames" as well as "bloody hell" is reported as a profanity supposedly used by Catholics against Protestants in 1845.[17]
^Stefania Biscetti, "The diachronic development of bloody: a case study in historical pragmatics". In Richard Dury, Maurizio Gotti, Marina Dossena (eds.) English Historical Linguistics 2006 Volume 2: Lexical and semantic change. Amsterdam/Philadelphia: John Benjamins Publishing Company. 2008, p. 55.
^John Bellenden Ker, An Essay on the Archæology of our Popular Phrases and Nursery Rhymes, London:Longman, Rees, Orme, Brown, Green & Co., 1837, pg 36.
^"More likely, the taboo stemmed from the fear that many people have of blood and, in the minds of some, from an association with menstrual bleeding. Whatever, the term was debarred from polite society during the whole of the nineteenth century." Rawson (1995).
^The Oxford English Dictionary. Vol. 1. Oxford: Clarendon Press. 1933. p. 933. 2. As an intensive: Very .... and no mistake, exceedingly; abominably, desperately. In general colloquial use from the Restoration to c1750; now constantly in the mouths of the lowest classes, but by respectable people considered 'a horrid word', on a par with obscene or profane language, and usually printed in the newspapers (in police reports, etc.) 'b⸺y'.
^so in London Theatre: A Collection of the Most Celebrated Dramatic Pieces, Correctly Given, from Copies Used in the Theatres Volumes 11-12 (1815), p. 59 "Bloody hell-hounds, I overheard you!"
^ John Ryan, Popery unmasked. A narrative of twenty years' Popish persecution (1845), p. 44.
In the sociology of knowledge, a controversy over the boundaries of autonomy inhibited analysis of any concept beyond relative autonomy,[3] until a typology of autonomy was created and developed within science and technology studies. According to this typology, the existing autonomy of the institution of science is "reflexive autonomy": actors and structures within the scientific field are able to translate or to reflect diverse themes presented by social and political fields, as well as to influence those fields regarding the thematic choices of research projects.
Institutional autonomy is the capacity, as a legislator, to implement and pursue official goals. Autonomous institutions are responsible for finding sufficient resources or for modifying their plans, programs, courses, responsibilities, and services accordingly.[4] But in doing so, they must contend with any obstacles that may arise, such as social pressure against cut-backs or socioeconomic difficulties. From a legislator's point of view, increasing institutional autonomy requires conditions of self-management and institutional self-governance to be put in place. An increase in leadership and a redistribution of decision-making responsibilities would be beneficial to the search for resources.[5]
Institutional autonomy was often seen as a synonym for self-determination, and many governments feared that it would lead institutions toward irredentism or secession. But autonomy should be seen as a solution to self-determination struggles. Self-determination is a movement toward independence, whereas autonomy is a way to accommodate distinct regions or groups within a country. Institutional autonomy can defuse conflicts regarding minorities and ethnic groups in a society. Allowing more autonomy to groups and institutions helps create diplomatic relationships between them and the central government.[6]
In governmental parlance, autonomy refers to self-governance. An example of an autonomous jurisdiction was the former United States governance of the Philippine Islands. The Philippine Autonomy Act of 1916 provided the framework for the creation of an autonomous government under which the Filipino people had broader domestic autonomy than previously, although it reserved certain privileges to the United States to protect its sovereign rights and interests.[7] Other examples include Kosovo (as the Socialist Autonomous Province of Kosovo) under the former Yugoslav government of Marshal Tito[8] and the Puntland Autonomous Region within the Federal Republic of Somalia.
Although autonomous self-governing institutions are often territorially defined, they may also take a non-territorial form. Such non-territorial solutions include, for example, cultural autonomy in Estonia and Hungary, national minority councils in Serbia, and the Sámi parliaments in the Nordic countries.[9][10]
Immanuel Kant (1724–1804) defined autonomy by three themes regarding contemporary ethics. Firstly, autonomy as the right to make one's own decisions, excluding any interference from others. Secondly, autonomy as the capacity to make such decisions through one's own independence of mind and after personal reflection. Thirdly, autonomy as an ideal way of living life. In summary, autonomy is the moral right one possesses, or the capacity one has to think and make decisions for oneself, providing some degree of control or power over the events that unfold within one's everyday life.[12]
The context in which Kant addresses autonomy is moral theory, where he asks both foundational and abstract questions. He believed that in order for there to be morality, there must be autonomy. "Autonomous" is derived from the Greek word autonomos,[13] where 'auto' means self and 'nomos' means to govern (nomos: as can be seen in its usage in nomárchēs, which means chief of the province). Kantian autonomy also provides a sense of rational autonomy, simply meaning that one rationally possesses the motivation to govern one's own life. Rational autonomy entails making one's own decisions, but it cannot be done solely in isolation. Cooperative rational interactions are required both to develop and to exercise our ability to live in a world with others.
Kant argued that morality presupposes this autonomy (German: Autonomie) in moral agents, since moral requirements are expressed in categorical imperatives. An imperative is categorical if it issues a valid command independent of personal desires or interests that would provide a reason for obeying the command. It is hypothetical if the validity of its command, that is, the reason why one can be expected to obey it, rests on the fact that one desires or is interested in something further that obedience to the command would entail. "Don't speed on the freeway if you don't want to be stopped by the police" is a hypothetical imperative. "It is wrong to break the law, so don't speed on the freeway" is a categorical imperative. The hypothetical command not to speed on the freeway is not valid for you if you do not care whether you are stopped by the police. The categorical command is valid for you either way. Autonomous moral agents can be expected to obey the command of a categorical imperative even if they lack a personal desire or interest in doing so. Whether they will obey it, however, remains an open question.
The Kantian concept of autonomy is often misconstrued, leaving out the important point about the autonomous agent's self-subjection to the moral law. It is thought that autonomy is fully explained as the ability to obey a categorical command independently of a personal desire or interest in doing so—or worse, that autonomy is "obeying" a categorical command independently of a natural desire or interest; and that heteronomy, its opposite, is acting instead on personal motives of the kind referenced in hypothetical imperatives.
In his Groundwork of the Metaphysic of Morals, Kant also applied the concept of autonomy to define the concept of personhood and human dignity. Autonomy and rationality are seen by Kant as the two criteria for a meaningful life. Kant would consider a life lived without these not worth living; it would be a life of value equal to that of a plant or insect.[14] According to Kant, autonomy is part of the reason that we hold others morally accountable for their actions. Human actions are morally praiseworthy or blameworthy in virtue of our autonomy. Non-autonomous beings such as plants or animals are not blameworthy, because their actions are non-autonomous.[14] Kant's position on crime and punishment is influenced by his views on autonomy. Brainwashing or drugging criminals into being law-abiding citizens would be immoral, as it would not respect their autonomy. Rehabilitation must be sought in a way that respects their autonomy and dignity as human beings.[15]
Friedrich Nietzsche wrote about autonomy and the moral fight.[16] Autonomy in this sense is referred to as the free self and entails several aspects of the self, including self-respect and even self-love. This can be interpreted as influenced by Kant (self-respect) and Aristotle (self-love). For Nietzsche, valuing ethical autonomy can dissolve the conflict between love (self-love) and law (self-respect) which can then translate into reality through experiences of being self-responsible. Because Nietzsche defines having a sense of freedom with being responsible for one's own life, freedom and self-responsibility can be very much linked to autonomy.[17]
The Swiss psychologist Jean Piaget (1896–1980) believed that autonomy comes from within and results from a "free decision". It is of intrinsic value, and the morality of autonomy is not only accepted but obligatory. When an attempt at social interchange occurs, it is reciprocal, ideal and natural for there to be autonomy regardless of why the collaboration with others has taken place. For Piaget, the term autonomous can be used to explain the idea that rules are self-chosen. By choosing which rules to follow or not, we are in turn determining our own behaviour.[18]
Piaget studied the cognitive development of children by analyzing them during their games and through interviews, establishing (among other principles) that the children's moral maturation process occurred in two phases, the first of heteronomy and the second of autonomy:
Heteronomous reasoning: Rules are objective and unchanging. They must be followed literally, because an authority ordains them, and they admit no exceptions or discussion. The basis of the rule is the superior authority (parents, adults, the State), which need not give reasons for the rules it imposes. Duties are conceived as given from the outside; one follows rules mechanically simply because they are rules, or in order to avoid punishment.
Autonomous reasoning: Rules are the product of an agreement and, therefore, are modifiable. They can be subject to interpretation and admit exceptions and objections. The basis of the rule is one's own acceptance of it, and its meaning has to be explained. Sanctions must be proportionate to the offence, on the understanding that offences can sometimes go unpunished, that collective punishment is unacceptable when the guilty party cannot be identified, and that circumstances may excuse a guilty party. Duties are conceived as given from oneself; moral motivation and sentiment follow from what one believes to be right.
The American psychologist Lawrence Kohlberg (1927–1987) continued the studies of Piaget. His studies collected information from different cultures in order to control for cultural variability, and focused on moral reasoning rather than on behavior or its consequences. Through interviews with adolescent and teenage boys, who were asked to try to solve "moral dilemmas", Kohlberg went on to further develop the stages of moral development. The answers they provided could be one of two things: either they chose to obey a given law, authority figure or rule of some sort, or they chose to take actions that would serve a human need but in turn break this given rule or command.
The most popular moral dilemma involved a man whose wife was approaching death due to a special type of cancer. Because the drug was too expensive for him to obtain on his own, and because the pharmacist who discovered and sold the drug had no compassion for him and only wanted profits, he stole it. Kohlberg asked these adolescent and teenage boys (10-, 13- and 16-year-olds) whether they thought that is what the husband should have done or not. Depending on their decisions, the answers they provided gave Kohlberg insight into their deeper rationales and thoughts and revealed what they valued as important. This value then determined the "structure" of their moral reasoning.[19]
Kohlberg established three levels of morality, each of which is subdivided into two stages. They are read in a progressive sense, that is, higher levels indicate greater autonomy.
Level 1: Premoral/Preconventional Morality: Standards are met (or not met) depending on the hedonistic or physical consequences.
[Stage 0: Egocentric Judgment: There is no moral concept independent of individual wishes, including a lack of concept of rules or obligations.]
Stage 1: Punishment-Obedience Orientation: The rule is obeyed only to avoid punishment. Physical consequences determine goodness or badness, and power is deferred to unquestioningly, with no regard for the human or moral value, or the meaning, of these consequences. Concern is for the self.
Stage 2: Instrumental-Relativist Orientation: Morals are individualistic and egocentric. There is an exchange of interests but always under the point of view of satisfying personal needs. Elements of fairness and reciprocity are present but these are interpreted in a pragmatic way, instead of an experience of gratitude or justice. Egocentric in nature but beginning to incorporate the ability to see things from the perspective of others.
Level 2: Conventional Morality/Role Conformity: Rules are obeyed according to the established conventions of a society.
Stage 3: Good Boy–Nice Girl Orientation: Morals are conceived in accordance with the stereotypical social role. Rules are obeyed to obtain the approval of the immediate group and the right actions are judged based on what would please others or give the impression that one is a good person. Actions are evaluated according to intentions.
Stage 4: Law and Order Orientation: Morals are judged in accordance with the authority of the system, or the needs of the social order. Laws and order are prioritized.
Level 3: Postconventional Morality/Self-Accepted Moral Principles: Standards of moral behavior are internalized. Morals are governed by rational judgment, derived from a conscious reflection on the recognition of the value of the individual inside a conventionally established society.
Stage 5: Social Contract Orientation: There are individual rights and standards that have been lawfully established as basic universal values. Rules are agreed upon through procedure, and society comes to consensus through critical examination in order to benefit the greater good.
Stage 6: Universal Principle Orientation: Abstract ethical principles are obeyed on a personal level in addition to societal rules and conventions. Universal principles of justice, reciprocity, equality and human dignity are internalized and if one fails to live up to these ideals, guilt or self-condemnation results.
Robert Audi characterizes autonomy as the self-governing power to bring reasons to bear in directing one's conduct and influencing one's propositional attitudes.[20]: 211–212 [21] Traditionally, autonomy is only concerned with practical matters. But, as Audi's definition suggests, autonomy may be applied to responding to reasons at large, not just to practical reasons. Autonomy is closely related to freedom but the two can come apart. An example would be a political prisoner who is forced to make a statement in favor of his opponents in order to ensure that his loved ones are not harmed. As Audi points out, the prisoner lacks freedom but still has autonomy since his statement, though not reflecting his political ideals, is still an expression of his commitment to his loved ones.[22]: 249
Autonomy is often equated with self-legislation in the Kantian tradition.[23][24] Self-legislation may be interpreted as laying down laws or principles that are to be followed. Audi agrees with this school in the sense that we should bring reasons to bear in a principled way. Responding to reasons by mere whim may still be considered free but not autonomous.[22]: 249, 257 A commitment to principles and projects, on the other hand, provides autonomous agents with an identity over time and gives them a sense of the kind of persons they want to be. But autonomy is neutral as to which principles or projects the agent endorses. So different autonomous agents may follow very different principles.[22]: 258 But, as Audi points out, self-legislation is not sufficient for autonomy since laws that do not have any practical impact do not constitute autonomy.[22]: 247–248 Some form of motivational force or executive power is necessary in order to get from mere self-legislation to self-government.[25] This motivation may be inherent in the corresponding practical judgment itself, a position known as motivational internalism, or may come to the practical judgment externally in the form of some desire independent of the judgment, as motivational externalism holds.[22]: 251–252
In the Humean tradition, intrinsic desires are the reasons the autonomous agent should respond to. This theory is called instrumentalism.[26][27] Audi rejects instrumentalism and suggests that we should adopt a position known as axiological objectivism. The central idea of this outlook is that objective values, and not subjective desires, are the sources of normativity and therefore determine what autonomous agents should do.[22]: 261ff
Autonomy in childhood and adolescence is when one strives to gain a sense of oneself as a separate, self-governing individual.[28] Between ages 1–3, during the second stage of Erikson's and Freud's stages of development, the psychosocial crisis that occurs is autonomy versus shame and doubt.[29] The significant event that occurs during this stage is that children must learn to be autonomous, and failure to do so may lead to the child doubting their own abilities and feeling ashamed.[29] When a child becomes autonomous it allows them to explore and acquire new skills. Autonomy has two vital aspects: an emotional component, in which one relies more on oneself than on one's parents, and a behavioural component, in which one makes decisions independently by using one's own judgement.[28] Styles of child rearing affect the development of a child's autonomy. Autonomy in adolescence is closely related to the quest for identity.[28] In adolescence, parents and peers act as agents of influence. Peer influence in early adolescence may help an adolescent gradually become more autonomous, as they become less susceptible to parental or peer influence with age.[29] In adolescence the most important developmental task is to develop a healthy sense of autonomy.[29]
In Christianity, autonomy is manifested as a partial self-governance on various levels of church administration. During the history of Christianity, there were two basic types of autonomy. Some important parishes and monasteries have been given special autonomous rights and privileges, and the best known example of monastic autonomy is the famous Eastern Orthodox monastic community on Mount Athos in Greece. On the other hand, administrative autonomy of entire ecclesiastical provinces has throughout history included various degrees of internal self-governance.
In ecclesiology of Eastern Orthodox Churches, there is a clear distinction between autonomy and autocephaly, since autocephalous churches have full self-governance and independence, while every autonomous church is subject to some autocephalous church, having a certain degree of internal self-governance. Since every autonomous church had its own historical path to ecclesiastical autonomy, there are significant differences between various autonomous churches in respect of their particular degrees of self-governance. For example, churches that are autonomous can have their highest-ranking bishops, such as an archbishop or metropolitan, appointed or confirmed by the patriarch of the mother church from which it was granted its autonomy, but generally they remain self-governing in many other respects.
In the history of Western Christianity the question of ecclesiastical autonomy was also one of the most important questions, especially during the first centuries of Christianity, since various archbishops and metropolitans in Western Europe have often opposed centralizing tendencies of the Church of Rome.[30] As of 2019, the Catholic Church comprises 24 autonomous (sui iuris) Churches in communion with the Holy See. Various denominations of Protestant churches usually have more decentralized power, and churches may be autonomous, thus having their own rules or laws of government, at the national, local, or even individual level.
Sartre takes up the concept of the Cartesian God as totally free and autonomous. He states that existence precedes essence, with God being the creator of the essences, eternal truths and divine will. This pure freedom of God relates to human freedom and autonomy, where a human is not subjected to pre-existing ideas and values.[31]
Under the First Amendment, the federal government of the United States is restricted from establishing a national church. This reflects the First Amendment's recognition of people's freedom to worship according to their own beliefs. For example, the American government has removed the church from its "sphere of authority"[32] due to the churches' historical impact on politics and their authority over the public. This was the beginning of the disestablishment process. The Protestant churches in the United States had a significant impact on American culture in the nineteenth century, when they organized the establishment of schools, hospitals, orphanages, colleges, magazines, and so forth.[33] This has given rise to the famous, though often misinterpreted, term "separation of church and state". These churches lost the legislative and financial support of the state.
The first disestablishment began with the introduction of the Bill of Rights.[34] In the twentieth century, following the Great Depression of the 1930s and the end of the Second World War, the American churches, and specifically the Protestant churches, were revived. This was the beginning of the second disestablishment,[34] when churches had become popular again but held no legislative power. One of the reasons the churches gained attendance and popularity was the baby boom, when soldiers came back from the Second World War and started their families. The large influx of newborns gave the churches a new wave of followers. However, these followers did not hold the same beliefs as their parents, and brought about the political and religious revolutions of the 1960s.
During the 1960s, the collapse of the religious and cultural middle brought about the third disestablishment.[34] Religion became more important to the individual and less so to the community. The changes brought by these revolutions significantly increased the personal autonomy of individuals, as the lack of structural restraints gave them added freedom of choice. This concept is known as "new voluntarism",[34] whereby individuals have free choice in how to be religious, and free choice in whether to be religious at all.
In a medical context, respect for a patient's personal autonomy is considered one of many fundamental ethical principles in medicine.[35] Autonomy can be defined as the ability of a person to make his or her own decisions. This faith in autonomy is the central premise of the concepts of informed consent and shared decision-making. This idea, while considered essential to today's practice of medicine, was developed in the last 50 years. According to Tom Beauchamp and James Childress (in Principles of Biomedical Ethics), the Nuremberg trials detailed accounts of horrifyingly exploitative medical "experiments" which violated the subjects' physical integrity and personal autonomy.[36] These incidents prompted calls for safeguards in medical research, such as the Nuremberg Code, which stressed the importance of voluntary participation in medical research. It is believed that the Nuremberg Code served as the premise for many current documents regarding research ethics.[37]
Respect for autonomy became incorporated into health care, allowing patients to make personal decisions about the health care services they receive.[38] Notably, autonomy has several aspects, as well as challenges, that affect health care operations. The manner in which a patient is treated may undermine or support the patient's autonomy, and for this reason the way a patient is communicated with becomes very important. A good relationship between a patient and a health care practitioner needs to be well defined to ensure that the patient's autonomy is respected.[39] As in any other life situation, a patient does not want to be under the control of another person. The move to emphasize respect for patients' autonomy arose from the vulnerabilities that were pointed out with regard to autonomy.
However, autonomy does not apply only in a research context. Users of the health care system have the right to be treated with respect for their autonomy instead of being dominated by the physician; such domination is referred to as paternalism.[40] While paternalism is meant to be good for the patient overall, it can very easily interfere with autonomy.[41] Through the therapeutic relationship, a thoughtful dialogue between the client and the physician may lead to better outcomes for the client, as he or she is more of a participant in decision-making.
There are many different definitions of autonomy, many of which place the individual in a social context. Relational autonomy, which suggests that a person is defined through their relationships with others, is increasingly considered in medicine, particularly in critical[42] and end-of-life care.[43] Supported autonomy[44] suggests instead that in specific circumstances it may be necessary to temporarily compromise the autonomy of the person in the short term in order to preserve their autonomy in the long term. Other definitions of autonomy imagine the person as a contained and self-sufficient being whose rights should not be compromised under any circumstance.[45]
There are also differing views on whether modern health care systems should shift toward greater patient autonomy or a more paternalistic approach. For example, some argue that patient autonomy as currently practiced is plagued by flaws such as misconceptions about treatment and cultural differences, and that health care systems should shift toward greater paternalism on the part of the physician, given their expertise.[46] On the other hand, other approaches suggest that there simply needs to be an increase in relational understanding between patients and health practitioners to improve patient autonomy.[47]
One argument in favor of greater patient autonomy and its benefits is made by Dave deBronkart, who believes that in an age of technological advancement, patients are capable of doing much of their own research on medical issues from home. According to deBronkart, this helps promote better discussions between patients and physicians during hospital visits, ultimately easing the workload of physicians.[48] deBronkart argues that this leads to greater patient empowerment and a more educative health care system.[48] In opposition to this view, technological advancements can sometimes be viewed as an unfavorable way of promoting patient autonomy. For example, Greaney et al. argue that increasingly common self-testing medical procedures, while increasing patient autonomy, may not promote what is best for the patient. In this argument, contrary to deBronkart, current perceptions of patient autonomy oversell the benefits of individual autonomy and are not the most suitable way to treat patients.[49] Instead, a more inclusive form of autonomy should be implemented: relational autonomy, which takes into consideration those close to the patient as well as the physician.[49] These different concepts of autonomy can be troublesome, as the acting physician is faced with deciding which concept to implement in clinical practice.[50] Autonomy is often referenced as one of the four pillars of medicine, alongside beneficence, justice, and nonmaleficence.[51]
The capacity for autonomy varies, and some patients, especially minors, find it overwhelming when faced with emergency situations. Issues arise in emergency room situations where there may not be time to consider the principle of patient autonomy. Various ethical challenges are faced in these situations when time is critical and patient consciousness may be limited. However, in such settings where informed consent may be compromised, the working physician evaluates each individual case to make the most professional and ethically sound decision.[52] For example, it is believed that neurosurgeons in such situations should generally do everything they can to respect patient autonomy. In the situation in which a patient is unable to make an autonomous decision, the neurosurgeon should discuss the matter with the surrogate decision maker in order to aid in the decision-making process.[52] Performing surgery on a patient without informed consent is generally thought to be ethically justified only when the neurosurgeon and his or her team determine that the patient does not have the capacity to make autonomous decisions. If the patient is capable of making an autonomous decision, these situations are generally less ethically strenuous, as the decision is typically respected.[52]
Not every patient is capable of making an autonomous decision. For example, a commonly posed question is at what age children should partake in treatment decisions.[53] This question arises because children develop differently, making it difficult to establish a standard age at which children should become more autonomous.[53] Those who are unable to make decisions pose a challenge to medical practitioners, since it becomes difficult to determine a patient's ability to decide.[54] To some extent, it has been said that the emphasis on autonomy in health care has undermined the ability of health care practitioners to improve the health of their patients as necessary. The scenario has led to tension in the relationship between a patient and a health care practitioner: as much as a physician wants to prevent a patient from suffering, the physician still has to respect autonomy. Beneficence is a principle allowing physicians to act responsibly in their practice and in the best interests of their patients, which may involve overriding autonomy.[55] However, the gap between a patient and a physician has led to problems, because in other cases patients have complained of not being adequately informed.
The seven elements of informed consent (as defined by Beauchamp and Childress) include threshold elements (competence and voluntariness), information elements (disclosure, recommendation, and understanding), and consent elements (decision and authorization).[56] Some philosophers, such as Harry Frankfurt, consider Beauchamp and Childress's criteria insufficient. They claim that an action can only be considered autonomous if it involves the exercise of the capacity to form higher-order values about desires when acting intentionally.[57] In other words, patients may understand their situation and choices, but they are not acting autonomously unless they are able to form value judgements about their reasons for choosing treatment options.
In certain unique circumstances, government may have the right to temporarily override the right to bodily integrity in order to preserve the life and well-being of the person. Such action can be described using the principle of "supported autonomy",[44] a concept that was developed to describe unique situations in mental health (examples include the forced feeding of a person dying from the eating disorder anorexia nervosa, or the temporary treatment of a person living with a psychotic disorder with antipsychotic medication). While controversial, the principle of supported autonomy aligns with the role of government to protect the life and liberty of its citizens. Terrence F. Ackerman has highlighted problems with these situations; he claims that by undertaking this course of action, physicians or governments run the risk of misinterpreting a conflict of values as a constraining effect of illness on a patient's autonomy.[58]
Since the 1960s, there have been attempts to increase patient autonomy, including the requirement that physicians take bioethics courses during medical school.[59] Despite large-scale commitment to promoting patient autonomy, public mistrust of medicine in developed countries has remained.[60] Onora O'Neill has ascribed this lack of trust to medical institutions and professionals introducing measures that benefit themselves, not the patient. O'Neill claims that this focus on autonomy promotion has come at the expense of issues such as the distribution of healthcare resources and public health.
One proposal to increase patient autonomy is through the use of support staff, including medical assistants, physician assistants, nurse practitioners, nurses, and other staff who can promote patient interests and better patient care.[61] Nurses especially can learn about patient beliefs and values in order to increase informed consent and possibly persuade the patient, through logic and reason, to entertain a certain treatment plan.[62][63] This would promote both autonomy and beneficence, while keeping the physician's integrity intact. Furthermore, Humphreys asserts that nurses should have professional autonomy within their scope of practice (35–37). Humphreys argues that if nurses exercise their professional autonomy more, then there will be an increase in patient autonomy (35–37).
After the Second World War, there was a push for international human rights that came in many waves. Autonomy as a basic human right, alongside liberty, was a building block at the beginning of these layers.[64] The Universal Declaration of Human Rights of 1948 makes mention of autonomy, or the legally protected right to individual self-determination, in Article 22.[65]
Documents such as the United Nations Declaration on the Rights of Indigenous Peoples reconfirm existing international human rights law, but they are also responsible for ensuring that the rights highlighted with regard to autonomy, cultural integrity, and land rights are framed within an indigenous context, paying special attention to Indigenous peoples' historical and contemporary circumstances.[66]
Article 3 of the United Nations Declaration on the Rights of Indigenous Peoples likewise provides, through international law, human rights for Indigenous individuals by giving them a right to self-determination, meaning they have the liberty to choose their political status and may pursue and improve their economic, social, and cultural status in society by developing it. Another example is Article 4 of the same document, which gives them autonomous rights over their internal or local affairs and over how they can fund themselves in order to self-govern.[67]
Minorities within countries are also protected by international law.
Organisms respond to the changes brought by nightfall, including darkness, increased humidity, and lower temperatures. Their responses include direct reactions and adjustments to circadian rhythms, governed by an internal biological clock. These circadian rhythms, regulated by exposure to light and darkness, affect an organism's behavior and physiology. Animals more active at night are called nocturnal and have adaptations for low light, including different forms of night vision and the heightening of other senses. Diurnal animals are active during the day and sleep at night; mammals, birds, and some others dream while asleep. Fungi respond directly to nightfall and increase their biomass. With some exceptions, fungi do not rely on a biological clock. Plants store energy produced through photosynthesis as starch granules to consume at night. Algae engage in a similar process, and cyanobacteria transition from photosynthesis to nitrogen fixation after sunset. In arid environments like deserts, plants evolved to be more active at night, with many gathering carbon dioxide overnight for daytime photosynthesis. Night-blooming cacti rely on nocturnal pollinators such as bats and moths for reproduction. Light pollution disrupts the patterns in ecosystems and is especially harmful to night-flying insects.
Historically, night has been a time of increased danger and insecurity. Many daytime social controls dissipated after sunset. Theft, fights, murders, taboo sexual activities, and accidental deaths all became more frequent due in part to reduced visibility. Cultures have personified night through deities associated with some or all of these aspects of nighttime. The folklore of many cultures contains "creatures of the night," including werewolves, witches, ghosts, and goblins, reflecting societal fears and anxieties. The introduction of artificial lighting extended daytime activities. Major European cities hung lanterns housing candles and oil lamps in the 1600s. Nineteenth-century gas and electric lights created unprecedented illumination. The range of socially acceptable leisure activities expanded, and various industries introduced a night shift. Nightlife, encompassing bars, nightclubs, and cultural venues, has become a significant part of urban culture, contributing to social and political movements.
A planet's rotation causes nighttime and daytime. When a place on Earth is pointed away from the Sun, that location experiences night. The Sun appears to set in the West and rise in the East due to Earth's rotation.[1] Many celestial bodies, including the other planets in the solar system, have a form of night.[1][2]
The length of night on Earth varies depending on the time of year. Longer nights occur in winter, with the winter solstice being the longest.[3] Nights are shorter in the summer, with the summer solstice being the shortest.[3] Earth orbits the Sun on an axis tilted 23.44 degrees.[4] Nights are longer when a hemisphere is tilted away from the Sun and shorter when a hemisphere is tilted toward the Sun.[5] As a result, the longest night of the year for the Northern Hemisphere will be the shortest night of the year for the Southern Hemisphere.[5]
Night's duration varies least near the equator. The difference between the shortest and longest night increases approaching the poles.[6] At the equator, night lasts roughly 12 hours throughout the year.[7] The tropics have little difference in the length of day and night.[6] At the 45th parallel, the longest winter night is roughly twice as long as the shortest summer night.[8] Within the polar circles, night lasts the full 24 hours on the winter solstice.[5] The length of this polar night increases closer to the poles. Utqiagvik, Alaska, the northernmost town in the United States, experiences 65 days of polar night.[9] At the pole itself, polar night lasts 179 days from September to March.[9]
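The latitude dependence described above can be approximated with the standard sunrise equation. The sketch below is illustrative only (it is not drawn from the cited sources) and assumes a spherical Earth, ignoring refraction, the Sun's angular size, and elevation:

    from math import radians, degrees, tan, acos

    def night_length_hours(latitude_deg, solar_declination_deg):
        # Approximate night length from the sunrise equation:
        # cos(omega0) = -tan(latitude) * tan(declination)
        x = -tan(radians(latitude_deg)) * tan(radians(solar_declination_deg))
        if x >= 1:
            return 24.0   # Sun never rises: polar night
        if x <= -1:
            return 0.0    # Sun never sets: midnight sun
        omega0 = degrees(acos(x))     # sunrise hour angle in degrees
        daylight = 2 * omega0 / 15    # Earth rotates 15 degrees per hour
        return 24 - daylight

    print(night_length_hours(0, 0))          # equator at an equinox: 12.0 hours
    print(night_length_hours(45, -23.44))    # 45th parallel, December solstice: about 15.4 hours
    print(night_length_hours(45, 23.44))     # 45th parallel, June solstice: about 8.6 hours

Consistent with the figures above, this approximation makes the longest night at the 45th parallel roughly twice as long as the shortest one.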
Over a year, there is more daytime than nighttime because of the Sun's size and atmospheric refraction. The Sun is not a single point.[10] Viewed from Earth, the Sun ranges in angular diameter from 31 to 33 arcminutes.[11] When the center of the Sun falls level with the western horizon, half of the Sun will still be visible during sunset. Likewise, by the time the center of the Sun rises to the eastern horizon, half of the Sun will already be visible during sunrise.[12] This shortens night by about 3 minutes in temperate zones.[13] Atmospheric refraction is a larger factor.[10] Refraction bends sunlight over the horizon.[13] On Earth, the Sun remains briefly visible after it has geometrically fallen below the horizon.[13] This shortens night by about 6 minutes.[13] Scattered, diffuse sunlight remains in the sky after sunset and into twilight.[14]
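As a rough, illustrative check of these figures (not taken from the cited sources), near an equinox the Sun's altitude at the horizon changes by about 15 × cos(latitude) arcminutes per minute of time, so the extra minutes of daylight can be estimated directly:

    from math import cos, radians

    latitude = 45                              # degrees, a temperate latitude
    sink_rate = 15 * cos(radians(latitude))    # arcminutes of altitude lost per minute (equinox approximation)
    solar_radius = 16                          # the Sun's angular radius in arcminutes
    refraction = 34                            # typical refraction at the horizon in arcminutes

    print(round(2 * solar_radius / sink_rate, 1))   # about 3 minutes gained from the Sun's size (sunrise plus sunset)
    print(round(2 * refraction / sink_rate, 1))     # about 6 minutes gained from refraction

Both estimates agree with the approximate values given above.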
There are multiple ways to define twilight, the gradual transition to and from darkness when the Sun is below the horizon.[15] "Civil" twilight occurs when the Sun is between 0 and 6 degrees below the horizon. Nearby planets like Venus and bright stars like Sirius are visible during this period.[16] "Nautical" twilight continues until the Sun is 12 degrees below the horizon.[17] During nautical twilight, the horizon is visible enough for navigation.[18] "Astronomical" twilight continues until the Sun has sunk 18 degrees below the horizon.[16][19] Beyond 18 degrees, refracted sunlight is no longer visible.[19] The period when the sun is 18 or more degrees below either horizon is called astronomical night.[17]
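Because these categories are defined purely by the Sun's altitude, they can be expressed as a simple threshold check. The function below is an illustrative sketch; its name and interface are hypothetical, not from any cited source:

    def sky_phase(solar_altitude_deg):
        # Classify the sky by the Sun's altitude in degrees above the horizon,
        # using the civil / nautical / astronomical thresholds described above.
        if solar_altitude_deg >= 0:
            return "day"
        if solar_altitude_deg >= -6:
            return "civil twilight"
        if solar_altitude_deg >= -12:
            return "nautical twilight"
        if solar_altitude_deg >= -18:
            return "astronomical twilight"
        return "astronomical night"

    print(sky_phase(-4))     # civil twilight
    print(sky_phase(-20))    # astronomical night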
Similar to the duration of night itself, the duration of twilight varies according to latitude.[19] At the equator, day quickly transitions to night, while the transition can take weeks near the poles.[19] The duration of twilight is longest at the summer solstice and shortest near the equinoxes.[20] Moonlight, starlight, airglow, and light pollution create the skyglow that dimly illuminates nighttime.[21][22] The amount of skyglow increases each year due to artificial lighting.[21]
Night exists on the other planets and moons in the solar system.[1][2] The length of night is affected by the rotation period and orbital period of the celestial object.[23] The lunar phases visible from Earth result from nightfall on the Moon.[24] The Moon has longer nights than Earth, lasting about two weeks.[23] This is half of the synodic lunar month, the time it takes the Moon to cycle through its phases.[25] The Moon is tidally locked to Earth; it rotates so that one side of the Moon always faces the Earth.[26] The side of the Moon facing away from Earth is called the far side of the Moon and the side facing Earth is called the near side of the Moon. During lunar night on the near side, Earth is 50 times brighter than a full moon.[27] Because the Moon has no atmosphere, there is an abrupt transition from day to night without twilight.[28]
Night varies from planet to planet within the Solar System. Mars's dusty atmosphere causes a lengthy twilight period. The refracted light ranges from purple to blue, often resulting in glowing noctilucent clouds.[29] Venus and Mercury have long nights because of their slow rotational periods.[30] The planet Venus rotates once every 243 Earth days.[31] Because of its unusual retrograde rotation, a full day-night cycle on Venus lasts 116.75 Earth days.[32] The dense greenhouse atmosphere on Venus keeps its surface hot enough to melt lead throughout the night.[33][34] Its planetary wind system, driven by solar heat, reverses direction from day to night. Venus's winds flow from the equator to the poles on the day side and from the poles to the equator on the night side.[35][36] On Mercury, the planet closest to the Sun, the temperature drops over 1,000 °F (538 °C) after nightfall.[37]
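The day-night figures above follow from the standard relation between a planet's sidereal rotation, its orbital period, and its solar day; for a retrograde rotator like Venus the two angular rates add. A quick illustrative calculation, using approximate published values rather than the cited sources:

    # Solar day on Venus from its sidereal rotation and orbital period.
    # Retrograde rotation means the reciprocal rates add rather than subtract.
    sidereal_rotation = 243.0   # Earth days
    orbital_period = 224.7      # Earth days
    solar_day = 1 / (1 / sidereal_rotation + 1 / orbital_period)
    print(round(solar_day, 2))  # about 116.75 Earth days for a full day-night cycle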
The day-night cycle is one consideration for planetary habitability or the possibility of extraterrestrial life on distant exoplanets.[38] Some exoplanets, like those of TRAPPIST-1, are tidally locked. Tidally locked planets have equal rotation and orbital periods, so one side experiences constant day, and the other side constant night. In these situations, astrophysicists believe that life would most likely develop in the twilight zone between the day and night hemispheres.[39][40]
Living organisms react directly to the darkness of night.[42] Light and darkness also affect circadian rhythms, the physical and mental changes that occur in a 24-hour cycle.[43] This daily cycle is regulated by an internal "biological clock" that is adjusted by exposure to light.[43] The length and timing of nighttime depend on location and time of year.[44] Organisms that are more active at night possess adaptations to the night's dimmer light, increased humidity, and lower temperatures.[45]
Animals that are active primarily at night are called nocturnal and usually possess adaptations for night vision.[46] In vertebrates' eyes, two types of photoreceptor cells sense light.[47] Cone cells sense color but are ineffective in low light; rod cells sense only brightness but remain effective in very dim light.[48] The eyes of nocturnal animals have a greater percentage of rod cells.[47] In most mammals, rod cells contain densely packed DNA near the edge of the nucleus. For nocturnal mammals, this is reversed, with the densely packed DNA in the center of the nucleus, which reduces the scattering of light.[49] Some nocturnal animals have a mirror, the tapetum lucidum, behind the retina. This doubles the amount of light their eyes can process.[50]
The compound eyes of insects can see at even lower levels of light. For example, the elephant hawk moth can see in color, including ultraviolet, with only starlight.[46] Nocturnal insects navigate using moonlight, lunar phases, infrared vision, the position of the stars, and the Earth's magnetic field.[51] Artificial lighting disrupts the biorhythms of many animals.[52] Night-flying insects that use the moon for navigation are especially vulnerable to disorientation from increasing levels of artificial lighting.[53] Artificial lights attract many night-flying insects, which die from exhaustion or are taken by nocturnal predators.[54] Decreases in insect populations disrupt the overall ecosystem because their larvae are a key food source for smaller fish.[55] Dark-sky advocate Paul Bogard described the unnatural migration of night-flying insects from the unlit Nevada desert into Las Vegas as "like sparkling confetti floating in the beam's white column".[56]
Some nocturnal animals have developed other senses to compensate for limited light. Many snakes have a pit organ that senses infrared light and enables them to detect heat. Nocturnal mice possess a vomeronasal organ that enhances their sense of smell. Bats heavily depend on echolocation.[57] Echolocation allows an animal to navigate with their sense of hearing by emitting sounds and listening for the time it takes them to bounce back.[57] Bats emit a steady stream of clicks while hunting insects and home in on prey as thin as human hair.[58]
People and other diurnal animals sleep primarily at night.[59] Humans, other mammals, and birds experience multiple stages of sleep visible via electroencephalography.[60] The stages of sleep are wakefulness, three stages of non-rapid eye movement sleep (NREM) including deep sleep, and rapid eye movement (REM) sleep.[61] During REM sleep, dreams are more frequent and complex.[62] Studies show that some reptiles may also experience REM sleep.[63] During deep sleep, memories are consolidated into long-term memory.[64] Invertebrates most likely experience a form of sleep as well. Studies on bees, which have complex but unrelated brain structures, have shown improvements in memory after sleep, similar to mammals.[65]
Compared to waking life, dreams are sparse with limited sensory detail. Dreams are hallucinatory or bizarre, and they often have a narrative structure.[66] Many hypotheses exist to explain the function of dreams without a definitive answer.[66] Nightmares are dreams that cause distress. The word "night-mare" originally referred to nocturnal demons that were believed to assail sleeping dreamers, like the incubus (male) or succubus (female).[67] It was believed that the demons could sit upon a dreamer's chest to suffocate a victim, as depicted in John Henry Fuseli's The Nightmare.[67]
Fungi can sense the presence and absence of light, and the nightly changes in most fungi's growth and biological processes are direct responses to either darkness or falling temperatures.[44] By night, fungi are more engaged in synthesizing cellular components and increasing their biomass.[68] For example, fungi that prey on insects will infect the central nervous system of their prey, allowing the fungus to control the actions of the dying insect. During the late afternoon, the fungus pilots its prey to a higher elevation where wind currents can carry its spores further. The fungus kills and digests the insect as night falls, extending fruiting bodies from the host's exoskeleton.[69] Few species of fungi have true circadian rhythms.[44] A notable exception is Neurospora crassa, a bread mold widely used to study biorhythms.[70]
During the day, plants engage in photosynthesis and release oxygen. By night, plants engage in respiration, consuming oxygen and releasing carbon dioxide.[71] Plants can draw up more water after sunset, which facilitates new leaf growth.[72] As plants cannot create energy through photosynthesis after sunset, they use energy stored in the plant, typically as starch granules.[73] Plants use this stored energy at a steady rate, depleting their reserves almost exactly at dawn.[73] Plants adjust their rate of consumption to match the expected time until sunrise. This avoids prematurely running out of starch reserves,[73] and it allows the plant to adjust for longer nights in the winter.[74] If a plant is subjected to artificially early darkness, it will ration its energy consumption to last until dawn.[74]
Succulent plants, including cacti, have adapted to the limited water availability in arid environments like deserts.[75] The stomata of cacti do not open until night.[76] When the temperature drops, the pores open to allow the cacti to store carbon dioxide for photosynthesis the next day, a process known as crassulacean acid metabolism (CAM).[76][77] Cacti and most night-blooming plants use CAM to store up to
A cartoon is a type of visual art that is typically drawn, frequently animated, in an unrealistic or semi-realistic style. The specific meaning has evolved, but the modern usage usually refers to either: an image or series of images intended for satire, caricature, or humor; or a motion picture that relies on a sequence of illustrations for its animation. Someone who creates cartoons in the first sense is called a cartoonist,[1] and in the second sense they are usually called an animator.
The concept originated in the Middle Ages, and first described a preparatory drawing for a piece of art, such as a painting, fresco, tapestry, or stained glass window. In the 19th century, beginning in Punch magazine in 1843, cartoon came to refer – ironically at first – to humorous artworks in magazines and newspapers. Then it also was used for political cartoons and comic strips. When the medium developed, in the early 20th century, it began to refer to animated films that resembled print cartoons.[2]
A cartoon (from Italian: cartone and Dutch: karton—words describing strong, heavy paper or pasteboard) is a full-size drawing made on sturdy paper as a design or modello for a painting, stained glass, or tapestry. Cartoons were typically used in the production of frescoes, to accurately link the component parts of the composition when painted on damp plaster over a series of days (giornate).[3] In media such as tapestry or stained glass, the cartoon was handed over by the artist to the skilled craftsmen who produced the final work.
Such cartoons often have pinpricks along the outlines of the design so that a bag of soot patted or "pounced" over a cartoon, held against the wall, would leave black dots on the plaster ("pouncing"). Cartoons by painters, such as the Raphael Cartoons in London, Francisco Goya's tapestry cartoons, and examples by Leonardo da Vinci, are highly prized in their own right. Tapestry cartoons, usually colored, could be placed behind the loom, where the weaver would replicate the design. As tapestries are worked from behind, a mirror could be placed behind the loom to allow the weaver to see their work; in such cases the cartoon was placed behind the weaver.[2][4]
In print media, a cartoon is a drawing or series of drawings, usually humorous in intent. This usage dates from 1843, when Punch magazine applied the term to satirical drawings in its pages,[5] particularly sketches by John Leech.[6] The first of these parodied the preparatory cartoons for grand historical frescoes in the then-new Palace of Westminster in London.[7]
Editorial cartoons are found almost exclusively in news publications and news websites. Although they also employ humor, they are more serious in tone, commonly using irony or satire. The art usually acts as a visual metaphor to illustrate a point of view on current social or political topics. Editorial cartoons often include speech balloons and sometimes use multiple panels. Editorial cartoonists of note include Herblock, David Low, Jeff MacNelly, Mike Peters, and Gerald Scarfe.[2]
Comic strips, also known as cartoon strips in the United Kingdom, are found daily in newspapers worldwide, and are usually a short series of cartoon illustrations in sequence. In the United States, they are not commonly called "cartoons" themselves, but rather "comics" or "funnies". Nonetheless, the creators of comic strips—as well as comic books and graphic novels—are usually referred to as "cartoonists". Although humor is the most prevalent subject matter, adventure and drama are also represented in this medium. Some noteworthy cartoonists of humorous comic strips are Scott Adams, Charles Schulz, E. C. Segar, Mort Walker and Bill Watterson.[2]
Political cartoons are like illustrated editorials that serve as visual commentaries on political events. They offer subtle criticism, cleverly couched in humour and satire, to the extent that the person criticized does not become embittered.
The pictorial satire of William Hogarth is regarded as a precursor to the development of political cartoons in 18th century England.[11] George Townshend produced some of the first overtly political cartoons and caricatures in the 1750s.[11][12] The medium began to develop in the latter part of the 18th century under the direction of its great exponents, James Gillray and Thomas Rowlandson, both from London. Gillray explored the use of the medium for lampooning and caricature, and has been referred to as the father of the political cartoon.[13] Calling the king, prime ministers, and generals to account for their behaviour, many of Gillray's satires were directed against George III, depicting him as a pretentious buffoon, while the bulk of his work was dedicated to ridiculing the ambitions of revolutionary France and Napoleon.[13] George Cruikshank became the leading cartoonist in the period following Gillray, from 1815 until the 1840s. He was renowned for his social caricatures of English life for popular publications.
By the mid-19th century, major political newspapers in many other countries featured cartoons commenting on the politics of the day. Thomas Nast, in New York City, showed how realistic German drawing techniques could redefine American cartooning.[14] His 160 cartoons relentlessly pursued the criminal characteristics of the Tweed machine in New York City, and helped bring it down. Indeed, Tweed was arrested in Spain when police identified him from Nast's cartoons.[15] In Britain, Sir John Tenniel was the toast of London.[16] In France under the July Monarchy, Honoré Daumier took up the new genre of political and social caricature, most famously lampooning the rotund King Louis Philippe.
Political cartoons can be humorous or satirical, sometimes with piercing effect. The target of the humor may complain, but can seldom fight back. Lawsuits have been very rare; the first successful lawsuit against a cartoonist in over a century in Britain came in 1921, when J. H. Thomas, the leader of the National Union of Railwaymen (NUR), initiated libel proceedings against the magazine of the British Communist Party. Thomas claimed defamation in the form of cartoons and words depicting the events of "Black Friday", when he allegedly betrayed the locked-out Miners' Federation. To Thomas, the framing of his image by the far left threatened to grievously degrade his character in the popular imagination. Soviet-inspired communism was a new element in European politics, and cartoonists unrestrained by tradition tested the boundaries of libel law. Thomas won the lawsuit and restored his reputation.[17]
Cartoons such as xkcd have also found their place in the world of science, mathematics, and technology. For example, the cartoon Wonderlab looked at daily life in the chemistry lab. In the U.S., one well-known cartoonist for these fields is Sidney Harris. Many of Gary Larson's cartoons have a scientific flavor.
The first comic-strip cartoons were of a humorous tone.[18] Notable early humor comics include the Swiss comic-strip book Mr. Vieux Bois (1837), the British strip Ally Sloper (first appearing in 1867) and the American strip Yellow Kid (first appearing in 1895).
In the United States in the 1930s, books with cartoons were magazine-format "American comic books" with original material, or occasionally reprints of newspaper comic strips.[19]
In Britain in the 1930s, adventure comic magazines became quite popular, especially those published by DC Thomson; the publisher sent observers around the country to talk to boys and learn what they wanted to read about. The story line in magazines, comic books and cinema that most appealed to boys was the glamorous heroism of British soldiers fighting wars that were exciting and just.[20] DC Thomson issued the first The Dandy Comic in December 1937. It had a revolutionary design that broke away from the usual children's comics that were published broadsheet in size and not very colourful. Thomson capitalized on its success with a similar product The Beano in 1938.[21]
On some occasions, new gag cartoons have been created for book publication.
Because of the stylistic similarities between comic strips and early animated films, cartoon came to refer to animation, and the word cartoon is currently used in reference to both animated cartoons and gag cartoons.[22] While animation designates any style of illustrated images seen in rapid succession to give the impression of movement, the word "cartoon" is most often used as a descriptor for television programs and short films aimed at children, possibly featuring anthropomorphized animals,[23] superheroes, the adventures of child protagonists or related themes.
In the 1980s, cartoon was shortened to toon, referring to characters in animated productions. This term was popularized in 1988 by the combined live-action/animated film Who Framed Roger Rabbit, followed in 1990 by the animated TV series Tiny Toon Adventures.
^ Samuel S. Hyde, "'Please, Sir, he called me "Jimmy"!' Political Cartooning before the Law: 'Black Friday', J.H. Thomas, and the Communist Libel Trial of 1921", Contemporary British History (2011) 25(4), pp. 521–550.