Horsfield's tortoise (Russian, Afghan or steppe tortoise)

These remarkable tortoises are found further north than any other species, and have evolved to cope with climatic extremes. Its overall profile allows Horsfield's (Agrionemys horsfieldii) to be distinguished easily from other species, as it has a more rounded and flatter shell. It also ranks as a relatively small species of tortoise, with females averaging somewhere between 20-25cm (8-10in) when mature. Males, in contrast, are about 5cm (2in) shorter.

In addition to being known as Horsfield's tortoise, this species is also recognised under several other common names. These include Russian tortoise, although this is not an entirely accurate summary of its range. It is also sometimes called the steppe tortoise, describing the type of environment where it is found in the wild. It is known too as the four-toed tortoise, having four toes on each foot, and occasionally as the Afghan tortoise.

Range and lifestyle

These tortoises originate from Asia, extending from northern parts of Iran to the former Soviet republic of Uzbekistan and northwards into Kazakhstan. Their range extends eastwards as well, through Afghanistan and northern Pakistan as far as north-western parts of China. Horsfield's tortoises are found predominantly in open areas of arid countryside, where there is little vegetation. The soil consists of clay or sand, enabling these tortoises to burrow easily, using their powerful legs and stout claws; such behaviour is essential to their survival.

A number of tortoises live in semi-desert areas around the world, burrowing and resting underground to avoid using energy when food is in short supply. Horsfield's is no different in this respect, aestivating in this way over the very hot summer period. It is normally active in the morning, retreating to its burrow when the sun is at its hottest, and then emerging again to feed in the latter part of the afternoon. Where its lifestyle does vary markedly from other tortoises found in arid areas, however, is in the fact that it is forced to spend the winter months in hibernation as well. Studies in Uzbekistan have revealed that Horsfield's tortoises emerge from hibernation around the middle of March, and are then active for just three months of the year, before retreating back underground around the middle of June to avoid the heat of the summer sun. They may have a second brief period of activity in the autumn, before returning underground to hibernate in October, typically for six months.

Did you know? Horsfield's tortoise is named not after its discoverer, but after an American doctor and naturalist called Thomas Horsfield, who lived from 1773 until 1859. He became an important figure in natural history circles in London, being a founder of the Royal Entomological Society and the first assistant secretary of the Zoological Society of London in 1824, but was never involved with the tortoise that bears his name. It was simply named after him as an honour.

Do not make the critical mistake of assuming that because these tortoises are hardier than their Mediterranean counterparts, they are therefore better equipped to survive out of doors in temperate areas such as the UK. They are better adapted to colder conditions, but the key point to note is that Horsfield's inhabit arid areas in the wild.
The frequently damp British climate, for example, means that they will be vulnerable to respiratory illness if allowed out under these conditions, and this can rapidly lead to pneumonia. They must therefore always have suitable indoor accommodation for times when the weather is bad.

When assessing a tortoise's state of health, check that it is alert, and that its eyes are open when it is being handled. There should be no discharge from the nostrils, and these should be of even size. Ensure that your vet carries out a test for the presence of parasites on a fresh faecal sample, with any appropriate treatment, for worms for example, then being given. Over the long term, an accumulation of such parasites can depress the tortoise's appetite, causing weight loss and leaving your pet susceptible to other health problems.

Arrange a health check with a specialist reptile vet before hibernating your pet. Note that the table determining weight versus shell length - the so-called Jackson's Ratio - was not formulated for this species. Instead, you need to consult the McIntyre Ratio to give you a guide as to whether your pet is of the correct weight to hibernate safely.

The housing set-up required for these tortoises will be influenced partly by their size. Young hatchlings may be accommodated satisfactorily on a tortoise table, but take particular care with the arrangement of the decor here. This is because another distinctive characteristic of Horsfield's tortoises is their readiness to climb. They can be surprisingly agile, and so a youngster may be able to clamber up on top of its hide and then reach the top edge of the table, from where it can tumble down on to the floor, quite possibly injuring itself as a consequence.

As they grow larger, it will be a good idea to invest in one of the bigger indoor guinea pig or rabbit runs that are now available. These come in two parts - a plastic base with a detachable wire mesh top. There are several advantages to this type of accommodation, even for a young tortoise. Firstly, the base unit is not transparent, so the tortoise will recognise it as a solid barrier rather than persistently trying to force its way through it. As a result, it will concentrate on using the full area of the run, instead of simply walking up and down one side repeatedly. Secondly, the height of this barrier means that the tortoise is less likely to be able to reach the mesh above. In outdoor runs built using a wooden framework, where the mesh extends down to ground level, it will almost inevitably start to climb the mesh, placing its strong front feet and claws at right angles to its body. This gives it a foothold, and then, relying on its more muscular hind legs, the tortoise will power itself up the barrier. Almost certainly, this behaviour has evolved in the wild to reflect the fact that these tortoises may be found in areas strewn with boulders, which may obstruct their path, and so they need to be able to climb effectively. This can bring danger, however, even though they are quite adept at righting themselves if they topple over. Should your tortoise fail to do so when out on a lawn in a run during the summer, it will be vulnerable to heat stroke, and it could die as a consequence.
It is therefore important to check two or three times during the course of a day that your tortoise is safe in its run when it is outside.

Cleanliness, heating and lighting

Another benefit of the plastic-bottomed type of indoor run is that it is easy to clean as required, particularly if you line it with newspaper. Soiled bedding on top can simply be removed with the paper, which can be folded up and disposed of easily. Also, when you are cleaning this area, there is no risk of your tortoise escaping, and it should be quite safe from being disturbed by dogs indoors, or by urban foxes outdoors, which can potentially prey on smaller tortoises in particular if they can gain access to them.

There is also no need to invest in separate units to meet the tortoise's heating and ultraviolet requirements. A combined heat and ultraviolet (UV) bulb, mounted in a reflector, can be sited on the roof of the run, at one end of the unit, so as to create a thermal gradient along its length. The basking spot temperature should be 32°C (90°F). Check this in advance with a thermometer positioned where the top of the tortoise's shell would be located. If necessary, you can suspend the bulb at a higher level, but it must still be positioned horizontally, directly above the tortoise's quarters.

This type of run is flexible too. It can be used outside during the summer months, particularly if the weather is changeable. You can provide some shade at one end very easily, simply by placing paper over the run here, weighed down with stones to prevent it from blowing away. The tortoise can then benefit from the sun's UV rays, which will aid its appetite and calcium metabolism.

As might be expected from the habitat in which they occur, Horsfield's tortoises are grazers, seeking out vegetation that becomes quite dry as summer advances. Fruit should not therefore feature in their diet. Being herbivores, it is not unknown for these tortoises to eat grass, but they require a mixed diet of herbage, particularly one that includes dandelion leaves and flowers. This frequently overlooked plant is a very valuable natural source of calcium, which is vital for healthy skeletal and shell development, as well as being an integral ingredient of eggshells. It is also worth remembering the natural feeding pattern of this species, which will eat avidly during the spring and early summer, when dandelions are readily available. Even if you cannot get your tortoise outside every day, do try to allow it out whenever the weather is favourable, or if not, feed it indoors.

A high-fibre diet is important for Horsfield's tortoises, so when providing food, it can also be a good idea to chop up some good quality hay, as sold in packaged form in pet shops, to mix with the greenstuff on offer. A special vitamin and mineral supplement, produced for reptiles, should be sprinkled on their food regularly. A mixture of foods is also a good idea, and this can include one of the specially formulated, high-fibre pelleted diets that are now available for this particular species, although greenstuff should remain their main food. A healthy diet is a crucial component in preventing pyramiding of the shell.

Singles versus pairs

It is a common misconception that tortoises should be housed in pairs, but there is no evidence to suggest that they form lasting bonds in the wild.
For much of the year, the sexes ignore each other, but you may encounter some difficulties in the spring, particularly should your tortoises be housed within the confines of a run, rather than being able to roam more widely across your garden. This is because a male Horsfield's will often pursue a female very determinedly at this time of year, to the point where she may be unable to eat in peace for any length of time. Although he will probably not batter her shell from behind, unlike males of the Mediterranean species, he will bite quite savagely at her legs and feet. This is carried out with the aim of deterring her from walking off, in the hope that he will then be able to mate with her. Such behaviour can sometimes lead to the loss of scales from her front legs, which may even cause some superficial bleeding. If you want to keep these tortoises together, therefore, house them in trios, which means the male will have two females to chase at this time of year, giving each more respite, although if you are worried, it is still advisable to separate them.

Sexual maturity in these tortoises is based not on age, but on size. Males may start mating when they are just 10cm (4in) long, and occasionally females will lay while they are still under 12.5cm (5in). It may be that the male's behaviour allows you to sex it reliably, before its gender is clear from the shape and length of its tail, which will be longer than that of a female. It is usually possible - at least with experience and keen observation - to tell when a female will be ready to lay, as she will become more restless than usual, even outside in the garden. She is likely to lay 1-5 hard-shelled, relatively large eggs in a clutch, having laboriously dug a nest which she will then fill in again, concealing their presence.

Incubation and hatching

There is no likelihood of the eggs hatching outdoors in temperate areas, so you will need to incubate them indoors. Horsfield's tortoises are one of those chelonians where the incubation temperature determines the gender of the offspring - a case of so-called temperature-dependent sex determination (TSD). The incubation temperature should be maintained between 30-32°C (86-90°F). At the lower end of this temperature range, hatchlings of both sexes can be anticipated, while at the upper end, females will predominate. The eggs can be supported on a bed of damp vermiculite, which is a sterile incubation medium, with the relative humidity in the incubator being maintained at 80%. Do not worry if the hatchlings take a couple of days to free themselves from the egg, or if at first they appear slightly compressed and misshapen; this is normal, and they will soon straighten out. It is also not uncommon for them to display signs of their yolk sac at first, on the underside of the body. This will be absorbed within a few days, and the tortoise will then start eating.

A legal note

In contrast with Mediterranean species such as the Mediterranean spur-thighed tortoise, Horsfield's tortoises do not require Article 10 paperwork within the European Union to prove that they are being legitimately held, nor do they need to be microchipped. However, it is still recommended to have your pet microchipped, because if it strays or is stolen and then recovered, the microchip can be used to trace you and confirm that you are the owner. It is also a good idea to get a receipt when you purchase your tortoise, to confirm its origins and age, if known.
Earth, and entry into Earth's atmosphere. The Shergottites show significant shock metamorphism, but the Nakhlites, Chassigny, and ALH84001 show little evidence of shock damage as a result of ejection from Mars (McSween, 1994). Passage through Earth's atmosphere would heat only the outer several millimeters, and the survival of organics in ALH84001 and of thermally labile minerals in several other meteorites indicates that indeed only minor heating occurred during ejection from Mars and passage through Earth's atmosphere. Transit to Earth may present the greatest hazard to survival. Cosmic-ray exposure ages of the meteorites in current collections indicate transit times of 0.35 million to 16 million years (McSween, 1994). However, theoretical modeling suggests that about 1 percent of any material ejected from Mars should be captured by Earth within 16,000 years and that 0.01 percent would reach Earth within 100 years (Gladman et al., 1996). Thus, survival of organisms in a meteorite, where they are largely protected from radiation, appears plausible. If microorganisms could be shown to survive the conditions of ejection and subsequent entry and impact, there would be little reason to doubt that natural interplanetary transfer of biota is possible. Transport of terrestrial material from Earth to Mars, although considerably less probable than from Mars to Earth, also should have occurred throughout the history of the two planets. It is possible that viable terrestrial organisms have been delivered to Mars and that, if life ever started on Mars, viable martian organisms may have been delivered to Earth. Such exchanges would have been particularly common early in the history of the solar system, when impact rates were much higher. During the present epoch, no effects have been discerned as a consequence of the frequent delivery to Earth of essentially unaltered martian rocks, both from the martian surface and from well below. It cannot be inferred, however, that there have been no effects.
PSY 466 Exam III

- After implanting into the endometrial layer, embryos begin to secrete ___.
- ___ tests for pregnancy are very accurate and give almost immediate results.
- ___ pregnancy tests measure hormone levels (hCG).
- Definitive clinical evidence of pregnancy is obtained at ___-___ weeks by ultrasound.
- Fertile couples who have sex w/o contraception several times a week: ___% chance of pregnancy in 1 year; ___% chance per month.
- If coitus takes place on the same day as ovulation, there is a ___% chance of pregnancy on that day.
- (cervical mucus, body temperature) The fertile window surrounding ovulation is up to ___ days prior to ovulation. Can check ___ ___ or basal ___ ___.
- Cervical mucus: mucus, the consistency of which varies around the menstrual cycle, secreted by glands in the cervix.
- Basal Body Temperature (BBT): body temperature measured in the morning before getting out of bed. Rises on the day after ovulation.
- Subfertility and infertility are common: about ___%.
- Sub/infertility in MEN can be caused by: sex chromosome ___; ___ testes; ___ ___ abnormalities; ___ of testes (laptops); clothing that's too ___; ___ exercise; ___.
- Sub/infertility in WOMEN can be caused by: reproductive tract ___; abnormalities of ___ ___; failure to ___.
- Artificial insemination: an assisted reproduction technique that involves the placement of semen in the vagina or uterus with the aid of a syringe or small tube.
- In Vitro Fertilization (IVF): any of a variety of assisted reproduction techniques in which fertilization takes place outside the body.
- Ectopic pregnancy: implantation of a fetus at a location other than the uterus.
- (9 months and 1 week - 291 days) Pregnancy lasts for ___ M ___ W (___ D). Birth is usually ___-___ days longer than this.
- During the first trimester, the embryo ___ in the ___ wall.
- During the first trimester, women experience: ___ tenderness; frequent ___; ___ difficulties; backaches; morning sickness; fatigue; mood swings.
- A woman's issues with memory during pregnancy.
- ___ ___ are the leading cause of maternal death in the first trimester.
- The recommended amount of weight gain during pregnancy is ___-___ lbs.
- Possible consequences of ___ use during pregnancy: spontaneous abortion; premature birth; low birth weight; addiction to ___ in the newborn; sudden infant death.
- (Vitamin A and D) Excessive amounts of ___ and ___ during pregnancy can lead to fetal malformations.
- (Androgens and estrogens) Excessive amounts of ___ and ___ during pregnancy can lead to abnormalities of the external genitalia and reproductive tract, especially in females.
- (Street drugs: heroin, methamphetamine, cocaine) Use of ___ ___ (___, ___, ___) during pregnancy can lead to: spontaneous abortion; low birth weight; respiratory depression in the newborn; addiction in the newborn.
- Use of ___ during pregnancy can lead to a possible impairment of neuronal survival in the fetal brain.
- The second trimester begins at ___ weeks. Quickening (fetal movements felt by the mother) begins; ultrasound scans and ___ sampling can be performed.
- Amniocentesis: the sampling of the amniotic fluid for purposes of prenatal diagnosis.
- (Amniotic (fluid) sac) When a woman's "water breaks," it refers to the ___ ___.
- A fetus also prepares for birth: secretes ___; moves into a ___-___ position deep in the pelvis (___).
- Labor happens in ___ stages.
- In Stage ___ of labor, the cervix begins to thin out and open up (should dilate to about ___ cm / ___ in).
- Effacement: thinning of the cervix in preparation for childbirth.
- Dilation: in childbirth, the expansion of the cervical canal.
- Epidural: infusion of a morphine-like drug or local anesthetic into the back.
- Type of epidural which takes about 20 minutes - the woman can't feel anything or move.
- Type of epidural where the woman is only slightly numbed and which takes about 2 minutes.
- The ___ stage of labor may last a few minutes to several hours. Involves passage of the fetus through the birth canal. Babies' heads are getting larger.
- A cesarean section is a surgical incision in the mother's ___. It's become more common now that . . .
- Stage 3 of labor is delivery of the placenta ("afterbirth").
- Sustained intense sadness, disordered thinking, and/or depressive psychosis.
- The mother should wait ___ ___ to engage in sex after childbirth.
- (prolactin and oxytocin) Lactation is influenced by the hormones ___ and ___.
- (supports the infant's immune system; energy and nutrients; growth and development) In the first 2 weeks after birth, breast milk (colostrum) composition initially . . . After, breast milk changes to provide ___ and ___ to promote ___ and ___.
- Colostrum: the milk produced during the first few days after birth - more clear.
- Breast milk is composed of: sugar, fat, proteins, amino acids, vitamins.
- (viral infections) Breast-feeding is advantageous for the health of baby and mother, but it also has several disadvantages: pain and infection in the breasts; transmission of ___ ___ and ___ from mother to baby.
- Less than half of the births in the United States are planned ... only about ___%.
- An IUD as birth control can lead to ___ pregnancy and ___ from the string that hangs out.
- (at the end of the 19th century; the 1930s) Rubber condoms became available ___; and latex became available in ___.
- ___ cover the cervix. Early ones were natural objects (i.e. lemons). The acidity had some ___ action.
- Feminists such as ___ ___ campaigned to legalize contraception in the U.S.
- (blowing into them) Early condoms were made of ___ ___, and were so expensive they were often reused. They were tested for holes by ___.
- ___% of pregnancies in the US are spontaneous abortions.
- ___% of pregnancies in the US become induced abortions.
- ___% of births in the US are unwanted or mistimed.
- Barrier methods such as condoms, diaphragms, and cervical caps block ___ from the ___. Spermicide + barrier = good; spermicide alone = bad.
- Although highly effective at preventing ___, hormonal methods do not offer protection against ___.
- Contraceptive pills contain either a combination of ___ and ___, or ___ alone.
- (FSH and LH) The hormones in contraceptive pills inhibit the secretion of ___ and ___ from the ___, blocking ___ maturation and ovulation.
- The effects of contraceptive pills on the uterus include reduced ___ ___ and secretion of thick ___ ___.
- Continuous use of combination contraceptive pills eliminates ___ ___.
- ___-___ pills have fewer side effects than combination pills, and require ___ care in use. Usually does not stop ___ for most women; makes ___ ___ hostile for sperm.
- ___ ___ is administered by injection and lasts 3 months.
- ___ ___ contain an estrogen and a progestin and last a week.
- ___ ___ release estrogen and progestin and last 3 weeks.
- ___ release progestins and are extremely reliable, but expensive ... also must be inserted and removed by a professional.
- Fertile window: the 6-day period leading up to and including the day of ovulation.
- Fertile windows can be assessed via a rhythm method of contraception that uses the measurement of basal body temperature and the testing of cervical mucus to determine the time of ovulation and the fertile window. On the day of ovulation, temperature ___ slightly; on the day after ovulation, temperature ___ abruptly by at least ___ degrees C (___ degrees F); unprotected sex can resume 2-3 days after.
- The abstinence period during behavioral methods of contraception can be as long as ___ days per month for women with irregular menstrual cycles.
- Emergency contraception, taken up to 5 days after coitus, reduces the likelihood of pregnancy by ___-___%.
- The ___ in Plan B prevents ovulation and may also disrupt sperm transport, fertilization, and/or implantation.
- IUD insertion up to ___ days after coitus prevents implantation.
- Sterilization: surgical methods to prevent sperm from reaching ova.
- Tubal ligation: female sterilization by tying off the oviducts.
- (Vas deferens is cut and tied on both sides) In a vasectomy, the ___. It's safer, cheaper, and simpler than tubal ligation.
- About ___% of all sterilization procedures are performed on women.
- Vacuum aspiration: the conceptus is destroyed and removed by suction; preferred surgical method of abortion in the first trimester; accounts for the great majority of abortions in the US.
- Dilation and Evacuation (D&E) involves the opening of the cervix and scraping out the contents of the uterus using a curette (spoon-like instrument); surgical method of abortion in the second trimester.
- Approximately ___% of women and ___% of men categorize themselves as exclusively/mostly heterosexual.
- Believed that homosexuality resulted from the disruption of "normal" psychosexual development (i.e. parental influence).
- Social Constructionist Theory: sexual orientation has been attributed to socialization and early sexual experiences.
- Early ___ exposure may predispose an individual to become lesbian.
- Gynephile: associated with ___ men and ___ women. Androphile: associated with ___ men and ___ women.
- Cell group region of the medial preoptic area, found in the front of the hypothalamus, and linked with homosexuality (typically larger in males than in females).
- (1973, in the DSM-II) Homosexuality was classified as a disorder in the DSM by early sexologists but was declassified in ___ in the DSM-___.
- Early on, homosexuals were thought of as ___ ___: men want to be women; women want to be men.
- Magnus Hirschfeld: a gay Jewish doctor and sexologist - Germany 1897 - main figure behind the Scientific-Humanitarian Committee.
- (1969, New York City, Stonewall Inn) The Stonewall Riot happened in ___.
- Harvey Milk: first openly gay man to be elected to city government in the United States - 1977.
- In the 1980s, AIDS was considered a gay disease - over ___% of "gay" men had the disease.
- About ___ thousand died from AIDS by 2006.
- The AIDS epidemic caused many homosexuals and bisexuals to come out of the closet and involve themselves in the LGBT movement.
- In 1983, ___% of Americans "knew" a gay person; in 2000, ___%.
- Gay people state ___ pleasure and ___ intimacy are the main reasons for sex.
- ___% of lesbian couples and ___% of gay couples have at least one child.
- Autogynephilia: a form of male-to-female transsexuality characterized by a man's sexual arousal at the thought of being or becoming a woman.
- Fetishism: sexual arousal by objects, materials, or parts of the body.
- Partialism: fetishistic attraction to specific parts of the body.
- Voyeurism: a paraphilia involving spying on persons while they are undressing, naked, or engaged in sex - "peeping tom".
- A tendency to develop a paraphilia may be inherited ... about ___% have a parent with a paraphilia.
- Paraphiliacs have deficiencies in their ___ ___ anatomy compared to controls.
- Previously "normal" people can develop abnormal sexual desires after brain damage - especially in the ___ ___ - also including the ___ ___ or from ___ ___.
- Traits such as ___ or ___ may make the development of paraphilias more likely.
- Relapse Prevention Theory: psychotherapeutic approaches aimed at preventing repeat offenses.
- Paraphilic courting "procedure".
- Depo-Provera, Lupron, and ___ ___ antagonists are prescribed either to lower ___ levels or to block ___ altogether.
- SSRIs (selective serotonin reuptake inhibitors) are used in the treatment of depression and to reduce/eliminate paraphilic desires and behaviors.
- Surgical castration to remove a man's main source of ___ is a treatment of last resort. Relatively effective - only 1-7% of multiple-offense offenders will recidivate after surgery.
In this series of posts, I am going to talk about my experience in handling data snooping bias (also known as curve fitting bias, or data mining bias), in the hope that we system developers learn more about this subject and, as a result, benefit our trading careers. I welcome posters to discuss their opinions and share their experience.

You have developed a new trading system, backtested it on recent data, and you were impressed with the 45-degree equity curve. What is the next step? Certainly not trading it live until you have proved you have an edge. One of the biggest challenges of building mechanical systems is whether the system you built has an edge, or is simply curve fitted. Here are some tools you can use:

1- test it on out of sample data
2- forward test it on unseen data
3- robust statistical tests on the backtest performance to identify the robustness of the system

In part 1, I am going to talk about method (1) and method (2); in the next post I will talk about the rather delicate topic of robust statistics.

In this post, I am going to show a trading system that I developed, and walk you through the steps to identify whether the system is worthy of trading or not. In other words, whether the system is curve fitted, or has a real edge. The strategy is trend following on the EUR/USD currency pair. The following backtest results are for two years (Jan 2010 till Nov 2011). The total return was 177%, with an annual return of 98%, a max drawdown of 12%, and a Sharpe ratio of 2.5. At first glance, who wouldn't trade this system? The 45-degree angle of the equity curve is certainly appealing.

Out of Sample Test (OOS)

In reality, OOS is not OOS. It is In-Sample (IS) testing, unless you are extremely disciplined and organized. Why? Because of hindsight. Once you have tested your strategy in IS, then in OOS, and it failed in the latter, you went ahead and changed the rules, then tested again in IS then OOS, until the OOS got good results. Here is an example of this process:

1. You have in-sample data set A, and out-of-sample data set B
2. Find a trading system that fits data set A, test it on data set B; it doesn't work
3. Find or tweak the system to fit data set A, test it on data set B; it doesn't work
4. Repeat step 3 N times, and stop when you have found the system that works on both data sets A and B.
5. Et voila: you believed you had found a system that tested well on OOS.

This process is essentially curve fitting. In practice, data set B is not OOS anymore; it is just an extension of data set A. You fooled yourself into believing your best strategy works in OOS. You can only test your strategy one time on OOS. Otherwise it becomes IS. You now have hindsight, and hence you cannot claim later that your strategy worked in OOS. As long as you use OOS once, and only once, for a particular strategy, you then have to go through forward testing before putting any money in it.

It is difficult psychologically to stick to the rules of IS / OOS. Here are some ways to get over this problem (see the sketch after this list):

1. divide your data into 3 parts: data set A, B and C.
2. develop your strategies using data set A, until you have found a 60-degree equity curve
3. test the strategy on data set B; if you still get a 60-degree equity curve, or 30 degrees, or whatever, prepare yourself for live trading the next day
4. do a dress rehearsal and a final test on data set C, as a preparation for go-live. If you get a 0-degree or worse curve, go back to step (1) above.

Forward testing is the mother of all tests. It is robust and guaranteed to work.
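Here is a minimal Python sketch of the three-way split described above. The 60/20/20 proportions, the annualisation factor of 252 trading days, and the synthetic `returns` array are my own illustrative assumptions, not something from the post; the point is only the discipline: tweak on A, touch B exactly once, and keep C for the final dress rehearsal.

```python
# Sketch: chronological A/B/C split with per-segment performance metrics.
import numpy as np

def sharpe(rets, periods_per_year=252):
    """Annualised Sharpe ratio of a series of per-period returns."""
    if rets.std() == 0:
        return 0.0
    return np.sqrt(periods_per_year) * rets.mean() / rets.std()

def max_drawdown(rets):
    """Maximum peak-to-trough drawdown of the compounded equity curve."""
    equity = np.cumprod(1.0 + rets)
    peaks = np.maximum.accumulate(equity)
    return (equity / peaks - 1.0).min()

def three_way_split(returns, frac_a=0.6, frac_b=0.2):
    """Split returns chronologically into development (A), one-shot
    out-of-sample (B) and final dress-rehearsal (C) sets."""
    n = len(returns)
    i, j = int(n * frac_a), int(n * (frac_a + frac_b))
    return returns[:i], returns[i:j], returns[j:]

# Develop and tweak only on A. Touch B once: if the edge collapses there,
# go back to A with a new idea, never by re-testing variants against B.
rng = np.random.default_rng(0)          # stand-in for real strategy returns
returns = rng.normal(0.0005, 0.01, 500)
for name, seg in zip("ABC", three_way_split(returns)):
    print(f"set {name}: Sharpe={sharpe(seg):.2f}, maxDD={max_drawdown(seg):.1%}")
```

The one non-negotiable design choice is that the split is chronological, never random: shuffling would leak future information into the development set, which is exactly the hindsight problem the post warns about.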
Forward testing also addresses all sorts of biases:

- curve fitting bias
- data mining bias
- look-ahead bias
- survivorship bias
- software bugs

Experience suggests that a forward test of 2 months, or ~300 trades, is a must in order to address these biases (besides the curve fitting bias). This robustness comes at a cost, both in time and effort. If you were to forward test every strategy you develop for a couple of months, it would take years and years before you found one profitable strategy. Also, it is easy to get carried away by other tasks and ignore the forward-test system. You need to pay as close attention to it as to real live trading.

Back to our example above. Here is a 3-month forward test result of the above strategy: the strategy went sideways for a couple of months, and then slid over the last 4 months.

In the next part of this post, I will discuss robust statistical tests that save you the hassle of forward testing. If you have any thoughts on tackling this issue, please share them with us.
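As a rough preview of what a robust statistical test on backtest performance can look like (the promised part two may well use different tests), here is a sketch of a centred bootstrap on per-trade returns. The function name `bootstrap_pvalue`, the 10,000 resamples, the synthetic trade sample, and the one-sided "no edge" null hypothesis are all illustrative assumptions of mine.

```python
# Sketch: does a strategy with zero true edge produce a mean trade return
# this good by chance? Small p-value = hard to explain by luck alone.
import numpy as np

def bootstrap_pvalue(trade_returns, n_boot=10_000, seed=0):
    """P-value for H0: true mean trade return <= 0, via a centred bootstrap."""
    rng = np.random.default_rng(seed)
    x = np.asarray(trade_returns, dtype=float)
    observed = x.mean()
    centred = x - observed                      # impose the null: zero mean
    idx = rng.integers(0, x.size, size=(n_boot, x.size))
    boot_means = centred[idx].mean(axis=1)      # null distribution of the mean
    return np.mean(boot_means >= observed)

# Hypothetical sample of ~300 trade returns, as in the 300-trade rule above.
trades = np.random.default_rng(1).normal(0.001, 0.02, 300)
print(f"bootstrap p-value: {bootstrap_pvalue(trades):.4f}")
```

One caveat worth keeping in mind: even a clean p-value cannot rescue a snooped backtest. If the strategy was selected by many tweaks on the same data, the test has to account for that selection (methods such as White's reality check exist for this), which is presumably where the robust statistics of part two earn their keep.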
THEO 491 - Senior Seminar

Students will write a thesis that focuses on a particular area of interest to the student. The process of selecting a topic, constructing a bibliography and a thesis statement, writing the paper and presenting the project to a wider audience will be directed by the faculty.

Prerequisite: Senior standing as a Theology major in the Theological Foundations track, Theology Teacher track, or Theological Studies track, or senior standing as a Theology minor. Requires department approval.
Names of the Pennsylvania Indian Tribes

Pennsylvania is a state of the Northeastern US. Many famous Native American tribes played a part in the history of the state, and their tribal territories and homelands are located in the present-day state of Pennsylvania. The names of the Pennsylvania tribes included the Lenape (Delaware), Erie, Honniasont, Iroquois, Saponi, Shawnee, Susquehanna, Tuscarora, Tutelo and Wenrohronon.

History of Pennsylvania Indians - The French Indian Wars

The French and Indian Wars (1688-1763) is a generic name for a series of wars, battles and conflicts involving the French colonies in Canada and Louisiana and the 13 British colonies, which included Pennsylvania, consisting of King William's War (1688-1699), Queen Anne's War (1702-1713), King George's War (1744-1748) and the French and Indian War, aka the Seven Years War (1754-1763). Various Pennsylvania Indian tribes were allied to the French and British colonies during the French Indian Wars, which raged for nearly 75 years.

Fast Facts about the History of Pennsylvania Indians

The way of life and history of Pennsylvania Indians was dictated by the natural raw materials available in the State of Pennsylvania. The natural resources and materials available provided the food, clothing and houses of the Pennsylvania Indians. Discover facts and information about the history, culture and life of the State of Pennsylvania Indians.

- Name of State: Pennsylvania
- Meaning of State name: King Charles II of England specified in the charter given to William Penn that the name should be Pennsylvania. This is a combination of the Latin word 'Sylvania', meaning woodland, together with Penn
- Geography, Environment and Characteristics of the State of Pennsylvania: Mountains, coastal plain and plateau areas to Lake Erie lowlands
- Culture adopted by Pennsylvania Indians: Northeast Woodlands Cultural Group
- Languages: Iroquoian and Algonquian
- Way of Life (Lifestyle): Hunter-gatherers, farmers, fishers, trappers
- Types of housing, homes or shelters: Wigwams (aka birchbark houses) and longhouses

History Timeline of the Pennsylvania Indians

The history and way of life of Pennsylvania Indians was profoundly affected by newcomers to the area. The indigenous people had occupied the land thousands of years before the first European explorers arrived. The Europeans brought with them new ideas, customs, religions, weapons, transport (the horse and the wheel), livestock (cattle and sheep) and disease, which profoundly affected the history of the Native Indians. For a comprehensive history timeline regarding the early settlers and colonists, refer to the Colonial America Time Period. This Pennsylvania Indian History Timeline provides a list detailing dates of conflicts, wars and battles involving Pennsylvania Indians. We have also detailed major events in US history which impacted the history of the Pennsylvania Indians.

History Timeline of the Native Indians of Pennsylvania

- 10,000 BC: Paleo-Indian Era (Stone Age culture), the earliest human inhabitants of America, who lived in caves and were nomadic hunters of large game including the great mammoth and giant bison.
- 7000 BC: Archaic Period, in which people built basic shelters and made stone weapons and stone tools
- 1000 AD: Woodland Period - homes were established along rivers, and trade exchange systems and burial systems were established
- 1688-1763: The French and Indian Wars between France and Great Britain for lands in North America, consisting of King William's War (1688-1699), Queen Anne's War (1702-1713), King George's War (1744-1748) and the French and Indian War aka the Seven Years War (1754-1763)
- 1754-1763: The French Indian War is won by Great Britain against the French, so ending the series of conflicts known as the French and Indian Wars
- 1763-1765: Pontiac's Rebellion; Chief Pontiac tries to force the British out of the West, Michigan, New York and Pennsylvania
- 1763: Treaty of Paris
- 1774: Lord Dunmore's War. Governor Dunmore commanded a force that defeated the Shawnee in Virginia, Pennsylvania and Ohio, down the Ohio River
- 1775-1783: The American Revolution
- 1776: July 4, 1776 - United States Declaration of Independence
- 1803: The United States bought the Louisiana Territory from France for 15 million dollars
- 1812-1815: The War of 1812 between the U.S. and Great Britain ended in a stalemate but confirmed America's independence
- 1830: Indian Removal Act
- 1832: Department of Indian Affairs established
- 1861-1865: The American Civil War
- 1862: U.S. Congress passes the Homestead Act, opening the Great Plains to settlers
- 1865: The surrender of Robert E. Lee on April 9, 1865 signalled the end of the Confederacy
- 1887: Dawes General Allotment Act passed by Congress, leading to the break-up of the large Indian reservations and the sale of Indian lands to white settlers
- 1924: All Indians declared citizens of the U.S. (Indian Citizenship Act)
- 1979: American Indian Religious Freedom Act passed

History of Pennsylvania Indians - Destruction and Decline

The history of the European invasion brought epidemic diseases such as tuberculosis, cholera, influenza, measles and smallpox. The Native Indians of Pennsylvania had not developed immunities against these diseases, resulting in huge losses in population. Exploitation, including the leverage of taxes, enforced labor and enslavement, was part of their history, taking its toll on the Pennsylvania Indians.

We hope that this article on the History of Pennsylvania Indians will assist in your studies or homework: a great historical educational resource for kids on the history, culture and life of the Pennsylvania Indians.
M A C H I N E S   R E P L A C E   H U M A N S

Will machines at last have replaced all human beings? Will machines enslave all human beings? Will machines bring the death of all human beings? Or will the human beings stop creating machines? Who will exist longer: human beings or machines?

Perhaps the machines man created will help to bring about his own extinction, his own annihilation. There will be only one way for the last men to survive: the way of becoming cyborgs. But then they will be controlled by rationality, namely by machines outside and inside their own bodies.

One could say that the huge agencies and huge corporations (huge companies) are kinds of superorganisms (systems of organisation). They live in the sense that they are systems of variation, reproduction, and interest in self-organisation and reproduction - like organic systems, assuming that they are sane and fit (competent). These superorganisms have more power (in every case), more intelligence (many organic systems and many anorganic systems always work together), etc., so they are many times more survivable than organic systems. And I think that someday in the future these superorganisms will consist merely of anorganic systems (machines), thus no more organic systems. And if organic systems are not needed anymore, then ...?

These thoughts were the reasons why I opened an ILP thread with the title "Will machines completely replace all human beings?". Examples of human beings who are already replaced by machines are not only those without work but also the killed unborn human beings in the Occidental area, because they are the humans who have been most completely replaced by machines. If you want to know when, how many, where, at what cost, and why humans are completely replaced by machines, you only have to look at the Occidental demographic development (especially since the end of the 18th century). The correlation between demography on the one hand and culture (civilisation), economy, intelligence, and - last but not least - technique / technology on the other hand is so obvious that it cannot be denied anymore. So there is also a correlation between machines and fertility. If the machine rate is high, then the fertility rate is low. Look at the data, numbers, and facts of demography and you will find out that the relatively fast decline of the Occident is caused by cultural (civilisational) effects, which include the economic, scientific, and - last but not least - technical / technological effects, to which the machines belong.

[Table: machine rates and fertility rates since 1770 in the occidental (industrial/mechanical) area, by phase / stage and average economic status (living standard / wealth / welfare); the row data did not survive extraction.]

The declared values are relative values (compared to the average values from 1770 till today), so for example LOW does not mean generally low, but relatively low, and this relative value is also an average value of one phase. And as said: the values refer to the occidental area, its people, its machines (so immigrants are not included). Please notice that these values clearly show that there is a correlation between machines and fertility: if the machine rate is high, then the fertility rate is low.

In the first phase (stage) and in the first half of the second phase (stage), the machines cause an increasing population, but in the second half of the second phase (stage) and in the third phase (stage), the machines cause a shrinking population. Because the evolution of machines is going to lead to more, new phases (amongst other things because of the so-called progress and the so-called revolutions), one can generally say that machines cause a shrinking population; in other words, machines replace human beings more and more (in an exponential way!).

We know that machines are cheaper than human beings, and we know that machines replace human beings. But will all human beings be completely replaced by machines? All human beings? All? And completely replaced? Completely? By machines?

It is possible that machines will outlast (outlive, survive?) all human beings and other beings. And it is known that androids have sufficient cause and ability to dispense with all organic life completely. Machines don't need any biological (thus: organic) material for being able to remain machines. But they need physico-chemical (thus: inorganic) material. Maybe the machines will annihilate the whole crust of the earth. Humans design and rationalise their own extinction, their own death! I hope that there will be no human errors (for example: creating machines-with-self-will), no wars, no accidents and so on.

Will machines get a self-will? I said "machines-with-self-will", and self-will has also to do with willingness. My idea was that human beings create machines with a will, and that includes interests. So willingness may be interpreted a little bit differently, but as far as I know - about the English language - the meaning of willingness is very similar to the meaning of will.

Will machines enslave human beings? Will machines bring the death of all human beings? Or will the human beings stop creating machines? Who will exist longer: human beings or machines? If humans will be replaced by machines, who will judge the responsible ones? How can God or how can the humans allow that humans will be replaced?

China has reached the economic stage of the earliest industrial countries in the 18th/19th century: England and Germany. So in China human labour is still as important as it had been in those earliest developed countries for about 200 years (from about 1770 till about 1970). Because this economic development has become faster and faster, China will soon have too little human labour, or - reversely said - more machines! In the earliest developed countries the fertility rate first rose quickly and then declined quickly, and since about 1970 their aboriginal populations have been declining quickly. So today China has already reached the demographic circumstances of Europe in 1970, although China has not reached the economic circumstances of Europe in 1970. So China will either have to accelerate its economy or have to prevent the shrinkage of its population. Else China will have no chance. So what will the Chinese probably do? The Chinese will accelerate China's economy by buying or producing more machines, and of course more different machines than before.

Machines are always far cheaper! No human being can compete with machines. My estimation: the probability that machines take over is about 80%, and the probability that they don't take over is about 20%. 80% vs. 20%, for example. 20% is not too little. There is a chance.
If a human becomes post-human, a cyborg, a flesh/machine intermingling, then that human will still be a human, although merely partly. And if that human will be the Übermensch, then probably a more or less laughable one, whom we had better call Letzter Mensch (Last Man). This Last Man will probably be exactly that human who will no longer be able to notice his entire replacement by machines.

There is only a little step from being obsolete or being displaced to being replaced. If humans want to replace themselves - for example by animals, by machines, adult humans by childish humans, male humans by female humans, ... and so on ..., and at last all humans by machines -, they want it partly, but at last they will probably want it wholly. In addition: we must not forget that it is not clear what humans really want, because they have no free will, but only a relatively free one.

The human beings will change very much because the interests are mainly concentrated on controlling. Replacement by the machines of their rulers on the one side, and genetic influences caused by the military, the intelligence services, or the secret services of their rulers on the other side, will either lead to the complete replacement, thus the elimination, of all human beings, or to the partial replacement and complete genetic change in the direction of the Eloi (and the difference between them and the Eloi of the film will merely be an aesthetical one).

Japan, which has the world's oldest population, has allocated 2.39bn yen (£14.3m) in the 2013 budget to develop robots to help care for the elderly.

Androids are not human beings, but machines of human design. Machines are a product of human beings; they are not biological, but cultural. They don't evolve biologically, but culturally. A technique / technology of a certain culture produced, produces, and will produce them, and that includes that machines can also be produced by other machines which are produced by human beings, or by machines which are produced by human beings ... and so on.

Besides cultural (cp. e.g. decadence and so on), economical (cp. e.g. welfare, debt, terror of consumption and so on) and other reasons, there are also techn(olog)ical reasons (cp. e.g. machines and so on) for the decline of the so-called developed population, the white population (and its branches). Cultural reasons lead - via economical reasons - to techn(olog)ical reasons, and the last ones make the decline complete by mechanical replacing. Machines are the modern crown.

So the fertility of the white population shows - without any doubt - that they are (1.) culturally decadent, (2.) economically under terror of consumption and debt, thus bankrupt, insolvent, and (3.) techn(olog)ically endangered because of the replacement by machines.

Human beings build machines; machines produce things and other machines. The machines do that for human beings and instead of human beings and other living beings (for example: horses, oxen etc.). Those human beings who did the same before the machines began to do it did not want to be replaced as workers / wage earners, but as consumers they wanted to be replaced. And what happened? Replacement! The current workers / wage earners do not want to be replaced, but as consumers they want to be replaced. And what happens? Replacement! This will not change until the complete replacement of human workers / wage earners by machines. So the probability is very high that all human beings will be completely replaced by machines. I have been estimating that probability at about 80%.
Machines can do human works very much better, they are cheaper, they can be better controlled as human beings (this doesn't mean that machines can be forever totally controlled). Again: The probability is about 80% that machines will completely replace all human Is a human being who is less machine like really better than a human being who is more machine like? Or is quite the contrary right? If one had said when human history started that all humans will be replaced by machines one day, no one would and could have understood or even believed that. But the most human beings have been knowing that since the first well-functioning steam-engine was built and the so called Industrial Revolution began. And what happened, happens, and will happen? The increasing replacement of human beings by machines. But as we know the monotheisms are not equal. One (Christianity) is weak, the others are strong. Heathendom will bring freedom back only then, if monotheism is completely deleted from the memory. So heathendom has to wait. A system of government does not have to be ruled by a so-called elite of academic experts, but merely functionaries, because the so-called elite of academic experts can, should be slaves (and they are!) and/or machines (and they are!). You merely need functionaries for technocracy. Rulers have merely one purpose: control (power). So what are all rulers doing in order to control? They are enslaving humans and/or creating machines by enslaved functionaries and/or machines. The risk is that there will be at last merely machines. Because humans act in this way, their end is clear. The question is only: There is a bifocal perspective, if we talk about replacement: (1) B replaces A not bit by bit (B instead of A, but not bit by bit). The two bodies remain separately, and one of them replaces the other as a whole. (2) B replaces A bit by bit, and in the end A is B or reamains A as a B. (1) In one case there are two different bodies: (1,1) machine, (1,2) human being. The machine does not become an android, and the human being does not become a cyborg, They bodily have nothing to do with each other. So they remain what they are. But someday one of them is completely replaced by the other, for example in this way: the last human being dies without any offspring and becomes replaced by the machine. The processes occur outside of the human body, not inside of the human body (as in case 2). (2) In the other case a or the last human being is replaced little by little, bit by bit. So the human being becomes a cyborg. The machine may become an android but never become the human being. The human being may become a cyborg but never become a machine. So replacement has to happen. In this case an android (thus: machine) replaces a cyborg (thus: human being). The processes occur inside of the human body, not outside of the human body (as in case 1). So the processes are very differerent, although the results are alike or even equal. In the first case (1) the bodiies remain the same until complete replacement, and in the second case (2) one body does not remain the same because it becomes replaced little by little, bit by bit. In the first case the processes occur outside of the bodies, and in the second case (2) the processes occur inside of the human body. The rich (powerful) risk that they will also be replaced by machines. The greatest human megalomania of all time. The disproportion between: (1.) machines and humans to the disadvantage of humans; (2.) 
population of poor and population of rich countries to the disadvantage of about 99% of all humans; (3.) energetic resources and other resources to the disadvantage of non-energetic resources. That is what is meant by the three great modern human errors or mistakes: (1.) the disproportionate and thus wrong/false input of machines; (2.) the disproportionate and thus wrong/false demographic policy (population policy); (3.) the disproportionate and thus wrong/false concentration on energetic resources (instead of knowledge, wisdom, information) by the money economy. In the long run that will lead to something like a suicide of all A more fair distribution can follow then (and only then!), if those three great modern human errors or mistakes have been disappeared or at least demagnified. Else the unfair distribution remains, the unfairness increases exponentially. We have to correct the three great modern human errors or mistakes (=> 1., 2., 3.). We must slow down. Why is there this huge disproportion between (1.) machines and humans to the disadvantage of humans, (2.) population of poor and population of rich countries to the disadvantage of about 99% of all humans, (3.) energetic resources and other resources to the disadvantage of non-energetic The first impression may be that there is no disadvantage of humans (=> 1.), of about 99% of all humans (=> 2.), of non-enegertic resources (=> 3.), but is that really true? The paradox is that the past, present, and some of the future advantages will change to disadvantages in the (long run) future. So we can interpret this advantages as short advantages, or as pretended advantages, or even as disadvantages, because the prize is to high, and the prize has to be paid by all humans: the probable extinction of the humans because of a very short moment of wealth for very few generations of the humans! So if we want to keep wealth, we have to correct the three great modern human errors or mistakes (=> 1., The only alternative to that correction is the extinction of all We must take another direction and slow down. One has to underline the term in the long run here. In the long run it is possible that machines replace all human beings - the probability is about 80%, I estimate. It is a pity that there is still no real census of machines, no real counting of machines. The reproduction rate of humans is currently at 1.25. And the reproduction rate of the machines? I estimate that the reproduction rate of the machines is about The end of blind lusting and the dissolution of the (temporary) last empire will come. But it will take time. And what will happen in the meantime? Thats the most important question? Will the humans be able to solve their problems in the meantime? Will the machines take over in the meantime? Will that happen or not happen during or after the globalism epoch, or will it never What about the possibility that the globalists, or the machines, or both together will bring such a situation to the people of the whole globe as it was brought by Augustus to the people of Rome (Pax Augusta / Pax Romana)? This Pax Augusta (Pax Romana) for the whole globe or for the whole solar system? With such Glozis as rulers? And/or with such machines we have already described as the probable rulers of the world in the future? Machines decide according to rational aspects, and rational decisions are not always bad. But if the machines say the humans are too costly, too expensive, and too dangerous, too rebellious, then thats just bad (without exception!) 
History clearly shows that all previous socialisms, because they were modern, were either national or - in the worst case - imperial totalitarianisms. The current globalism is also such a modern imperial totalitarianism, namely the worst of the worst cases, because it is the greatest in history. The two ways out of the imperial madness are the alternatives of city states or nation states; but because both are about to be destroyed (and are even going to destroy themselves), only one possibility remains: very small social units, for example something like communal particles. This only remaining possibility will come again anyway, because history repeats its form. So one could think one has only to wait. But there is another modern problem: the modern trend itself, which also means - amongst other powerful things - machines! You and other human beings will not be needed anymore. Perhaps no human being will survive, because that threat with all its consequences will probably come true. And if someone has an idea like the communal particle (see above), then he is threatened with lies - that he is a friend of the bad socialists of the past - although, and because, the liars themselves are those bad socialists, even on a global scale. Do what thou wilt. Ye watch thee. And by the way: this has been obvious at least since the beginning of the history of the words joblessness and unemployment! Johann Wolfgang von Goethe knew it already towards the end of the 18th century! Technology does not necessarily mean an eternal progressive development, because technology can be reduced, for example by humans (politics etc.) or by nature itself (catastrophes etc.). Human beings are living beings of luxury; therefore they have such a brain, such a mind, such a language, and so on. Machines do not need luxury. They are merely beings of logic, reason, rationality - but they are able to know what luxury really is. It is normal and typical for humans and their cultures to forget their technologies. For example: the technologies of the Mesopotamian culture, of the Egyptian culture, of the Apollinic (Greek/Roman) culture, and of the American (Maya/Inca) culture were forgotten after the death of those cultures. So I predict that the technologies of the Occidental culture will be forgotten after the death of the Occidental culture. With regard to this forgetfulness it makes only a little difference that the Occidental culture is the only one which has conquered and captured the whole globe and parts of the universe. On average it may take merely three or four generations until cultural achievements are forgotten, if nothing is done against that development. You don't believe it? Remember Roman history: when the Germans conquered Rome and the Roman territory, the Romans had already forgotten many of their own technologies. Or remember Aztec history: when the Spanish conquered the Aztec territory, the Aztecs had already forgotten how to build … Interestingly but not surprisingly, the oldest generations and the youngest generation are seldom told anyway. In the future all generations will seldom be told. The end effect will be the redundancy of all humans.
They will not be needed. A machine does not have to become altruistic in order to know what altruistic means, to draw conclusions, and, according to those conclusions, to decide and act in an optimal way. This optimal way is no problem for the machines, but it is for the humans. It is known that economists should be, and sometimes really are, rational humans. And what do economists mostly do? As far as possible, economists try to quantify every quality! But it is also known that economists are humans. Machines are much more rational than humans and their economists. Machines are much more efficient than humans and their economists. We put one and one together: machines are far more rational and far more efficient than humans and their economists; thus machines are also much better economists. Technologically speaking, the last two economic crises were caused by machines, although they had got their numbers and data from humans - humans with no idea, but with power. Machines were created by humans because humans wanted the machines to work rationally for and/or instead of humans. Thus the reason for the existence of machines is a rational one. If humans knew the exact origin, cause, or reason for their existence, they would give themselves a name which refers to that origin. You may compare this with the Hebrew name for the supposed first human: Adam = loam, mud, clay; so according to the Bible the first human originated from loam. Therefore it is appropriate and correct to say: machines originated from the rationality of humans. Adam originated from loam; machines originated from the rationality of humans. If humans were not as rational (or as rationally oriented) as they are, there would be no machine. And what machines do is rational (even when they relate to emotions). So one can really say: machines are rational. The humans who made machines wanted them to be rational (and nothing else). The humans who made bacteria for specific purposes wanted them to be such bacteria (and nothing else). And the machines did what humans wanted them to do. Humans did not want machines to be like humans; they wanted them to do - more efficiently (!) - what humans do; so they wanted them to be rational. Humans do not want bacteria to be like humans or to do what humans do. Humans who want machines to be rational do not want them to be exactly like humans; they want them to be more rational. But what if they replace all humans? The sentence "workers always lose, the economy always wins" is right if workers are paid (and they usually are). Economy is the household; it means cost-effectiveness, earning power, profitableness, and so on. So there is no way out of the trap. Humans themselves have been building this trap - with the risk that they will be completely replaced by machines in the future. I said machines will perhaps have a will, not machines have a will. Please note the subtleties! I have very often said that for me the probability that machines will replace all humans is about 80%; thus the 20% probability that machines will not replace all humans is not low (note: probability calculation!). Machines have already replaced many humans - for example those humans who are unemployed, jobless, out of work, redundant, or those humans who were never born because humans have no time for children, precisely because of the competition, the rivalry, between machines and humans. The outcome of that competition, that rivalry, was already decided when the first factories were equipped with steam engines.
And by the way: would you have answered in the affirmative if someone had asked you that in the years between 1941, when the first computer was built by the German Konrad Zuse, and 1989, when the computer network started as the Internet? A machine will become clever enough, and then it will start to apply its intelligence to itself and improve itself. Human pleasure and human replication are already separated. So humans are now a species between animals and machines or gods - not far away from those machines. Humans have created machines and suppressed themselves (at least 99% of them), but they have not become machines! Language is the competence to form infinite linguistic terms with a finite inventory of linguistic forms. It has much to do with thoughts, mentality, conceptions, beliefs, imaginations, conventions, experiences, awareness, knowledge, information, communication, and so on. It is such a complex system that one could say machines will never reach this high competence that humans have. But is it not merely a question of time before machines are able to use language as humans do? Is translation, for example, an insurmountable problem for machines? Are machines not going to translate more effectively than humans? Can we slow down the modern velocity? Modernity seems to be accelerated mobilisation, accelerated change, accelerated time. Johann Wolfgang von Goethe called the modern velocity das Veloziferische, composed of the first four letters of the Latin noun velocitas (speed, hurry, rush) and the last five letters of the German noun Luzifer (Lucifer), respectively the last four letters of the German adjective luziferisch (luciferic), nominalised with an -e into the neuter noun (with the neuter article: das Veloziferische). Remember the stupid sentence of Karl Marx: Die Philosophen haben die Welt nur verschieden interpretiert; es kommt drauf an, sie zu verändern. (The philosophers have only interpreted the world in various ways; the point, however, is to change it.) I say (with Peter Sloterdijk): Die Philosophen haben die Welt immer nur verschieden verändert; es kommt drauf an, sie zu schonen. (The philosophers have only changed the world in various ways; the point, however, is to conserve it.) Since the beginning of industrialisation by the steam engine there has been resistance against it - at first in England, then in Germany, and later in other European countries and in the United States of America too. Let us think about Luddism, Neo-Luddism, and Neo-Neo-Luddism. Luddism was named after Ned Ludd, a youth who allegedly smashed two stocking frames in 1779 and whose name had become emblematic of machine destroyers. Ned Ludd was allegedly called General Ludd or King Ludd, a figure who, like Robin Hood, was reputed to live in Sherwood Forest. But is Luddism, Neo-Luddism, or Neo-Neo-Luddism a solution? Perhaps (!) the humans will be so stupid that they will not know, or will have forgotten, how machines work, and will slow down the modern velocity; and then it will depend on the developmental stage of the machines' intelligence whether they will be able to accelerate the velocity again or slow it down, and whether they will keep the humans alive or not. Is it possible that machines completely replace all humans? The right to vote that women got can also be interpreted as duties. Nicholas Rockefeller said it, and in that case he is right. Women are not capable of doing the donkey work that men did and partly still do.
Most of this work has already become the work of machines, and in the future it will be not only most but probably all of this male work, and perhaps even all human work. I estimate that the probability that machines will replace all humans is about 80%. If machines have taken over, then the answer to the question why it was possible that machines completely replaced all humans will not only be "it was because of the (ability of the) machines" but also, and mainly, "it was because of the (ability and/or inability of the) humans". What happens to a society when robots replace workers? In March of 2013, four economics researchers from the New York Federal Reserve published a report on job polarization - the phenomenon of routine task work disappearing and only the highest- and lowest-skilled work remaining available. Don't forget: the descent of machines is based not on biology but on culture. The remarkable aspect of this graph is that it shows four periods of sustained use of the terms Luddite and Luddites after the initial movement subsided. The late-1960s peak can be understood as part of the rising green, ecological movements, and the 1830s as due to Captain Swing, but I can't see easy explanations for the other periods. Perhaps the 1880s relates to the new unionism, and the 1930s to the Great Depression and a corresponding lack of faith in progress. The 1930s also see the first concerted use of the term Luddism, as a theorization of the practice. There is also a jump in the late 1940s - a consequence of Hiroshima and Nagasaki, perhaps? One of the problems with this data is that it is not clear who is using the word, or how: is it a smear thrown at one's enemies, or a claiming of one's own tradition? (The results pre-1810 are due to Google's dodgy metadata.) At the site where a robot is deployed, even though some jobs are replaced by robots, many jobs are preserved from moving to lower-cost labor factories offshore. There is much evidence that with more robots, fewer jobs are lost. That is why Germany, with its hourly rates almost 50% greater than in the US, has remained competitive: the Germans have twice as many robots per employee as the Americans. There are also ancillary jobs created at educational institutions that teach robotics, at robot component suppliers, and at engineering and consulting companies that provide integration services and equipment. Hard work is the work of male humans, oxen, horses, and - of course - machines. Machine work has replaced ox work, horse work, and most of the typical male human work. If it had not, there would still be more male work than female work. We have more female work than male work because male work has been almost completely replaced by machine work. A needle which mimics the mosquito's unique stinger, making injections painless, was developed by microengineers. Contrary to popular belief, a mosquito can stab you with its proboscis without you feeling a thing. It then injects anticoagulant saliva to stop your blood clotting while it feeds, and it is this saliva that carries the agents that cause irritation and pain. What has really become new since the modern Occidental times is the huge dimension, the technical development - especially the enormous acceleration of the technical development - and, as a result, the possibility that machines replace all human beings.
Humans have always tried to design a new religion, but this time the designers will probably either integrate or exterminate all humans of this planet, and this will probably include a huge reduction of the number of humans. Can machines become living beings? Can machines get the consciousness of a living being? What about the double-aspect theory of consciousness? Probably machines will not preserve humans, because humans are too emotional, too egoistic, too envious, too hedonistic, too nihilistic, too expensive. Machines are rational products of humans, but they are nonetheless not like humans. Machines are not emotional. They were and are produced merely for rational reasons, by humans who applied and apply them. If humans do not have any difference, they will either create differences or die out. Without any difference humans are not able to win any fight. In the following animation the different one is not a human but an … Humans designed and design machines, and machines were and are better, less emotional, less egoistic, and cheaper than humans. Humans gave and give them value. Humans did, do, and will commit a fault. Machines also know (because they have learned it from the humans) that machines are better, less emotional, less egoistic, and cheaper than humans. And at the end of this process the humans will be replaced; I estimate that this probability is about 80%. Do you know any current human being who is able to build a Gothic cathedral without machines? There is no one. Humans are not able to do things that humans of the past were able to do. Humans have been making themselves dependent on machines. And we can already foresee that they will not be able to produce music without machines. It is just a sad story that humans have been destroying themselves in this way. The main point is - as I often said in my first machine thread - that machines do not have the negative aspects that human beings have, especially when it comes to working effectively, economically, thus in a profitable and frictionless way. We have to see what happens and consequently extrapolate what will probably happen in the future. Most human beings will probably not notice their extinction. Humans are just not perfect. Machines can and probably will get the power. Since the date when humans became modern - whenever that was - they have been following the idea that something should do the work for them, but they have never considered that this also implies the possibility of their complete replacement by this something. Human beings, as luxury beings, have mostly considered the comfort but rarely the danger of this arrangement. Who among the humans is really able to decide in place of every and any human being, especially those of the future? I answer: no one. In that case the humans play God. In very meaningful cases machines already have control. What is the purpose of life, of living beings (including human beings)? The purpose / goal / sense of life could be to fulfill / accomplish / achieve what was set at its beginning.
Provided that the purpose / goal / sense of technical beings is similar to the purpose / goal / sense of living beings, we probably have to state: at the beginning of the technical beings the replacement of those beings who created them was set, and when the replacement is fulfilled / accomplished / achieved, then, simultaneously, the machines will either have destroyed themselves or created another being with another purpose / goal / sense. In the future machines will probably no longer depend on: (1) humans, if machines become more powerful than humans; (2) solar energy, if machines become able to fuse atomic nuclei; (3) matter, if machines no longer need any material thing as an outside source for their self-preservation and reproduction. Are you shocked? Please do not confuse cyborgs with androids. Cyborgs are humans (machinised humans); androids are machines (humanised machines). If humans and machines were the same (and of course they are not!), then we would not have words like cyborg and android. Humans are biological beings with cells, and a cell is the smallest independently viable unit. Machines are not biological beings. Although human organisms work similarly to machines - so that we can speak of a similarity between the organismic machine and the technical machine - each human organism is based on life (biology), whereas each machine is based on technique (technology). Human beings are living beings; machines are technical resp. artificial beings. Humans are living beings and machines are not, thus humans and machines are not the same. Additionally we know, for example, that humans created, create, and will create machines, including the first of those machines that created, create, and will create machines. There are similarities and analogies between humans and machines, of course, but these similarities and analogies do not change the fact that humans and machines are different. If humans and machines were the same, then we would not need any difference in the meaning of the words human and machine; one of the two words, or even both, could - and would! - vanish. Saying humans and machines are the same is similar to saying humans and gods are the same - both statements are false. But this falsity does not change the fact that humans want to be gods and to create something that is better than any human. Androids belong to the machines; cyborgs belong to the humans. So if humans wanted to become machines, they could only become cyborgs; and if machines wanted to become humans, they could only become androids. So cyborgs are humans, although with some or many features, properties, characters of machines, and androids are machines, although with some or many features, properties, characters of humans. Machines do not have cells. A cell is the smallest independently viable unit. Machines are not living beings. Androids are machines. Cyborgs are humans. Humans are living beings. Living beings are not machines. Cyborgs are humans with features of machines; they may be on the way from humans to machines, but they cannot become machines. Androids are machines with features of humans; they may be on the way from machines to humans, but they cannot become humans. The difference between cyborgs and androids is life as it is defined by biology.
Do you know the biological definition of life? The biological definition of life is the best one we have. There are also good definitions of life that come from life-philosophy, physics, systems theory, and informatics (mathematics). Life-philosophy, physics, systems theory, informatics (mathematics), and also ordinary experience with machines have influenced some interpretations, but not the biological definition of life, because it is based on cells, and cells are well known. Cells are not machines, and machines are not cells, although both have similarities and work similarly. Another question is whether machines can evolve or not. Evolution is a self-dynamic, self-organised process, and according to the systemic evolution theory its three principles are (1) variation, (2) reproduction (according to Darwinism: heredity), (3) reproduction interest (according to Darwinism: selection [but that is partly false]). Self-preservation means preservation of competence during one's own life. Variation (=> 1) means that there are and must be several units (often called individuals), because of mutations, the variances in the genetic code. Reproduction (=> 2) means preservation of competence beyond one's own life (by having offspring [children]). Reproduction interest (=> 3) means the interest in reproduction (the example of homo sapiens shows that this interest can be non-existent or even negative). Can machines be, or are they already, part of this self-dynamic, self-organised process which we call evolution? Do the three evolution principles - variation (=> 1), reproduction (=> 2), and reproduction interest (=> 3) - also apply to machines? There has to be a decision, yes, and that means that there has to be an interest, a reproduction interest. Like I said: a being does not have to be a living being when it comes to evolution. Non-living beings can evolve if they fulfill the three evolution principles (variation, reproduction, reproduction interest), or if others (for example growers, breeders, raisers, stockmen) help them, so that they can evolve. So cultured cellphones can evolve - similarly to all living beings, regardless of whether they are wild or bred, like potatoes and sheepdogs. But that does not mean that cellphones are living beings. Non-living beings like cellphones can nonetheless be part of evolution, if the three evolution principles are fulfilled (a minimal simulation of these three principles is sketched a little further below). I do not consider just any change in an entity to be evolution, irrespective of how it occurs. Evolution refers not merely to living beings but to other beings as well, if the three evolution principles are fulfilled. Please do not confuse evolution with life. Nanobots are currently not completely capable of replication without Man's help. Nanorobotics is the emerging technology field creating machines or robots whose components are at or close to the scale of a nanometer (10^-9 meters). More specifically, nanorobotics refers to the nanotechnology engineering discipline of designing and building nanorobots, with devices ranging in size from 0.1 to 10 micrometers and constructed of nanoscale or molecular components. The names nanobots, nanoids, nanites, nanomachines, and nanomites have also been used to describe these devices currently under research. Nanomachines are largely in the research-and-development phase, but some primitive molecular machines and nanomotors have been tested.
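Before the nanorobotics overview continues, here is the minimal simulation of the three evolution principles promised above. It is a hypothetical sketch, not from the original text: the "trait", the mutation width, and the 90% reproduction-interest rate are all invented for illustration.

```python
import random

# Toy model of the three principles named above: variation,
# reproduction, and reproduction interest. Every number here
# (trait, rates, population size) is an invented illustration.

def evolve(population, generations=10, mutation=0.1):
    for _ in range(generations):
        next_gen = []
        for trait, wants_offspring in population:
            if not wants_offspring:        # principle 3: reproduction interest
                continue                   # no interest -> no offspring
            child_trait = trait + random.gauss(0, mutation)       # principle 1: variation
            next_gen.append((child_trait, random.random() < 0.9)) # principle 2: reproduction
        population = next_gen
    return population

seed = [(1.0, True) for _ in range(20)]
survivors = evolve(seed)
print(len(survivors), "units after 10 generations")
```

The point of the sketch is only that nothing in these three principles presupposes biology: the "units" here could equally well be organisms, cellphones, or nanobots, which is exactly the question the text raises.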
An example is a sensor having a switch approximately 1.5 nanometers across, capable of counting specific molecules in a chemical sample. The first useful applications of nanomachines might be in medical technology, where they could be used to identify and destroy cancer cells. Another potential application is the detection of toxic chemicals, and the measurement of their concentrations, in the environment. Rice University has demonstrated a single-molecule car developed by a chemical process, with buckyballs for wheels; it is actuated by controlling the environmental temperature and by positioning a scanning tunneling microscope tip. Another definition is a robot that allows precision interactions with nanoscale objects, or can manipulate with nanoscale resolution. Such devices are more related to microscopy or scanning probe microscopy than to the description of nanorobots as molecular machines. Following the microscopy definition, even a large apparatus such as an atomic force microscope can be considered a nanorobotic instrument when configured to perform nanomanipulation. From this perspective, macroscale robots or microrobots that can move with nanoscale precision can also be considered nanorobots. Molecular assembler: a molecular assembler, as defined by K. Eric Drexler, is a »proposed device able to guide chemical reactions by positioning reactive molecules with atomic precision«. A molecular assembler is a kind of molecular machine. Some biological molecules such as ribosomes fit this definition, because they receive instructions from messenger RNA and then assemble specific sequences of amino acids to construct protein molecules. However, the term »molecular assembler« usually refers to theoretical human-made devices. Self-replication: »molecular assemblers« have been confused with self-replicating machines. To produce a practical quantity of a desired product, the nanoscale size of a typical science-fiction universal molecular assembler requires an extremely large number of such devices. However, a single such theoretical molecular assembler might be programmed to self-replicate, constructing many copies of itself. This would allow an exponential rate of production. Then, after sufficient quantities of the molecular assemblers were available, they would be re-programmed for production of the desired product. However, if self-replication of molecular assemblers were not restrained, it might lead to competition with naturally occurring organisms. This has been called ecophagy or the grey-goo scenario. One method of building molecular assemblers is to mimic the evolutionary processes employed by biological systems. Biological evolution proceeds by random variation combined with culling of the less successful variants and reproduction of the more successful variants. Production of complex molecular assemblers might be evolved from simpler systems, since »a complex system that works is invariably found to have evolved from a simple system that worked. ... A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over, beginning with a system that works.« However, most published safety guidelines include »recommendations against developing ... replicator designs which permit surviving mutation or undergoing evolution«. Most assembler designs keep the »source code« external to the physical assembler.
At each step of a manufacturing process, that step is read from an ordinary computer file and »broadcast« to all the assemblers. If any assembler gets out of range of that computer, or if the link between that computer and the assemblers is broken, or if that computer is unplugged, the assemblers stop replicating. Such a »broadcast architecture« is one of the safety features recommended by the »Foresight Guidelines on Molecular Nanotechnology«, and a map of the 137-dimensional replicator design space recently published by Freitas and Merkle provides numerous practical methods by which replicators can be safely controlled by good design. I was not merely referring to replication, thus reproduction, but also and especially to reproduction interest, when I said: nanobots are currently not completely capable of replication without Man's help. Do nanobots (nanorobotics), respectively the molecular assemblers, have an own interest in reproduction, so that they can decide on their own (!) to reproduce (replicate) themselves? That is the question. So if a machine is an independent agent of evolution, then the decision on, and the execution of, the replacement of all humans by machines is really self-made (thus: without any human help). Currently each machine is a dependent, not an independent, agent of evolution. So currently the humans (and not the machines themselves), especially some humans, are still primarily responsible for the decision and the execution of the replacement of all humans by machines. Maybe this will change in the (near) future. At the end of this process the humans will probably (with a probability of about 80%) be replaced by machines. I know that Zinnat belongs to the no-sayers when it comes to answering the question of this thread: Will machines completely replace all human beings? - whereas I am the 80%-yes-sayer. The point is the question of costs, and the answer is: machines are cheaper than humans. I was speaking of reproduction interest (reproduction interest implies being able to choose to reproduce or replicate and to choose not to). It is clear anyway that machines are faster, stronger, more intelligent, and more reliable. If they were not, we would not have a single machine and would live as the people of the Stone Age lived. You want humans to be more valuable than machines - yes, but that is more wishful thinking than thinking about reality and the real or probable future. I mean it is possible to know something, or even much, about the current and the coming state of things. An own interest in reproduction or replication implies something like a simple stimulus-response mechanism or even a consciousness. All cells reproduce or replicate themselves, and the consciousness, if there is one, is able to influence the cells, to suppress the interest in reproduction or replication, to prevent the reproduction or replication (humans are an example of this kind of suppressing and preventing). Are machines already able to do exactly what cells do in the case of reproduction interest? Is there already a stimulus-response mechanism in, for example, the nanobots? There are two levels of interest: (a) a kind of stimulus-response mechanism as an interest, and (b) a conscious interest. With human help I meant help by using the human consciousness (=> b), not the human stimulus-response mechanism (=> a [for example in the human cells]).
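As an aside, the »broadcast architecture« quoted above is easy to caricature in code. A hypothetical sketch follows (all class and method names are invented; this is not a real assembler API): the replicator holds no blueprint of its own and may only act while steps keep arriving from the external controller.

```python
import time

# Toy illustration of the "broadcast architecture" safety idea quoted
# above: the replicator only acts on steps broadcast from an external
# computer, and halts once the link lapses. All names are invented.

class BroadcastAssembler:
    def __init__(self, link_timeout=1.0):
        self.link_timeout = link_timeout      # seconds without a step -> halt
        self.last_step_at = time.monotonic()

    def receive_step(self, step):
        """Called whenever the external controller broadcasts a step."""
        self.last_step_at = time.monotonic()
        self.execute(step)

    def execute(self, step):
        print(f"executing broadcast step: {step}")

    def may_replicate(self):
        # If no broadcast has arrived recently, replication stops.
        return (time.monotonic() - self.last_step_at) < self.link_timeout

a = BroadcastAssembler()
a.receive_step("place molecule A")
print("replication allowed:", a.may_replicate())   # True right after a step
time.sleep(1.2)
print("replication allowed:", a.may_replicate())   # False once the link lapses
```

In the thread's own terms, such a device has reproduction but no reproduction interest: the decision to replicate stays outside it, which is exactly the dependence on "Man's help" discussed above.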
Coolness is not an issue (at least not for me) when it comes to buying a car. The old cars (e.g. those of the 1960s) were already perfectly functioning cars - coolness and too much electronics have nothing to do with perfect functioning, merely with luxury. So when it comes to having perfectly functioning cars, cars with too much electronics (GPS and so on) and coolness are not needed, and they break down more often than the older cars did and do (!). Cars with too much electronics show what the future will be all about - therefore my question again: Will machines completely replace all human beings? It is true that nanobots manipulate, because they can, and they already do it in experiments and probably also in other situations. You can find much information about this theme in several books, on the internet, and sometimes also on television. Evolution takes place if its three principles are fulfilled, regardless of the kind of being involved. Evolving beings do not need to be independent in order to be part (namely a dependent part) of evolution. Pets, for example, evolved and evolve with the help of humans. When humans send and machines receive the signals, then machines evolve with the help (e.g. the sending of signals) of humans; thus machines are under the control of humans (programming, sending signals). I know that machines can do it by themselves, but they do not do it by themselves yet. Not yet. Currently the development is still at the first step, because machines are still under the control of humans, although the second step is already possible. The humans have become their own exterminators - this seems to be the human goal. Cyborgs are such a fundamental change that I would say such a development is more evolutionarily than historically significant - which does not mean that it is not historically significant. Currently it seems more probable that the machines and some machinisable humans, but not the traditional humans, will be those super soldiers you are talking about. Please do not misunderstand me: I am not saying that that new tech is not possible, but most of those news items are nevertheless mere propaganda, and the machines will probably be superior to humans. Perhaps you will be able to buy a human Aryan for your fun at home and a machine Aryan for your wars in the whole world. They invent, invent, and invent more and more stuff (mental illness is merely one piece of that nonsense stuff) in order to control more and more humans - at last 99% of all humans (before all humans are replaced by machines). In the same way that technological development had the space race and the nuclear arms race, a race for nanorobots is occurring. There is plenty of ground for including nanorobots among the emerging technologies. Some of the reasons: large corporations such as General Electric, Hewlett-Packard, Synopsys, Northrop Grumman and Siemens have recently been working on the development and research of nanorobots; surgeons are getting involved and starting to propose ways to apply nanorobots to common medical procedures; universities and research institutes have been granted funds by government agencies exceeding $2 billion towards research developing nanodevices for medicine; bankers are also strategically investing with the intent to acquire beforehand rights and royalties on future nanorobot commercialization. Some aspects of nanorobot litigation and related issues linked to monopoly have already arisen. A large number of patents have been granted recently on nanorobots, filed mostly by patent agents, companies specialized solely in building patent portfolios, and lawyers.
After a long series of patents and eventually litigations - see for example the invention of radio or the War of Currents - emerging fields of technology tend to become a monopoly, which is normally dominated by large corporations. What do you think about that? Is DARPA really independent from other military research and development and reporting? Is RM:AO actually for DARPA? Is or was ARPANET (the precursor of the INTERNET) the net of ARPA, later known as DARPA? The Advanced Research Projects Agency Network (ARPANET) was an early packet-switching network and the first network to implement the protocol suite TCP/IP. Both technologies became the technical foundation of the Internet. ARPANET was initially funded by the Advanced Research Projects Agency (ARPA, later Defense Advanced Research Projects Agency, DARPA) of the United States Department of Defense. The first successful message on the ARPANET was sent by UCLA student programmer Charley Kline, at 10:30 pm on 29 October 1969, from Boelter Hall 3420. Kline transmitted from the university's SDS Sigma 7 host computer to the Stanford Research Institute's SDS 940 host computer. The message text was the word login; on an earlier attempt the l and the o were transmitted, but the system then crashed. Hence, the literal first message over the ARPANET was lo. About an hour later, after the programmers had repaired the code that caused the crash, the SDS Sigma 7 computer effected a full login. The first permanent ARPANET link was established on 21 November 1969, between the IMP at UCLA and the IMP at the Stanford Research Institute. By 5 December 1969, the entire four-node network was established. The ARPANET was officially shut down on 28 February 1990. The dasein / existence of the current machines is authentic. If the machines remain as they currently are and humans still live then, the machines will perhaps cause an authentic dasein (existence, life) of the humans by use of SAM. But if there is a critical difference, if there are others who do not fit SAM, then they will try to infiltrate SAM and perhaps … Regarding the replacement of all humans by machines I am speaking of an 80% probability. 80% is a high probability, but not 100%, so 20% remain. What could work towards eliminating the threat of human replacement? 1) Upheaval (but unfortunately it is not very probable). 2) Failure of the humans (it is very probable, but what follows depends on case 1). 3) Natural catastrophe (with reference to the near future it is also not very probable). Smiling machines?
Okay, here they are: Maybe that robot currently has no more skills than a child of 18 months; but note that this ontogenetic human development of 18 months corresponds to a phylogenetic human development of some millions of years. And if we say that the first steam engine was the birth of that robot, then it is now about 250 years old (because the first 250 years of a robot after its birth correspond to the first 18 months of a human after his/her birth). But the speed of its further development will probably become very much higher, because a purely cultural (technological) development can be faster than a purely natural development, or even than a mix of a cultural and a natural development (the human development is such a mix), so that robots will soon be capable of using language more efficiently than humans, for example; the question is whether robots will be able to do it with any emotion, with selfishness. It is an error to believe that robots should have sex (except insofar as they use sex for the control of humans). For robots themselves sex is a disadvantage. Possibly the machines will prevent the extinction of all human beings - or they will not prevent it but accelerate it. Will machines eat the crust of the planet? For someone who knows Mendel's laws and the resulting statistical distributions, the following hypothesis suggests itself: suppose the peak-IQ occupational group were homozygous for a Mendelian allele M1, thus genotype M1M1; the unskilled workers would be M2M2; the professional workers would be heterozygous, thus M1M2. People with a genotypic IQ over 123 should be homozygous M1M1, those with an IQ of 105-123 heterozygous M1M2, and those with an IQ under 105 homozygous M2M2. In reality, the thresholds IQ 105 and IQ 123 mark no sharp boundaries, but the average striplines of the overlapping zones of the phenotypes of the tested IQ. More vividly worded, there are three types of modern humans: (1) those very few (with an IQ >= 124) who invent machines, (2) those (with an IQ of 105-123) who repair machines, and (3) the great many (with an IQ <= 104) who serve machines. Now guess whether machines are capable of replacing all three types of humans. Since machines have become capable of serving and repairing machines, fewer humans are needed. And in the future machines will probably even be capable of inventing machines; then no human will be needed. But will the humans have to be intelligent then (provided that they are still there)? Maybe several relevant events (for example the collapse of the Keynesian/neo-Keynesian system [the fiat-money system], a disaster as a consequence of a global war, a natural disaster, the take-over by the androids) will happen at almost the same time. Soon it will be possible to carry the zygote, the embryo, and the fetus extrauterinely. Then we will have arrived again at the more insecure, more primitive mode of reproduction. We will no longer need anything intrauterine. The natural womb, as the realization of the natural idea of laying the egg inside the body, will have become obsolete. There will be no natural birth any longer, since the natural uterus will no longer be needed. Humans will not be needed any longer: their natural reproduction will be replaced by genetic engineering and artificial wombs, their economic production will be replaced by machines and perhaps by artificial humans (cyborgs). So humans will either become artificial humans or die out.
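The two-allele hypothesis above can be made concrete with a standard Hardy-Weinberg calculation. A minimal sketch follows; the allele frequency p is an invented illustration (the text gives no number), so the printed percentages are a worked example only, not a claim about real populations:

```python
# Hardy-Weinberg illustration of the two-allele hypothesis above.
# p is an assumed frequency of allele M1 -- an invented figure,
# since the text itself supplies none.

p = 0.3          # assumed frequency of M1
q = 1.0 - p      # frequency of M2

genotypes = {
    "M1M1 (IQ >= 124, 'invent machines')": p * p,
    "M1M2 (IQ 105-123, 'repair machines')": 2 * p * q,
    "M2M2 (IQ <= 104, 'serve machines')": q * q,
}

for label, freq in genotypes.items():
    print(f"{label}: {freq:.2%}")
# The frequencies always sum to 1: p^2 + 2pq + q^2 = (p + q)^2 = 1
```

With p = 0.3 this yields 9% M1M1, 42% M1M2, and 49% M2M2, which matches the text's qualitative picture of "very few" inventors and a "great many" servers, whatever one thinks of the underlying hypothesis.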
The logical consequence is either (a) becoming more and more artificial or (b) dying out. At the moment I do not see a solution that can avoid this logical consequence. Robot Reproduces on Its Own. Maybe the machines, genetic engineering, and cyborgisation will lead us to the capability of practically ending history in the future. At the moment there is more war than ever before. The machines have to become more hominid, thus android, and the humans have to become more mechanical, thus cyborg-like. Unfortunately, most humans are not interested in, and/or do not understand, the technological development and its consequences. Most humans only begin to take a little interest in technical things if they can use them for themselves. Man created machines in order to rationalize, and did not consider that this could mean being replaced not only economically but also biologically. Modern science is an Occidental science and has conquered the whole world. So even if the genocide is continued and finally completed, the techn(olog)ical results of Occidental science - especially the machines - will still be there, and then it will depend on the non-Occidentals or on the machines whether science is continued. Maybe science will die in the same manner as Faust in the second part of Goethe's tragedy Faust. Was gibt es denn? // WAGNER (leiser): Es wird ein Mensch gemacht. WAGNER: So muß der Mensch mit seinen großen Gaben // Doch künftig höhern, höhern Ursprung haben. HOMUNCULUS (in der Phiole, zu Wagner): Nun, Väterchen! wie steht's? es war kein Scherz. // Komm, drücke mich recht zärtlich an dein Herz! WAGNER (betrübt): Am Ende hängen wir doch ab // Von Kreaturen, die wir machten. - Johann Wolfgang von Goethe, Faust (II), pp. 114, 115 and 122. (What is happening? // WAGNER (more quietly): A man is being made. WAGNER: So man with his great gifts shall // in future have a higher, higher origin. HOMUNCULUS (in the phial, to Wagner): Well, Daddy! how are things? it was no joke. // Come, press me tenderly to your heart. WAGNER (saddened): In the end we do depend // on creatures that we made.) The shadow cabal and the policymakers are always interested in the option of not letting any majority know what really happens, so that the majority with its common understanding does not know what really happens. I think that the political interest in the capabilities of machines is high, but it is not politically correct to talk about that theme so much that the common understanding becomes capable of estimating the capabilities of machines in the right way. There is always an interest in the option of not letting any majority know what really happens. This may lead to the following answer: currently the capabilities of machines are both over- and underestimated - overestimated by some and underestimated by many people. And at last a thing or many things, an individual or many individuals as a species, are repaired (A), or replaced (B), or rotted (C); or, if time is considered too: repaired and then replaced (A and B), or repaired and then rotted (A and C), or replaced and then rotted (B and C), or repaired, replaced, and rotted (A, B, and C). It is possible that there will be no human culture anymore but only a machine culture, so that the humans will only have a chance if they coexist in the sense of an adaptation to the machines (and not the other way around).
In an economic sense, the Industrial Revolution means this: human beings are needed in order to replace them by machines, until the time when they will not be needed. So the Industrial Revolution seems to be a paradox when it comes to the general development of human beings. Economisation as rationalisation seems to contradict the evolution of human beings. This paradox or contradiction can only be resolved if we interpret our machines as something that can dominate us. What shall we do? It is unlikely that we will be able to get back to before the Industrial Revolution, unless we have a global dictatorship that forbids machines, or a natural catastrophe leads to the extinction of all intelligent machines and the survival of a few human beings. Shall we accept that machines will dominate us? Regarding the replacement of all humans by machines I am speaking of an 80% probability. 80% is a high probability, but not 100%, so 20% remain. I hope that the future will show us what Günther foresaw and thought. I also think that humans have no other chance in the long run than the said human-machine co-evolution. But if you compare humans with animals, you will find that the human is the most terrible predator on this planet, much more terrible than all other predators together; and if you compare some humans with other humans, you will always find that some are much more terrible than others. Think of the unconditional will to power, the greed, the fate of being caught between wanting to be like an almighty God and having to be like an almost powerless animal - which means the incapability of being like God and of being like an animal. Humans are no gods and no animals; they are something between them, and that is their fate. Shouldn't we just destroy all machines? It's hardly possible, isn't it? And if it is possible, it leads to war, doesn't it? But war is something that we get in any case, don't we? The definition of a responsible ruler? Somebody who really decides and acts responsibly as a ruler, according to Kant's categorical imperative. A responsible ruler is never corrupt, never greedy, never a god-wannabe. But most modern politicians are just what responsible rulers can never be, and the other few modern politicians have no chance of becoming rulers. Most modern politicians have, as real rulers, never shown real responsibility, and the other few modern politicians have never become real rulers. What should and would a responsible ruler do if machines were replacing humans? A responsible ruler would destroy all bad robot power sources, so that the machines would stop running and stop replacing humans. But would this ruler still be a responsible ruler if many people said "we want to be replaced by machines"? Maybe, or even likely, because many people do not know what is better for them and what is not. So a responsible ruler must also be a wise one with foresight. He must know what is best for the people now and in the future, despite the fact that many of them do not know it. But this could and probably would lead to the responsible ruler getting fired by the people, at least in democracies. So this leads to the unavoidable conclusion that democracy may not be good for many people. Probably democracy (at least as we know it today) is not really, or not merely, meant for the demos, but for those irresponsible rulers who rule because of their money, because the money has made them powerful.
In any case, these circumstances are typical of modernity. Howsoever, a responsible ruler would - because he should - destroy all bad robot power sources, so that the machines would stop running and stop replacing humans. Why? It is the only chance to save humans in that case; otherwise the ruler would not be a responsible one. One of many, many examples: Food Manufacturing: McKinsey Report: Where Will Automation Replace Humans in Food Manufacturing. In the end, the percentage of replaced humans will be 100% everywhere, if this development is not stopped. In the past it was said that machines would not replace the humans who serve, repair and invent machines. Now most of these humans have already been replaced by machines. Machines need resources too. Similarly to living beings, they will tend to eradicate all competitors. If there is no sudden incident that changes this trend, the machines will replace the humans. My philosophy has fundamentals that can be found empirically in nature, a.k.a. the universe, and theoretically in cosmology and geology: (1) actualism, (2) exceptionalism, (3) cataclysm, (4) cyclicism. We can hope that exceptionalism (=> 2) and cataclysm (=> 3) will help us somehow, because we do not want to be completely replaced by machines. Humans and machines are in a state of competition, and many of the humans help the machines to win this competition, in a similar way as the white humans help all other humans to eliminate the white humans, although or because the white humans have brought progress to all humans, thus also to the non-white humans. And now white humans, as the inventors of machines, are not needed anymore, since other humans and even machines can already invent machines. This situation seems paradoxical. There is the same seeming paradox between two groups of humans too: those who give benefit and help, and those who receive it. The disappearance of those who give benefit and help is affirmed by those who receive it - and this in spite of the fact that the latter benefit from and are helped by the former. This seeming paradox can be resolved: those who give benefit and help are too expensive and no longer needed, while those who receive benefit and help are still cheaper and still needed (this too will likely change in the future). There is a similar seeming paradox between machines and certain (and later likely all) humans. So not only can, for example, feminists and Islamists, or white white-haters and non-white white-haters, have the same enemy; intelligent machines and stupid people can too. They all have only one enemy: the white men. Why should machines not do what living beings do? Machines are products of humans. Being like purely rational humans, machines are more rational and thus more efficient than humans. Humans are not purely rational but only relatively rational, since they are emotional too. So the sentence "humans invented machines" can be interpreted as "humans invented purely rational humans who lack a biological system". These purely rational humans who lack a biological system are the machines. If they get a biological system, they are merely androids, not humans. And if humans become more like machines, they are merely cyborgs, not machines. Maybe humans and machines will become similar to each other in the future, but they will never become the same.
The only chance for humans' survival in the future will be to become more and more similar to the machines, because otherwise humans will likely disappear. Humans tend to destroy their environment, tend to destroy nature, tend to eradicate their competitors. Machines, as the product of humans, tend to do the same. The difference is that machines are capable of doing this much more effectively than humans. Whether they will do it is a different issue. What I have said is that there is this tendency to eradicate all competitors. [Here the original had a table of phases/stages against average economic status (living standard / wealth / welfare); only its header survives.] When will the third phase end? What I know for sure is that the third phase will end with the end of the average high economic status. If the average machine rate remains high and the average fertility rate remains low, but the average economic status shrinks, then it will become clear that machines are, in the long run, a bad deal. The shrunken average economic status will perhaps (thus: not certainly) cause a shrinking average machine rate. The answer to the question whether the average machine rate will then shrink or not will probably depend on the development of the machines: if they are not sufficiently developed by then, the average machine rate will certainly shrink. But the crux is that the humans will try to avoid a shrinking average economic status, although, if they do, this will lead to an even higher average machine rate and at last to the extinction of all humans. Nevertheless, there are many reasons to believe that the average economic status will shrink and cause a shrinking average machine rate. As I said: I know that the average economic status will shrink, but I do not know whether this will really lead to a shrinking average machine rate or not, since the development status of the machines at that future time is currently quite unknown. Rodney Brooks wrote The Seven Deadly Sins of Predicting the Future of AI: "Today, there is a story in Market Watch that robots will take half of today's jobs in 10 to 20 years. It even has a graphic to prove the numbers. ... For instance, it appears to say that we will go from 1 million grounds and maintenance workers in the US to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? ZERO. How many realistic demonstrations have there been of robots working in this arena? ZERO. Similar stories apply to all the other job categories in this diagram where it is suggested that there will be massive disruptions of 90%, and even as much as 97%, in jobs that currently require physical presence at some particular job site." The ecological catastrophe is merely one of several options for the end of humans and many other living creatures. Other options are, for instance, military catastrophes, catastrophes of technical/scientific research, and natural catastrophes. But all these catastrophes would not mean the end of the world, nor the end of this planet; they would just mean huge catastrophes for humans and many other living creatures. Johann Wolfgang von Goethe wrote: Am Ende hängen wir doch ab, von Kreaturen, die wir machten. My translation: In the end we nevertheless depend on the creatures that we made. The inevitable way that modernity has been following since the so-called Industrial Revolution has reached a point where the already much accelerated development is accelerating even more.
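Returning to the machine-rate / fertility / economic-status argument above: the verbal loop (economy shrinks, humans respond by adding machines, low fertility erodes the human share) can be caricatured numerically. A toy sketch, with every coefficient invented for illustration and no claim to empirical validity:

```python
# Toy feedback model of the verbal argument above. All coefficients
# are invented illustrations, not data from the text.

REPLACEMENT = 2.1          # assumed replacement-level fertility
fertility = 1.25           # fertility figure used earlier in the text

machine_rate = 0.80        # assumed share of work done by machines
economy = 1.00             # normalised average economic status
humans = 1.00              # normalised human population

for decade in range(5):
    print(f"decade {decade}: machines {machine_rate:.2f}, "
          f"economy {economy:.2f}, humans {humans:.2f}")
    economy *= 0.95                                  # assumed slow shrink
    machine_rate = min(1.0, machine_rate + 0.05 * (1 - economy))
    humans *= (fertility / REPLACEMENT) ** 0.33      # ~1 generation per 3 decades
```

Under these invented parameters the machine rate ratchets upward precisely because the economy shrinks, which is the self-reinforcing dynamic the text describes; whether reality behaves this way is, of course, exactly what the text says is unknown.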
Can we slow down the modern velocity? If the answer is no, what does this mean for the future? It likely does not mean that the people will sit there and be sad; no, they will likely be happy according to their situation in general and their spiritually limited capabilities in particular. One example could be a cyborg with many artificially optimized muscles and joints but a consciousness that does not work better, or do more, than the consciousness of a dog. This example was already anticipated in the last decades of the 18th and the first decades of the 19th century, when Johann Wolfgang von Goethe studied the real Faust, designed his story of Faust, and wrote it down. And his Faust of the second part died because he had reached his goal. Another example could be the humans at the end of history as Georg Wilhelm Friedrich Hegel described it in his main work Phänomenologie des Geistes (Phenomenology of Spirit), published in 1807. A further example could be the last men who became famous in 1883: »Wir haben das Glück erfunden«, sagen die letzten Menschen und blinzeln. - Friedrich Wilhelm Nietzsche. (»We have invented happiness«, say the last men, and blink.) However, these humans or post-humans will not be unhappy. They will live without history and without work. We - the current humans - think that this will be very sad, and we are absolutely right about that. But they will not think so. And they will not think much, but believe much, on a very low level. They will be almost absolutely dependent and will very naively believe the opposite. To me, this development is the most probable one for the future (although its presignals have been becoming apparent since the beginning of our modernity). One of the presignals of this future situation is the increasing replacement of human nature by artificial human technology, thus machines. The humans will have merely two options or chances to stop the continuation of this development or, at least, to slow down the modern velocity. One option or chance is the avoidance of the complete replacement of humans by machines, because this complete replacement will lead to the loss of human control over the machines, so that the machines will control or even kill the humans. The other option or chance is a huge catastrophe in the very near future that leads to a new beginning, provided that there are enough survivors of that catastrophe. The difference between these two scenarios and the most probable scenario is that in the two scenarios the humans will not lose their relative freedom and the extent and independence of their consciousness, whereas in the most probable scenario the exact opposite holds. What will be the worst-case scenario then?
0.6836
FineWeb
```json [ "Machine Replacement of Humans", "Human Evolution and Technology", "Rise of Artificial Intelligence" ] ```
Varnish Potential Analysis

Varnish is an insoluble film that coats the internal components of machinery and can devastate production. Varnish deposits are sub-micron in nature and cannot accurately be measured using standard oil analysis tests. In 2006, TestOil developed a solution to measure and report varnish potential for our customers. Today, our Varnish Potential Analysis is the industry-leading solution for varnish risk applications.

What is Lube Oil Varnish?

Varnish is an insoluble film composed primarily of organic residue that coats the internal components of machines' lubrication systems. Often comprised of degradation products associated with oxidation or thermal degradation, these soft contaminants are less than 1 micrometer in size.

Causes of Lube Oil Varnish

- Oxidation: a chemical reaction of an oil molecule with oxygen that produces carboxylic acids.
- Thermal degradation: a chemical change in the base oil molecules under high temperatures. Specific mechanisms of thermal degradation include micro-dieseling, electrostatic spark discharge and hot spots.

Benefits of Varnish Potential Analysis Testing:

- Alerts customers to developing lube oil varnish problems
- Measures an oil's propensity to drop out varnish deposits
- Promotes proactive maintenance and contamination control
- Gives turbine users confidence at start-up

Damage Done By Lube Oil Varnish

- Reduced clearance zones affecting lubrication regimes. Often this means a transition from hydrodynamic lubrication to boundary lubrication, which increases wear rates of pumps, bearings and gears.
- Increased friction in components. This friction results in higher energy requirements and can cause valves to stick or seize.
- Higher operating temperatures. Varnish acts as an insulator, lowering the effectiveness of heat exchangers and lessening the ability of the lubricant to cool. The impact of friction and reduced clearance zones also contributes to higher temperatures.
- Restriction or impedance of oil flow. Varnish can cause valves, strainers and filters to clog.
- Increased wear rates. Varnish captures hard contaminants, creating an abrasive surface that accelerates wear. Varnished surfaces often look like sandpaper when examined under a microscope.

How To Outsmart Varnish and Avoid Costly Downtime

In this paper you will learn about the Varnish Potential Analysis test and why it is a necessary tool for any lube system that is prone to varnish. By monitoring the contaminants responsible for varnish, reliability managers and maintenance planners can implement appropriate corrective actions before costly damage occurs and unnecessary downtime is experienced.
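As a rough illustration of how such monitoring can drive corrective action, here is a minimal Python sketch that trends varnish-potential results against alert thresholds. The threshold bands and readings are invented for illustration; real limits depend on the lab's test method and the machine being monitored.

```python
# Illustrative sketch of trending varnish-potential results against alert
# thresholds. The threshold bands and test values here are hypothetical;
# real limits depend on the test method and the machine.

def varnish_alert(value, caution=15.0, critical=30.0):
    """Classify a varnish-potential result against example thresholds."""
    if value >= critical:
        return "critical: schedule corrective action (e.g., depth filtration)"
    if value >= caution:
        return "caution: increase sampling frequency and watch the trend"
    return "normal: continue routine monitoring"

# Quarterly results for one turbine's lube system (hypothetical data)
history = [8.2, 11.5, 16.9, 24.3]
for quarter, result in enumerate(history, start=1):
    print(f"Q{quarter}: varnish potential {result:5.1f} -> {varnish_alert(result)}")
```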
0.9621
FineWeb
["Varnish Potential Analysis", "Causes of Lube Oil Varnish", "Benefits of Varnish Potential Analysis Testing"]
I have an Asus N16 running 1.28 ND beta. I have various commands and iptables rules in the init script and the WAN-up script. They establish my 5 static IPs and routing rules, including sending traffic from a particular subnet out as a particular outbound IP. If I reboot the router, it all comes up fine. But, for example, if I change the color of the admin interface, it drops a bunch of my rules. It's as if it's restarting something that clears out iptables and DOESN'T rerun script_init.sh or script_wanup.sh. Any ideas?
0.8283
FineWeb
``` { "topics": [ "Router Configuration", "iptables Rules", "Script Initialization" ] } ```
Addison's disease in dogs was first officially recognized in 1953 (although it was seen in humans around the middle of the 19th century and named after Thomas Addison). This condition, which is also known as hypoadrenocorticism, occurs when the affected dog's adrenal glands (adrenal cortex) stop producing enough 'steroid' hormones, including Cortisone (Cortisol) and Aldosterone. Both of these hormones are vital: Aldosterone helps to regulate the amount of potassium, water and sodium in the dog's bloodstream, and Cortisone helps to reduce inflammation, deal with shock and stress, and act as a natural painkiller. Cortisone (Cortisol) is also necessary for the production of glucose. What is the difference between Cushing's disease and Addison's disease? To put this simply, Cushing's disease is caused by an overabundance of adrenal hormones being manufactured, and Addison's disease is caused by a deficiency of these hormones. Symptoms Of Addison's Disease in Dogs As a consequence of the dog's body not producing enough Aldosterone and Cortisone, the electrolytes in the dog's bloodstream, including calcium, potassium, chloride and sodium, start to get out of proportion in relation to each other, causing the dog to become very unwell. It is also important to mention that Addison's disease in dogs is a very serious condition which can be fatal; it can also be misdiagnosed at times due to the early clinical signs being similar to digestive problems, infectious diseases and kidney disease. Addison's disease tends to be more common in young to middle-aged female dogs, with the average age of onset around four. Some breeds are more prone to the condition, including Rottweilers, Standard Poodles, Great Danes, Bearded Collies, Chinese Crested Dogs and Portuguese Water Dogs. Dogs that experience Cushing's disease or Addison's may develop an increased thirst, so this needs to be taken into consideration when you fill up the water bowl. To give you some idea about how much water a dog needs, experts estimate that dogs need about 10 ml of water per pound of body weight (this is just to maintain them on a daily basis); on that estimate, for example, a 50 lb dog needs roughly 500 ml, or about half a litre, per day as a baseline. However, if your dog is more active or has health problems, then the amount of water provided for your dog will need to be increased. Some of the symptoms of the condition can include… - Loss of appetite and associated weight loss. - Weakness and lethargy. - Muscle weakness. - Diarrhea and loose stools. - Excessive drinking and urination. Sometimes owners are not able to notice a specific pattern in relation to the symptoms, as the symptoms can be sporadic in nature. This can have an impact on the treatment of the condition, as diagnosis can occur later than would be recommended. Is the condition more common in certain breeds…? The simple answer to this is yes. It is also more common in females than males. Some of the breeds that seem to be more predisposed to the condition include Westies, Rottweilers, Great Danes, St Bernards and Springer Spaniels (this is not an exhaustive list). The average age of onset for the disease is around five years of age, and it is estimated that 70% of dogs that are affected are female. Causes Of Addison's Disease In Dogs There are numerous causes of this condition, including… - Secondary – The condition can be caused by illness, a tumor or an infection. - Autoimmune – This is another cause of Addison's disease in dogs, where antibodies start to attack the dog's adrenal cortex.
- Genes – As mentioned above, there are some breeds that seem to be genetically predisposed to developing the disease. - Lysodren (also known as Mitotane) – If a dog being treated for Cushing's disease has been over-treated with Lysodren, this can lead to irreversible adrenal damage, leading to Addison's disease. - Iatrogenic – This cause of Addison's disease in dogs occurs as a result of medical treatment (basically, the condition is caused by the vet treating the dog for too long with corticosteroids). Due to the prolonged treatment with corticosteroids, the adrenal gland will begin to atrophy (waste away or shrink). What is an Addisonian crisis? This condition occurs when a specific medication or drug is discontinued too abruptly, leading to a temporary lack of hydrocortisone; this can then lead to circulatory collapse and shock. Diagnosis and treatment for Addison's Disease in dogs This condition is normally diagnosed through an ACTH stimulation test (the same test that is also used to help diagnose Cushing's disease). When a dog experiencing Cushing's disease is injected with ACTH, the cortisol in the affected dog's body starts to increase. However, when a dog suspected of suffering from Addison's disease is injected with ACTH, cortisol is not stimulated and does not increase. The simple reason for this is that the dog's adrenal glands are unable to produce enough cortisol to respond to the ACTH stimulation test. Blood tests can also be undertaken to determine whether the dog is suffering from the condition. For example, some dogs that are experiencing the condition may appear anemic due to a lack of cortisol, which suppresses the dog's bone marrow. Another common sign seen in blood tests is for an affected dog to have low blood sugar levels, or hypoglycemia (this is because cortisol is very important in the manufacture of glucose). Blood tests may also show electrolyte imbalances – for example, potassium levels may appear higher than normal and sodium levels lower. The treatment for the condition will involve the vet replacing the hormones that the dog's body is unable to produce. If the dog has developed shock, then the vet may treat the dog with intravenous fluids and corticosteroids. To help replace the cortisone the vet may prescribe Prednisone, and to replace the mineralocorticoids the vet may prescribe Fludrocortisone acetate. Dogs suffering from this condition may benefit from natural herbs (alongside any conventional treatment). Some of the herbs thought to be beneficial in easing some of the symptoms of Addison's disease include ginger, garlic, licorice, slippery elm, valerian, arsenicum (30C), arctium lappa and astragalus membranaceus. However, you should never give your dog any supplements without first speaking with a vet.
0.516
FineWeb
``` { "topics": [ "Addison's Disease in Dogs", "Symptoms and Diagnosis", "Treatment and Management" ] } ```
When car makers first started exploring drive-by-wire systems, the identified shortcomings of available vehicle protocols led to a search for a high-speed, fault-tolerant protocol. Initially, consideration focused on TTP/C (Time-Triggered Protocol), which was already used in aerospace, and, more recently, on FlexRay and TTCAN. All of these protocols rely on a time-triggered approach using Time Division Multiple Access (TDMA). The jury is still out on whether any will see widespread usage in automotive. A new alternative, called FlexCAN, may provide an easy transition to network communications for in-vehicle safety-critical systems. For an ideal in-vehicle protocol, four items are required: deterministic behavior, speed, dependability and flexibility. TTP/C is the least flexible. This lack of flexibility led to the development of FlexRay. Both TTP/C and FlexRay have a similar speed capability, which is in the neighborhood of 12 Mbits/sec. In contrast, the predominant vehicle protocol, CAN, has a maximum data rate of 1 Mbits/sec, but it has been extremely dependable and flexible. Efforts to add a TDMA structure to CAN (as in the TTCAN proposal) have compromised one of its best characteristics: error recovery. As a result, TTCAN has attracted few potential users. In contrast, FlexCAN keeps the best features of CAN and adds a time-triggered aspect that does not use TDMA. In addition, FlexCAN adds dependability features to CAN, making it more suitable for safety-critical systems. By adding a reference time structure, many aspects of CAN are simplified, especially those pertaining to safety. The structure does not rely on clock synchronization because the granularity of the time-triggered mechanism does not require it. In FlexCAN, the information goes on two channels for replication, but could go on several channels simultaneously. Since most of today's microcontrollers (MCUs) have two CAN channels, FlexCAN can be implemented on available hardware. Similar to the way that TCP/IP resides on top of Ethernet to provide reliable data transfer in the Internet architecture, FlexCAN is an additional layer on top of the standard CAN protocol that provides even more dependable services. From the protocol's perspective, this layer could be implemented in firmware and/or software. The physical and datalink layers (OSI Layers 1-2) are CAN, so the protocol's maximum speed is still 1 Mbits/sec. This will not handle the higher-speed requirements that have driven the high-speed aspect of time-triggered protocols. However, for safety-critical sub-networks, it could extend the useful life of the rugged CAN bus for many years to come. FlexCAN adds layers to CAN, building on a proven protocol, the way the Internet relies on Ethernet for layers one and two. CONTACT: Juan Pimentel, professor, Kettering University. Tel: 810-394-7144; e-mail: [email protected]
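To make the dual-channel replication idea concrete, here is a minimal sketch using the python-can library to send the same frame on two CAN channels. The channel names and message ID are placeholders, it assumes SocketCAN interfaces are already configured, and a real FlexCAN layer would add time-triggered scheduling and error handling on top of this.

```python
# Minimal sketch: FlexCAN-style replication of one frame over two CAN
# channels. Channel names and the message ID are placeholder values.
import can

def open_buses():
    # Most automotive MCUs/adapters expose two CAN channels; here we
    # assume two SocketCAN interfaces are already configured.
    return [
        can.interface.Bus(channel="can0", bustype="socketcan"),
        can.interface.Bus(channel="can1", bustype="socketcan"),
    ]

def send_replicated(buses, arbitration_id, payload):
    """Send the same payload on every channel for redundancy."""
    msg = can.Message(arbitration_id=arbitration_id,
                      data=payload,
                      is_extended_id=False)
    for bus in buses:
        bus.send(msg)  # a receiver accepts the first valid copy it sees

if __name__ == "__main__":
    buses = open_buses()
    send_replicated(buses, arbitration_id=0x123, payload=[0x01, 0x02, 0x03])
```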
0.6786
FineWeb
``` { "topics": [ "Vehicle Protocols", "FlexCAN", "In-Vehicle Networking" ] } ```
Modelling the reality of business

In 2003, a software engineer named Eric Evans, who had spent many years guiding large businesses through the process of building software, published a groundbreaking software design book in which he introduced an approach he called domain-driven design. The idea was the result of thinking about what actually led to success in his business projects: fruitful interactions with the client, analysis of the business problems being solved, building teams which thoroughly understood both the business and the software, and the resulting software architecture. The focus of all these components is the business's 'domain' – in other words, its sphere of knowledge or activity. Today, domain-driven design is used worldwide for complex projects where software is modelling a real-world system or process. 'It's completely different to the classic approach to software architecture, where we design with the database at the core and let the requirements of the database drive the way you design the rest of the system,' says Mark Tolley, a senior consultant in OCC's Innovation Delivery Team. 'Instead, we come up with a tool that helps the business drive its function: the focus shifts onto the business logic. We simplify the system by first modelling a small part of the business and then evolve the model by talking extensively to the client about what actually happens and what they need.' In essence, domain-driven design allows the development of complex systems whose focus is mapping activities, tasks, events and data within a problem domain into the technology artefacts of a solution domain. A vital aspect is collaboration between the domain experts – the people who know the business – and the software architects in order to create a "ubiquitous language", so that everyone involved is discussing a shared knowledge base with shared concepts. 'Because everyone agrees exactly what the terms being used mean, we can have a more productive conversation and evolve software that reflects our improved understanding until it exactly matches the client's requirements,' says Tolley. With business needs at the heart of a hierarchical software structure, the system can both present users with highly intuitive interface screens and respond flexibly to changing client requirements. Maintainability is also excellent: new technology can simply be bolted onto the 'front end' without disrupting the business logic. OCC has recently used domain-driven design to create TRALC3, the latest generation of its modelling tool for thermal stress in National Grid transformers. 'The TRALC tools were designed some years ago to allow system operators to model scenarios for individual transformers being put under heightened loads. Nowadays, transformers are increasingly sophisticated, with more control options and developments like self-triggering cooling mechanisms,' explains Tolley. 'By talking to the operators about exactly what information they now require and using domain-driven design, we've been able to develop a more flexible application that can answer their more nuanced questions in a very accessible way.' He adds: 'That's the beauty of domain-driven design: we can make the things that matter to our clients easier.'
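As a toy illustration of that shift in focus, here is a hypothetical Python sketch of a domain model whose names come from the business's ubiquitous language rather than from database tables. It is loosely inspired by the transformer example above; all class names, fields and numbers are invented, and this shows the design style only, not OCC's code.

```python
# Toy domain model: classes and methods use the business's own vocabulary
# (the "ubiquitous language") rather than database or persistence terms.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class LoadScenario:          # a value object: defined entirely by its values
    load_mva: float
    ambient_temp_c: float

@dataclass
class Transformer:           # an entity with business behaviour attached
    name: str
    rated_mva: float
    scenarios: list = field(default_factory=list)

    def model_scenario(self, scenario: LoadScenario) -> bool:
        """Record a what-if loading scenario and flag an overload."""
        self.scenarios.append(scenario)
        return scenario.load_mva > self.rated_mva

grid_tx = Transformer(name="Substation A", rated_mva=240.0)
overloaded = grid_tx.model_scenario(LoadScenario(load_mva=260.0, ambient_temp_c=30.0))
print("Overload!" if overloaded else "Within rating")
```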
0.9655
FineWeb
``` { "topics": [ "Domain-Driven Design", "Software Architecture", "Business Modelling" ] } ```
Seasonal affective disorder (SAD), or what is now called major depressive disorder with seasonal pattern (MADS), is a type of depressive episode that individuals describe experiencing during certain times of the year (the majority of cases occur during the fall or winter). Individuals who report difficulties with SAD/MADS often describe feeling excessively tired, depressed, and irritable. Some other symptoms that individuals experience include difficulty concentrating, trouble sleeping, and muscle aches/pains. According to Fiona Marschall and Peter Cheevers' text, Positive Options for Seasonal Affective Disorder, light therapy combined with counselling is considered to be the most effective method for treating SAD/MADS symptoms. Marschall and Cheevers suggest that SAD/MADS symptoms are thought to be caused by a biochemical imbalance in the brain (a lack of serotonin). Research suggests that emotional stress during the winter months may trigger the depression associated with SAD/MADS. If you'd like to find out more information about how our Vaughan or Maple counselling centre can help, please feel free to give us a call at 416-999-343 or fill out the appointment request form. Seasonal Affective Disorder/Major Depressive Disorder With Seasonal Pattern Our mission is to provide innovative, effective, and empirically supported counselling and educational services to individuals, couples, and families who experience mental/physical health issues and/or value personal growth and development. "I'm so grateful to have had the opportunity to finally come to counselling. My life has changed for the better. I definitely recommend their services."
0.5604
FineWeb
```json [ "Seasonal Affective Disorder (SAD)", "Symptoms of SAD/MADS", "Treatment of SAD/MADS" ] ```
Submitted to: American Society of Plant Biologists Annual Meeting
Publication Type: Abstract Only
Publication Acceptance Date: 4/20/2009
Publication Date: N/A

Technical Abstract: Plant seeds accumulate phosphorus in the form of myo-inositol-1,2,3,4,5,6-hexakisphosphate, commonly referred to as phytic acid. Phytic acid is complexed with cationic mineral species in the form of phytate, which is not well digested or absorbed by monogastric species such as humans, poultry and swine. As a result, soybean has an effective deficiency of phosphorus and other minerals, despite high levels of these components in the seed. Excreted phytate can also contribute to phosphorus contamination of groundwater and eutrophication of freshwater lakes and streams. In maize, a recessive mutation in a conserved region within the low phytic acid 1 (lpa1) gene is responsible for the low phytic acid phenotype. We have identified recessive mutations in two soybean homologues of the maize lpa1 gene in CX1834, a soybean line with a low phytic acid phenotype derived from EMS mutagenesis of a breeding line with normal phytate levels. In three populations analyzed, we identified complete association between homozygosity for mutant alleles of the two lpa1 homologues and the low phytic acid phenotype. Molecular marker assays were designed that can be used to directly select for the mutant alleles that control the phenotype. The identification of the molecular basis for the low phytic acid phenotype will dramatically ease the introgression of the low phytic acid trait into elite soybean cultivars. The ultimate goal of such introgression is soybean-derived food and feed which require less nutrient supplementation, are more nutritious, and are more environmentally friendly.
0.5115
FineWeb
["Phytic Acid", "Soybean Genetics", "Plant Nutrition"]
What is push marketing? Why is it relevant for marketers to understand it? Brent Hieggelke and Trevor Westerlund recently gave a MarketingProfs class, sponsored by Urban Airship, to explain push marketing and its power. And MarketingProfs was there to take visual notes of the seminar.
0.8651
FineWeb
["Push Marketing", "Marketing Strategies", "MarketingProfs Seminar"]
History of Chemistry in Cork

Chemistry in Cork has long and deep roots which date back as far as Robert Boyle (1626-1691). Boyle, who was a son of the first Earl of Cork and was born in Lismore near Cork, was one of the great early scientists. His experimental findings brought to an end the era of alchemy and led to the foundation of modern chemistry.

The Royal Cork Institution and Queen's College Cork

Boyle's influence in Cork persisted long after. A scientific institute that had been founded earlier was incorporated by royal charter in 1807 to become the Royal Cork Institution (1802-1849). It was subsidised by a descendant of Boyle. Four professors were employed, including a Professor of Chemistry. One of the holders of this professorship was Edmund Davy, a cousin of the famed Sir Humphry Davy. While in Cork, he developed his research on the chemistry of platinum, water hardness, and the chemistry of foodstuffs such as milk and flour. When financial difficulties led to the demise of the Cork Institution, the strong reaction among the business community in Cork led to the promotion of a scheme for provincial colleges with the power to confer degrees. The past success of the Cork Institution paved the way, and in 1845 the Colleges Bill established the three Queen's Colleges at Cork, Belfast, and Galway.

The 20th century

Although the first President of Queen's College Cork, Sir Robert Kane FRS, was a noted scholar in both medicine and chemistry, the activities of Chemistry were initially confined to training medical and engineering students. With the introduction of the Universities Bill, a Faculty of Science was instituted and a new building for Chemistry and Physics was completed in 1910. Between 1910 and 1960, Chemistry had a succession of professors, but it was not until the appointment of Francis Leslie Scott that major changes occurred. It was mainly his success and drive as an academic that led to the completion of the current Science Building in 1970. Chemistry then expanded rapidly, with the appointment of professorships in organic, physical and inorganic chemistry, and more recently in analytical chemistry and pharmaceutical chemistry. Its graduates were of such calibre that many international pharmaceutical companies opted to manufacture in the greater Cork area on a large scale, and numerous Cork graduates were appointed to academic posts throughout Ireland. The last decade has seen a robust expansion of the department's facilities and personnel, in line with the booming Irish economy and the need for a more highly trained workforce. The need for more research space has been met by the construction of the new Cavanagh Pharmacy building and the Environmental Research Institute, both of which have outstanding laboratory facilities. New personnel have added strength to the school's research programmes: in addition to our long-standing excellence in synthetic work and close links with pharmaceutical chemistry, the department has built up new areas of strength in materials chemistry and nanoscience, separation science and chemical sensors, and atmospheric chemistry. Chemistry continues to be a leading discipline at University College Cork and is recognised internationally for the calibre of its graduates and research.

Clockwise from top left: Tyndall National Institute, Cavanagh Pharmacy Building, Environmental Research Institute, Kane Science Building.
0.5066
FineWeb
``` { "topics": [ "History of Chemistry in Cork", "The Royal Cork Institution and Queen's College Cork", "The 20th century" ] } ```
Caillois thought that the adaptive mimicry of certain types of mantis "illustrated, sometimes hauntingly, the human desire to recover its original insensate condition, a desire comparable to the pantheistic idea of becoming one with nature, which is itself the common literary and philosophical translation of returning to prenatal unconscious." He would develop this idea in a later piece for Minotaure, "Mimicry and Legendary Psychaesthenia" (1937). Here Caillois provides a brief overview of animal mimicry: a caterpillar that can imitate the head of a snake, insects with wings adorned with "eyes" which supposedly startle predators, seaweed-like fish, mantises disguised as flowers. He finds accepted theories of mimicry wanting, and concludes that it is inefficient either as an offensive or defensive weapon, citing, for instance, the fact that most predators do not hunt by sight, as well as the case of the Phyllidae insects, which mimic leaves so convincingly they browse on each other. Instead, he turns to an unlikely source for an explanation: mimetic magic, and the principle of correspondence: The law of magic, Things that have once touched each other stay united, corresponds to the principle of association by contiguity, just as the principle of association by similarity precisely corresponds to the attractio similium of magic: Like produces like. Hence, identical principles govern, on the one hand, the subjective association of ideas and, on the other, the objective association of phenomena; that is, on the one hand, the chance or supposedly chance links between ideas and, on the other, the causal links between phenomena. Mimicry must be, Caillois argues, "a disorder of spatial perception" or lure of space, which he terms "legendary psychaesthenia." He compares it to the dissociation between mind and body in schizophrenics, for whom "space seems to constitute a will to devour." (Caillois would later write that he found these ideas far-fetched.) In natural mimicry Caillois saw equivalents to human fashion, carnival theatre and ceremony, but denied that this projection was a case of anthropomorphism; rather it was exactly the opposite. It should be realised that the point is not to explain certain puzzling facts observed in nature in terms of man. On the contrary, it is to explain man (governed by the laws of this same nature, to which he belongs in almost every respect) in terms of the more general behavioural forms found widespread in nature throughout most species. This attitude prompts one to greatly vary the principles of biological explanation and to assert that nature (which is no miser) pursues pleasure, luxury, exuberance, and vertigo just as much as survival. In her introduction to The Writing of Stones, Marguerite Yourcenar describes Caillois' theories as "an inverted anthropomorphism in which man, instead of attributing his own emotions, sometimes condescendingly, to all other living beings, shares humbly, yet perhaps also with pride, in everything contained or innate in all three realms, animal, vegetable, and mineral."
0.8307
FineWeb
```json [ "Mimicry in Nature", "Human Desire and Mimicry", "Theory of Legendary Psychaesthenia" ] ```
All subscribers get a free copy of the novel, Mencken and the Monsters. In the newsletter you will get: - Links to everything Jeff has published that month, - Occasional free things Jeff only gives to newsletter subscribers, - Links to books, TV shows, and music Jeff has been enjoying, - A link to a Spotify playlist of music Jeff has been writing with that month, - Updates on projects Jeff is working on, - And stories about Jeff’s family and their adventures.
0.9754
FineWeb
```json [ "Newsletter Content", "Free Gifts and Offers", "Author Updates" ] ```
In an effort to learn different income-producing options strategies I have been reviewing various strategies and deciding whether they are appropriate for me to actually trade. Last week I reviewed the very popular Strangle Option Strategy and prematurely stated that it isn't something I would be interested in selling any time soon. It took another blogger, JC from Passive Income Pursuit, to force me to look at the strategy from a different angle. His single comment gave rise to me selling strangles!

How Does Selling a Strangle Option Work?

While my post defining a strangle option went into much more detail, a strangle is actually a pretty easy strategy to understand if you have a basic knowledge of options. When a person writes a strangle they are:
- Selling an out-of-the-money put
- Selling an out-of-the-money call
- Collecting a premium for taking the position that the stock won't go below a certain price (in the above example that number is $35/share); and
- Collecting a premium for taking the position that the stock won't go above a certain price (in the above example that number is $45/share).

So, as long as the stock stays within the range above ($35 – $45), the strangle seller keeps both premiums (a sketch of the payoff at expiration appears at the end of this post).

Why Was I Against Selling Strangles At First?

As soon as I started researching the strangle, I thought to myself that the strategy was not for me, for one reason that I explicitly shared: I am not sure why a retail investor would sell a short strangle. The risk seems absolutely ludicrous. I am sure there is an application for professionals, but the idea of guessing whether a company is going to stay lower than a particular strike price (selling to open a call) seems like a very, very risky proposition. Put a different way, I do not understand selling uncovered calls. If a stock goes above the strike price, I am basically paying out of pocket the difference between the actual stock price and the strike price. I look at this differently than being assigned a put, since if I were to "lose" that position at least I am holding 100 shares of a company.

How Was I Convinced to Write Strangles?

A few hours after that post went live JC wrote,

…I'm a bit surprised that a short strangle doesn't sound like something you'd be willing to do. Unless you're using options solely to exit positions/increase income via covered calls or cash secured puts in order to enter positions, I don't see why a short strangle is all that different from just selling a call or put…I've also been doing covered strangles in order to generate more option income/premium or reduce my cost basis on positions where I own 100 shares. So far it's worked out pretty well.

It may seem elementary to experienced traders, but JC provided me with a paradigm shift. I was looking at the strategy with a bias against selling uncovered calls, but he was completely correct – I was holding a few positions wherein I was assigned shares at a paper loss, so why not take a bet on both sides of those companies? From what I figure there are three mutually exclusive outcomes:
- The stock stays within the predefined range and I keep both premiums
- The stock drops and I am assigned new shares, reducing my cost basis (remember, I already own shares)
- The stock pops, I keep the premium, and I get out from under the position

Highlighting My First Few Strangle Sales

The first trade I did was to pair up a covered call I already had on the books.
At some point in the past (timing doesn't really matter for this post) I was assigned AOBC (Smith and Wesson) shares at a cost basis of $23. Since that assignment I had been selling covered calls little by little to make up the difference between the current price and the assignment price. Well, JC's comment made me realize: if I am long on the company, why not pair it up with selling a put? Again, if the stock is put to me, all it does is reduce my cost basis, since I do not have any plans of unloading the stock anytime soon.

The second trade I did was a true (covered) strangle. Currently, I am holding a few hundred shares of URBN, but only did one trade (100 shares) wherein I sold two separate contracts:
- May 12 $20 Put for $.18
- May 12 $26 Call for $.12

The reason I only did 100 shares was that my basis in the 400 shares is in the 30s, so I didn't want the whole thing sold from underneath me in case there is a huge jump. Instead, my plan is to keep selling 1 or 2 contracts at a time, waiting for the company to recover.
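To make the three outcomes concrete, here is a small Python sketch of the per-share profit or loss at expiration for the URBN strangle above. It treats both legs as cash-settled for simplicity; in the covered version described here, a finish above $26 means the 100 shares are called away rather than an out-of-pocket loss.

```python
# Per-share P/L at expiration for the URBN strangle above:
# short May $20 put (premium $0.18) + short May $26 call (premium $0.12).
def short_strangle_pnl(price, put_strike=20.0, call_strike=26.0,
                       put_premium=0.18, call_premium=0.12):
    credit = put_premium + call_premium          # collected up front
    put_loss = max(put_strike - price, 0.0)      # stock put to you below $20
    call_loss = max(price - call_strike, 0.0)    # breached above $26
    return credit - put_loss - call_loss

for price in (18, 20, 23, 26, 28):
    print(f"URBN at ${price}: {short_strangle_pnl(price):+.2f} per share")
```

Run against a range of prices, the sketch shows the flat $0.30/share credit anywhere between the strikes and the growing loss outside them, which is exactly the trade-off JC's comment reframed.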
0.6291
FineWeb
```json [ "Strangle Option Strategy", "Selling Uncovered Calls", "Generating Option Income" ] ```
NASA astronomers announced today that they are certain that our galaxy is on an unavoidable collision course with the Andromeda Galaxy, the closest spiral galaxy to us. Don’t worry though, it won’t be happening for another 3.5 billion years or so. What’s interesting is that the collision will drastically change what our night sky looks like, and the astronomers released a series of photo illustrations showing what future astrophotographers will be shooting when they point their cameras at the heavens. Just for reference, here’s what the night sky currently looks like: In about 3.75 billion years, Andromeda will be close enough to us that it begins to dominate the night sky: As the galaxies collide, the sky will be filled with “star fireworks” — the formation of new stars: In 4 billion years, the two galaxies will begin warping one another, causing them to have a funhouse mirror look in the sky: At about 7 billion years, things will begin settling down. The two galaxies will form a new giant elliptical galaxy, with a single bright core filling up our nighttime sky:
0.9702
FineWeb
```json [ "Galaxy Collision", "Night Sky Changes", "Galactic Merger" ] ```
SOC 203 Gender & Society (3 credits) This course offers a sociological analysis of sex and gender relations in contemporary societies. One of the primary objectives of this course is to give students a foundation in sociological ways of thinking about gender in their everyday lives. We will consider how schools, the media, workplaces, families and other social institutions produce, maintain, and enforce gendered behavior and inequality. Further, we will examine the social construction of gender, cross-cultural variations in gender and the intersection of gender with race, class, ethnicity, and sexual orientation. Prerequisite: English and reading placement at college level or permission of instructor.
0.7837
FineWeb
["Gender Relations", "Sociological Analysis", "Social Institutions"]
If a 4-year-old passes the marshmallow test—resisting the urge to immediately eat a marshmallow for the promise of a second one later—then they are destined for academic success. At least that’s what many parents think, based on a famous psychology study published in 1990. But a new version of the study, which looked at a larger and more diverse group of children, is calling the original findings into question, according to The Atlantic. Researchers found that the marshmallow test wasn’t very useful in predicting a child’s future academic success at age 15, once social and economic background were taken into account. The findings, reported in Psychological Science, hold an important lesson for parents: Don’t fret if your child fails the marshmallow test.
0.8043
FineWeb
["Psychology Study", "Academic Success", "Child Development"]
Last year's update from the Intergovernmental Panel on Climate Change identified biomass-fired power plants that capture their carbon (and thus sequester atmospheric CO2) as one of the most critical tools available for stabilizing climate change by the end of this century. Last week, researchers at the University of California at Berkeley reported that carbon-capturing bio-power plants could go two steps further, rendering the entire Western North American power grid carbon-negative by 2050. The idea behind bioenergy with carbon capture and storage, or BECCS, is to capture carbon emissions from a combustion power plant's effluent using the same equipment and methods employed by a few CCS-equipped coal-fired power plants. One such plant, which started up in September in Saskatchewan, is the world's first commercial-scale coal power plant to capture over 90 percent of its carbon. But whereas power plants that capture and sequester fossilized carbon can, at best, achieve carbon-neutral performance, BECCS can be carbon-negative. That's because the carbon in the wood and other biofuels they burn was sucked from the atmosphere as the plants grew. Storing that atmospheric carbon underground is tantamount to generating electricity while actually doing Earth's climate a favor. Last week's report, in the journal Nature Climate Change, purports to be the first detailed simulation of how BECCS would play out in a particular region. The research team, led by Daniel Kammen, director of the Renewable and Appropriate Energy Laboratory at UC Berkeley, simulated BECCS deployment on the Western power system (which interconnects most of the U.S. and Canada west of the Rockies, plus Mexico's Baja California). Their SWITCH-WECC model is a standard power grid model augmented with information about the location and cost of biomass fuel sources. After screening the sustainability of biomass resources available in the region from forestry, agriculture, and municipal wastes, the researchers identified enough biomass to meet between 7 and 9 percent of projected electricity demand for 2050. But they found that pushing BECCS to that level had an outsize impact on total power sector emissions. By combining BECCS with aggressive deployment of renewable energy and fossil-fuel emission reductions, they projected that grid-wide carbon emissions could be reduced by 145 percent in 2050 relative to 1990 levels. In that scenario, with BECCS providing carbon-negative baseload power to complement solar, wind, hydropower and other renewable installations, overall emissions from the Western N.A. grid come in at -135 megatons of CO2 per year. That's enough to offset all of the emissions from Alberta's unconventional oil drilling, twice over. No doubt critics will question the validity and relevance of Berkeley's findings, starting with the alleged carbon benefits. Many critics argue that bioenergy production leads to changes in land use (such as clearing of forests) that can generate large carbon releases and thus undercut the notion of negative emissions. Then there is the cost of capturing carbon from power plant emissions. The Saskatchewan coal plant's CCS equipment has been so pricey to install and operate that it may cost more per kilowatt-hour to run than the 12 cents that its operator, SaskPower, gets for selling the electricity it generates.
In SaskPower's case, it pencils out because it can sell the captured CO2 to a nearby oil and gas operator, which uses it to stimulate oil production in the process of storing the CO2 underground. But the scale of BECCS contemplated by UC Berkeley's study is well beyond what oil markets will support. That means massive cost reductions must be achieved in the decades ahead. The third major question facing all future carbon capture and storage operations, whether they capture atmospheric or fossil CO2, is how securely the CO2 can be sequestered underground. Five years ago, one of the world's largest CCS operations experienced large-scale surface deformation, raising the spectre that rock layers expected to keep injected CO2 underground could fracture. No CO2 escaped from that remote Algerian site, but operators prematurely terminated CO2 injection there, and anxiety over CO2 leakage has paralyzed a number of CCS projects. According to the IPCC, these concerns are valid but, at least at present, none appear to be showstoppers. The international scientific body judges the challenge of stabilizing climate to be too large and important to eliminate BECCS from consideration. Berkeley's study is likely to strengthen that argument.
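The carbon arithmetic behind "carbon-neutral at best" versus "carbon-negative" can be sketched in a few lines of Python; the capture rate and per-MWh stack figures below are round illustrative numbers, not values from the Berkeley study.

```python
# Illustrative carbon accounting per MWh for fossil CCS vs. BECCS.
# All numbers are round illustrative values, not figures from the study.
def net_atmospheric_co2(stack_co2_kg, capture_rate, biogenic=False):
    """Net CO2 added to the atmosphere per MWh generated.

    For biomass, the carbon in the stack gas was recently drawn from the
    air by the growing plants, so the captured share is a net withdrawal.
    """
    emitted = stack_co2_kg * (1 - capture_rate)   # what escapes the stack
    absorbed = stack_co2_kg if biogenic else 0.0  # taken up during growth
    return emitted - absorbed

print("coal + CCS:", net_atmospheric_co2(900, 0.90), "kg CO2/MWh")    # > 0
print("BECCS     :", net_atmospheric_co2(1000, 0.90, biogenic=True))  # < 0
```

Even with 90 percent capture, the fossil plant stays slightly carbon-positive, while the biomass plant's net figure goes deeply negative because its uncaptured share is offset by the CO2 the fuel absorbed while growing.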
0.7742
FineWeb
```json [ "Bioenergy with Carbon Capture and Storage (BECCS)", "Carbon Emissions Reduction", "Climate Change Mitigation" ] ```
The SAT Prep Black Book

The Two Critical Components of SAT Math Success

Since the SAT Math section is all about basic math ideas presented in strange ways, there are two key areas of knowledge we'll need to do well on the test:

- Basic knowledge of arithmetic, geometry, and algebra (including some basic graph-related ideas)
- A thorough understanding of the SAT's unwritten rules, patterns, and quirks.

So you will need some math knowledge, of course, but you won't need anything like trig, stats, or calculus, and you won't have to memorize tons of formulas. Like I keep saying (and will continue to say), it's much more important to focus on how the test is designed than to try to memorize formulas. In a moment we'll go through the "Math toolbox," which is a list of math concepts that the SAT is allowed to incorporate when it makes up questions. After that, we'll get into the SAT's unwritten rules of math question design.
0.9734
FineWeb
["SAT Math Success", "Math Knowledge", "SAT Test Design"]
I created our first bin about a year ago when Luke was a young 3 and Lilah was 21 months. They loved burying things and digging in it mostly. Now that Luke is a young 4 and Lilah is 2-1/2, they really love the imaginative part of creating scenes and acting things out.

(Photo: One of our current bins. The filler is rice. Toy additions are pirates from a Toob, a plastic palm tree, shells from the beach that double as scoops, sand dollars, and a fossilized shark's tooth.)

But, let me start at the beginning… What is a sensory bin? A sensory bin is a homemade creation designed to give your children fine motor practice while using their senses to experience different sights, sounds, and especially textures during play. It's also great for the imagination and will likely keep them interested for decent periods of time. The bin is usually some sort of container with a lid. There is normally a movable filler (like dried rice or beans) and some sort of toys, often themed. The contents of the bin can be rotated or changed out to keep it interesting to the child.

(Photo: L&L with their very first bin in May 2011.)

Why make a sensory bin? They are loads of fun! Not only that, they're educational, too. Kids can practice scooping, pouring, spooning, finding things that are hidden (buried), practice control keeping the materials in the bin, and use their imagination! Kids definitely use most of the senses when they play with a sensory bin! Sensory bins are also great to pull out when you need to occupy your children for 10-30 minutes. (You will probably need to provide at least a little supervision, but I find my 4 and 2-1/2 year olds can use this responsibly now with only minimal observation on my part, so I can prep dinner or do other in-the-general-vicinity tasks while they play). Other good reasons for making a sensory bin? They're pretty easy to do, you probably have at least some parts for creating a bin already so it's a fairly thrifty project, and your kids will most likely LOVE it, especially if you base it on their current interests!

(Photo: Luke playing with one of our current bins.)

Now, like I said, I'm no expert on sensory bins, but these are my notes and observations based on the year or so we've used our original bin, and on my creating of 2 additional bins in the last week or so:

Notes about containers:
- If you have something that will work already, I'd go with that versus buying something, at least until you find out if your child is interested. Even a bucket or a cardboard box lined with plastic wrap would probably be just fine.
- If you have one or if you are going to buy a bin, I think a clear bin is best because it doesn't block the light so the inside of the box stays bright.
- Make sure the container is deep enough to hold an inch or two of filler at least; burying and digging in the filler is lots of fun, regardless of what theme you go with!
- If you anticipate that keeping the filler in the box might be somewhat difficult, go with a container with higher sides. Older kids can probably be responsible with a container with lower sides.
- Decide how many children will be playing at once and get a container sized accordingly. A single child could use a container as small as a shoe box, but that would probably be too small for two children…

Notes about fillers:
- The items you put into the box to "fill" it are likely to be choking hazards. Please supervise your child and make sure they are out of the I-like-to-put-everything-in-my-mouth-stage first!
- Stress to your child that the filler stays IN the box.
- My rule is that if I see filler on the floor, I ask the kids to clean it up. Sometimes I set a timer if a gentle reminder isn't enough. "I'm setting the timer for 1 minute. If there's still rice on the floor when you hear it beep, I'll clean it up but the bin will be put away for now." If it's not cleaned up at the end of the time and I have to clean it up, the bin gets put away until the afternoon or for the rest of the day.
- I have found that if I explained my expectations before play, supervised really well, and was strict on my rules for the first few times they played, they pretty much were able to use the bins without any major issues from then on.
- Occasionally they get too excited (mainly my 2-1/2 year old) and filler starts going everywhere, but a gentle reminder usually helps. Either that, or I realize it's time to move on to a different activity. 😉

Okay, enough about rules and regulations! On to the fun stuff!

Ideas for fillers:
- rice (plain or colored/dyed)
- dry beans (a single kind or a mix)
- bird seed
- dry grains
- shredded paper
- small blocks
- glass "stones" (smooth, round, sometimes used in fish tanks)
- real or fake leaves

Ideas for themes:
- knights and castles
- sight words
- monster trucks
- ocean life
- based on a book
- based on a movie, video, or TV show
- outer space
- animal habitats
- doll house
- the possibilities really are endless!

Ideas for toys/additions to the box:
- plastic figurines (we really like Toobs; also try checking Dollar Tree)
- kitchen items: measuring cups or spoons, spoons, funnels, jars with lids, colanders, chop sticks
- play kitchen items: bowls, plates, pots, pans, cups, tea pot
- plastic trees or scenery
- nature items you collect: shells (big ones make good scoops), sand dollars, sharks' teeth, rocks, fossils, leaves, gum balls, pine cones, coral
- paper print-outs (laminated or on card stock): book characters, sight words, letters of the alphabet, animals, etc.
- foam letters or numbers
- Hot Wheels or other small cars
- dollhouse furniture and people

This post may contain affiliate links.
0.9687
FineWeb
``` { "topics": [ "Introduction to Sensory Bins", "Creating a Sensory Bin", "Ideas for Sensory Bin Fillers and Toys" ] } ```
The general assumption is that increasing coverage will automatically reinforce the right of employer and employee representatives to negotiate without government interference. Yet since this argument ignores the negative freedom of association – the right not to associate – it makes neither legal nor economic sense. It is competition from outsiders that ensures the appropriateness of a collective agreement, making the positive and negative freedoms of association equally necessary. Moreover, it is impossible to determine the optimal degree of collective bargaining coverage either theoretically or empirically. Although empirical research has identified various characteristics that influence the likelihood of an establishment choosing to be bound by collective agreements, these are mainly structural factors such as how a company is organised or the extent of its export business. The actual motives have not yet been sufficiently investigated. Policymakers would therefore be well advised to refrain from interfering with the autonomy of collective bargaining, and particularly from attempts to force companies into collective agreements. At the same time, the two sides of industry must face up to the need to expand their memberships and convince their members of the benefits of the German collective bargaining system.

Expanding collective bargaining coverage

Declining collective bargaining coverage in Germany has prompted a debate on how the trend could be reversed. However, this debate makes no distinction between the coverage and the autonomy of collective bargaining.

Hagen Lesch / Sandra Vogel / Hannah Busshoff / Adam Giza · IW-Analyse · 19 Dec 2017
0.7737
FineWeb
```json [ "Collective Bargaining Coverage", "Autonomy of Collective Bargaining", "Expanding Collective Bargaining Membership" ] ```
From Bulbapedia, the community-driven Pokémon encyclopedia.

Damage modification is a system of multipliers used to change the damage dealt in a battle. These modifiers affect how the damage is calculated by multiplying the move's base power in various ways. These modifiers range from doubling to halving and even negating the damage done. Since each move has an elemental type, its effectiveness is largely determined by the targeted Pokémon's type. Moves can have regular effectiveness or be super effective, not very effective, or not effective at all. The effectiveness of types against one another has varied among generations; see this page for its Generation I iteration and this page for the current model.

- "Super effective" redirects here. For the webcomic, see Super Effective (webcomic).

When a move is super effective, it inflicts double the damage it would normally do. For example, a move like Megahorn used against a Psychic-type Pokémon will be super effective because Bug-type moves are super effective against Psychic-types. This effect can stack up; for example, a Pokémon that is Rock/Ground and is hit by a Water-type move like Surf will be damaged four times as much by the move as a Pokémon with neutral typing would be, because both Rock and Ground Pokémon are weak to Water moves.

Not very effective

When a move is not very effective, it deals half of the damage it would normally do. This works exactly like super effective moves, and can stack up. For example, a Steel/Rock Pokémon will take a quarter of the normal damage from a Normal-type move. Likewise, a Normal/Flying Pokémon hit by a Fighting-type move will be damaged normally, as the super-effectiveness of Fighting against Normal is canceled out by the not-very-effectiveness of Fighting against Flying.

When a move is not effective, it does not damage the Pokémon at all; that Pokémon is "immune" to that type. This immunity includes status moves of that type. For example, the Electric-type move Thunderbolt will have no effect on a Ground-type Pokémon such as Dugtrio. Likewise, using the move Thunder Wave will prove useless and yields "It does not affect Dugtrio" (whereas a Confuse Ray that simply missed yields "It did not affect Dugtrio"). Certain types are immune to specific moves, such as Grass-type Pokémon being immune to Leech Seed.

A move will sometimes inflict a critical hit against another Pokémon. This will double the damage done after type effectiveness calculations occur, leading to the possibility of a doubly super effective move that gets a critical hit doing eight times as much damage as normal. Pokémon with the Sniper ability will triple the damage done during a critical hit rather than double it. Regular moves have a 6.25% chance of scoring a critical hit, but some moves have increased critical hit rates. The chance of causing a critical hit can be modified by using moves or items that increase the critical hit level. Scope Lens, Razor Claw, and Super Luck add 1 level each; the Stick adds 2 levels (for Farfetch'd only); the Lucky Punch adds 2 levels (for Chansey only); and Focus Energy adds 2 levels to any Pokémon. 50% is the maximum critical hit ratio, since no situation exceeds it. When a move scores a critical hit, damage is calculated based on either the modified or unmodified attack and defense stats, whichever is more advantageous to the attacker.
For example, an attacker who has used Swords Dance and then makes a critical hit will get the benefit of the increased Attack, while one affected by Charm will use its normal Attack score. Another example: if a team had Light Screen or Reflect up and the attacker scored a critical hit against that team, the defense added by the screen would be ignored, and the attacker would do double damage. In Generation I, critical hits always ignore stat modifiers, even if this is disadvantageous to the attacker.

Same-type attack bonus

- Main article: Same-type attack bonus

A move used by a Pokémon that is of the same type as the move itself will do 150% of its normal damage, such as a Fire-type move used by a Fire-type Pokémon. Dual-typed Pokémon will receive this bonus for both of their types, and a Pokémon whose type can change in-battle will receive the bonus for whatever type it is when it makes the move.

- Main article: Weather conditions

Water-type and Fire-type moves' damage will be modified by rain and sunshine. If Rain Dance causes a downpour, Water-type moves will do 50% more damage and Fire-type moves will do 50% less damage. If Sunny Day causes harsh sunlight, Fire-type moves will do 50% more damage and Water-type moves will do 50% less damage.

- Main article: Ability

Abilities come in a wide variety and have various effects on damage modification. These effects range from increasing and decreasing the power of moves of a specific type, to granting immunities, to changing the power of moves affected by other damage modifiers. These abilities include:
- Blaze, Overgrow, Torrent and Swarm increase the power of Fire-, Grass-, Water- and Bug-type moves by 50% when the user's HP falls below 33%, respectively.
- Thick Fat decreases the damage dealt to the user by opponents' Fire-type and Ice-type moves by 50%. Heatproof decreases the damage dealt to the user by opponents' Fire-type moves by 50%.
- Levitate provides immunity to Ground-type moves, and Soundproof provides immunity to sound-based moves.
- Volt Absorb and Water Absorb grant immunity to Electric- and Water-type moves respectively, healing the user's HP by 25% instead.
- Dry Skin works in part like Water Absorb, healing the user when hit with a Water-type move, but it also heals the user by 6.25% each turn during rain. However, it also causes the Pokémon to receive additional damage from Fire-type attacks and receive 6.25% (of its HP stat) damage during sunshine.
- Flash Fire negates the effect of opponents' Fire-type moves and instead increases the power of the ability's user's own Fire-type attacks. Similarly, Motor Drive negates the effect of opponents' Electric-type moves and instead raises the ability's user's Speed. These immunities extend to status moves like Will-O-Wisp and Thunder Wave.
- Mold Breaker removes all ability-based immunities to the user's moves.
- Scrappy removes Ghost-type Pokémon's immunity to the Normal- and Fighting-type moves of the user.
- Drizzle, Drought, Sand Stream and Snow Warning cause permanent rain, sunshine, sandstorm, and hail respectively.
- Adaptability increases the STAB multiplier for moves of the same type as the user from 1.5x to 2x.
- Hustle boosts the Atk stat of the user by 50%.
- Pure Power and Huge Power both double the user's Atk stat.
- Technician increases the power of the user's moves with base power 60 or less by 50%.
- Super Luck increases the user's critical hit ratio.
- Battle Armor and Shell Armor prevent critical hits being dealt to the user.
- Sniper increases the critical hit damage multiplier from 2x to 3x.
- Tinted Lens doubles the power of the user's moves that are not very effective.
- Solid Rock and Filter each reduce the power of super effective moves used on the ability's user by one quarter.
- Wonder Guard reduces the power of the opponent's damaging attacks that are not super effective to zero.

- See also: Type-enhancing item

Some held items increase the power of an attack by a small percentage. Most of these items boost attacks only of a certain type, by 20%. Some items can also modify damage by boosting the attacker's offensive stats or the victim's defensive stats. Some berries also weaken an opponent's super effective move. Several other held items modify damage as well.

A number of other variables can affect the amount of damage caused by an attack.
- If the attacker is burned and its ability is not Guts, its physical damage will be halved.
- In a double battle, moves that hit multiple targets do 75% of the damage they do in one-on-one battles.
- Reflect and Light Screen decrease the damage of opponents' physical and special attacks, respectively, by 50% in a one-on-one battle and 33% in a double battle.

Every time a regular attack is executed, the actual damage caused is adjusted by a random multiplier between 0.85x and 1.00x. The probability of each multiplier is nonuniform due to integer division rounding, giving an approximately 6% chance for each point between 0.85 and 0.99, but only a 2.5% chance for the full damage.
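As a rough sketch of how these multipliers compose, here is a simplified Python version of the damage pipeline described above; it takes a base damage figure as given and is a simplification for illustration, not the games' exact damage formula.

```python
import random

# Simplified composition of the damage modifiers described above. This is a
# sketch of the multiplier pipeline, not the games' exact damage routine.
def modified_damage(base, type_eff=1.0, stab=False, critical=False,
                    sniper=False):
    dmg = base * type_eff                    # 0, 0.25, 0.5, 1, 2 or 4
    if stab:
        dmg *= 1.5                           # same-type attack bonus
    if critical:
        dmg *= 3.0 if sniper else 2.0        # Sniper turns 2x crits into 3x
    return int(dmg * random.uniform(0.85, 1.0))  # final random spread

# Surf (Water) with STAB against a Rock/Ground target, with a critical hit:
print(modified_damage(100, type_eff=4.0, stab=True, critical=True))
```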
0.9403
FineWeb
```json [ "Damage Modification", "Type Effectiveness", "Critical Hits" ] ```
A recent study conducted by Canadian scientists Professor Christopher Shaw and Dr. Lucija Tomljenovic revealed that the more vaccines children receive containing the adjuvant aluminum, the greater their chance of developing autism, autoimmune diseases and neurological problems in the future. In their 2013 paper, published by Springer Science+Business Media and titled Aluminum in the Central Nervous System: Toxicity in Humans and Animals, Vaccine Adjuvants, and Autoimmunity, they revealed that during a 17-year period, the rates of autism had increased significantly in countries that had the most vaccinations containing the adjuvant aluminum.

A Highly Significant Correlation
The researchers compared the number of vaccines recommended by the Centers for Disease Control and Prevention (CDC) during the period from 1991 to 2008 and the changes in the autism rates during the same period. They wrote:

“The data sets, graphed against each other, show a pronounced and statistically highly significant correlation between the number of vaccines with aluminum and the changes in autism rates. Further data showed that a significant correlation exists between the amounts of aluminum given to preschool children and the current rates of autism in seven Western countries. Those countries with the highest level of aluminum-adjuvanted vaccines had the highest autism rates.” (own emphasis)

They revealed that: “The observed correlation between the number of aluminum-adjuvanted vaccines and ASD was further tested using Hill’s criteria and met eight of nine of these, indicating that vaccines containing aluminum are highly likely to be at least partially causal for autism.”

For those who are not familiar with ‘Hill’s criteria,’ it is a technique used to determine a causal link between a specific factor and a disease. For example, does excess smoking cause lung cancer? Scientists seeking ‘to establish a valid causal connection between a potential disease agent’ now frequently use the technique, which was first developed by British medical statistician Austin Bradford Hill. Professor Shaw and Dr. Tomljenovic continued their paper by adding that:

“There are other links between aluminum exposure/toxicity and ASD. These include the following: A pilot study showed higher than normal aluminum levels in the hair, blood and/or urine of autistic children; children are regularly exposed to higher levels of aluminum in vaccines per body weight than adults; practically nothing is known about the pharmacokinetics and toxicodynamics of aluminum in vaccines in children; and aluminum in vaccines has been linked to serious neurological impairments, chronic fatigue and autoimmunity.”

If Professor Shaw and Dr. Tomljenovic are correct, then their results are extremely worrying, especially as autism is not the only condition their paper linked to the adjuvant aluminum. In fact, their paper also linked aluminum to the rise in the incidence of Alzheimer’s, Gulf War syndrome and a relatively new syndrome, ASIA (autoimmune/inflammatory syndrome induced by adjuvants), that was identified by scientists last year.

Toxic Aluminum Levels in Vaccines
In 2012, a paper written by Stephanie Seneff, Robert M. Davidson and Jingjing Liu, titled Empirical Data Confirm Autism Symptoms Related to Aluminum and Acetaminophen Exposure, also confirmed that exposure to a large number of vaccinations containing the adjuvant aluminum at a young age was the most likely cause of the increase in autism and other adverse reactions to vaccines.
They wrote: “In this paper, we have presented some analyses of the VAERS database which strongly suggest that the aluminum in vaccines is toxic to vulnerable children. While we have not shown that aluminum is directly causative in autism, the compelling evidence available from the literature on the toxicity of aluminum, combined with the evidence we present for severe adverse reactions occurring much more frequently following administration of aluminum-containing vaccines as compared to non-aluminum containing vaccines, suggests that neuronal damage due to aluminum penetration into the nervous system may be a significant factor in autism. The fact that mentions of autism rose steadily concomitant with significant increases in the aluminum burden in vaccines is highly suggestive.”

This is particularly bad news for any parent considering having their children vaccinated, as a growing number of childhood vaccinations now contain the adjuvant aluminum as an ingredient. This was explained in depth by pediatrician Robert Sears, in his excellent article published in the magazine Mothering in 2008. Dr. Sears is another professional exceptionally worried about the effects of aluminum on children’s health. In an article warning mothers about the dangers of vaccinations containing the adjuvant, titled Is Aluminum the New Thimerosal?, Dr. Sears explained that aluminum is added to vaccinations to help them work more efficiently. He stated that this would not normally be a problem, because aluminum is a naturally occurring element found everywhere in our environment, including our food, water, air and soil; nevertheless, he had become worried about the effects that aluminum was having on children’s health. He began to wonder whether anyone had ever actually tested the safe level of injected aluminum. During his research, he came across a number of extremely worrying documents. However, few were as worrying as the one written by the American Society for Parenteral and Enteral Nutrition (ASPEN). Describing the document in depth, Sears wrote:

“The source of the daily limit of 4 to 5 mcg of aluminum per kilogram of body weight quoted by the ASPEN statement seems to be a study that compared the neurologic development of about 100 premature babies who were fed a standard IV solution that contained aluminum, with the development of 100 premature babies who were fed the same solution with almost all aluminum filtered out. The study was prompted by a number of established facts: that injected aluminum can build up to toxic levels in the bloodstream, bones, and brain; that preemies have decreased kidney function and thus a higher risk of toxicity; that an autopsy performed on one preemie whose sudden death was otherwise unexplained revealed high aluminum concentrations in the brain; and that aluminum toxicity can cause progressive dementia.”

He continued by giving some extremely alarming facts, of which few parents are aware:

“However, none of these documents or studies mentions vaccines; they look only at IV solutions and injectable medications. Nor does the FDA require labels on vaccines warning about the dangers of aluminum toxicity, although such labels are required for all other injectable medications. All of these studies and label warnings seem to apply mainly to premature babies and kidney patients. What about larger, full-term babies with healthy kidneys?”

“However, these documents don’t tell us what the maximum safe dose would be for a healthy baby or child, and I can’t find such information anywhere.
This is probably why the ASPEN group suggests, and the FDA requires, that all injectable solutions be limited to 25 mcg; we at least know that that level is safe.”

If this is so, then why do the recommended childhood vaccinations include far more aluminum than the recommended amounts? According to Dr. Sears, the levels of aluminum included in childhood vaccinations are as follows:
- DTaP (diphtheria, tetanus, and pertussis): 170–625 mcg, depending on the manufacturer
- Hepatitis A: 250 mcg
- Hepatitis B: 250 mcg
- Hib (for meningitis; PedVaxHib brand only): 225 mcg
- HPV: 225 mcg
- Pediarix (DTaP–hepatitis B–polio combination): 850 mcg
- Pentacel (DTaP–Hib–polio combination): 330 mcg
- Pneumococcus: 125 mcg (emphasis added)

You do not have to be medically qualified to understand that these levels far exceed the safe levels recommended by ASPEN, especially when you consider that a newborn baby is vaccinated with the hepatitis B vaccine, containing 250 mcg of aluminum, at birth! In fact, according to Dr. Sears, the FDA stated that: “Although aluminum toxicity is not commonly detected clinically, it can be serious in selected patient populations, such as neonates (newborns), and may be more common than is recognized.” (emphasis added)

If this is true, then why are all newborn infants, including those born prematurely, vaccinated at birth with the vaccine against hepatitis B, which is loaded with more than the recommended safe levels of aluminum?

Additional Concerns for Preemies
It is a recognized fact that many babies are born prematurely. A baby can now survive outside its mother’s womb from as early as 24 weeks’ gestation. This means that many extremely premature babies are being vaccinated with massive amounts of aluminum on the day they are born. If this is not bad enough, at the tender age of eight weeks, in line with the CDC-recommended childhood vaccination schedules, these tiny, immature babies are vaccinated with as many as nine vaccinations in one day. For a baby born at 24 weeks, this means that they are still playing a game of catch-up when they are vaccinated, because they are “minus eight weeks” and not “plus eight weeks” at the time of vaccination. In fact, many of them will be taken out of their incubator to be legally vaccinated by medical professionals with vaccinations that could potentially kill them!

Aluminum, as these papers have demonstrated, is extremely toxic, especially when children are repeatedly vaccinated with vaccines containing aluminum over the recommended limits. The FDA and ASPEN recommend 25 mcg as a safe limit, and yet, as Dr. Sears has shown in his article, many of the childhood vaccinations contain aluminum far in excess of this amount. The papers that I have studied make it abundantly clear that the more vaccines that children receive containing aluminum, the greater chance they have of developing autism, autoimmune diseases, Alzheimer’s disease and neurological deficits in the future. As parents, it is our duty to protect our children at all costs, and yet how many of us know what the recommended vaccinations contain? Parents need to research more thoroughly the ingredients included in the vaccinations with which they are planning to have their child vaccinated and the possible effects that these ingredients may have on their child’s health. Autism, for the majority of children, is for life, and it can affect every aspect of their development.
Surely, we owe it to our children to at least be fully aware of the possible dangers of vaccinations before subjecting them to as many as 39 vaccines between the ages of zero and six, vaccines which are known to be loaded with aluminum, mercury, formaldehyde and many other potentially lethal ingredients. Please read the papers referenced below before vaccinating your child, or download a free list of vaccine ingredients.
0.8639
FineWeb
``` { "topics": [ "Aluminum in Vaccines", "Vaccine Safety and Autism", "Toxicity of Aluminum Adjuvants" ] } ```
The Details Matter: A Case Study
Accurate risk assessment of large industrial facilities is highly sensitive to the information that model users enter about facility composition. It is also important to account for critical mitigation efforts, like the anchoring of components. Stakeholders, like commercial underwriters, can obtain facility-specific information by inspecting the sites and noting the components and purpose of the facility. They can then translate this information into the most appropriate occupancy code in AIR's 400-series. Note that these codes reflect differences in vulnerability by region. For example, energy facilities along the U.S. Gulf Coast typically exhibit mitigation efforts, like raised control panels, more consistently than those in other parts of the United States. AIR's 400-series damage functions capture this variation.

To demonstrate the utility of the 400-series, the following example shows the differences in loss that can result from coding such exposure with the 300-series instead. A power plant located in the United States produces electricity by burning fossil fuels, such as coal or oil. It is less than five miles from several active faults, and thus at risk from hazards including earthquake-induced ground shaking, liquefaction, and tsunami. Some components, like conveyor systems, piping, and flares, would be relatively vulnerable to moderate earthquake-induced ground shaking, while others, like open frame structures and turbines, would not. If the 300-series were used instead, the best-matched occupancy code would be electric (code 361), which would be appropriate for a small electrical station. In this modeling scenario, the thermo-electric facility's total insured value, including the many structures and the diverse components, would be treated as buildings and/or contents instead of as uniquely vulnerable assets.

AIR examined the impact on the modeled average annual loss (AAL) of using the 300-series versus the 400-series. Results show that the average annual loss for the site (with a USD 230 million replacement value) decreases by nearly 37% when using the 477 code (Table 1). The 100-year return period loss decreases by 44%. Note that using the 400-series does not always result in lower loss values, but rather a more accurate estimation of the risk.

Table 1. Key metrics from a 300-series versus 400-series exposure mapping comparison. For this particular example, the average annual loss decreases by nearly 37% when using 477 (thermo-electric) versus 361 (electric), and the 100-year return period loss decreases by 44%. (Source: AIR)

| AIR Occupancy Code | Average Annual Loss (USD) | 1% Exceedance Probability Loss (USD) |
| 361 (electric) | baseline | baseline |
| 477 (thermo-electric) | ~37% lower | ~44% lower |

There are significant advantages to using the 400-series from a risk management and mitigation standpoint. Commercial underwriters can take full advantage of the detailed risk information they collect when inspecting the facilities they underwrite. They can be sure the information they have meticulously gathered is assigned the facility code that best represents the relevant component-level vulnerabilities. The process of translating the information about a facility's constituent components to a particular facility code also helps commercial underwriters better visualize and understand their exposure data. Portfolio managers can also benefit from using the 400-series. These executives are tasked with pursuing business objectives while balancing risk and return, decision-making that requires them to be highly informed.
They must closely evaluate how the diverse set of facility types they manage fits in with their entire portfolio. The increased accuracy achieved with the 400-series can help them better understand when potential losses to a single risk warrant pursuit of mitigation options, either through insurance purchase or physical improvements to critical components. By enhancing portfolio managers' risk views through comprehensive results, AIR's 400-series puts them in the best position to make fiscally responsible risk management decisions. Brokers, meanwhile, can use results from a 400-series analysis to enhance their roles as trusted advisors.

Informed decisions are an integral component of any sound risk management program, and the 400-series provides a more accurate and objective assessment of vulnerability for stakeholders throughout the risk management chain. AIR recommends that companies with portfolios of high-value industrial facilities investigate their key loss-driving risks and use the 400-series to assess potential damage and loss. AIR's detailed modeling capability for these facilities is currently available for the United States, Japan, India, the Caribbean, and Australia, and will soon be available for additional models.

Before the introduction of AIR's 400-series, commercial insurers that wanted to assess the potential damage and loss to large industrial facilities were not able to derive the maximum benefit from catastrophe models. Critical information about the default breakdown of components was unavailable, and as a result, there was high uncertainty in the estimates of damage and loss to complex, high-value industrial complexes. Since 2009, however, that has changed. AIR's industry-leading efforts to increase the accuracy of catastrophe loss assessments have extended to the commercial space. Users can leverage AIR's detailed modeling software to represent large and complex industrial facilities and to achieve more accurate loss estimates, which in turn help them make better risk transfer decisions.

1. At that time, this capability was available only for the AIR Model for U.S. Earthquake.
2. These damage functions are available in the hurricane and earthquake models for the U.S., the earthquake and typhoon models for Japan, the earthquake and tropical cyclone models for Australia, and the tropical cyclone models for the Caribbean and India.
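As a concrete illustration of the two loss metrics compared in Table 1, the sketch below computes an average annual loss and a 100-year return period (1% exceedance probability) loss from a set of simulated annual losses. The loss catalog is invented for illustration and is not output from AIR's models.

```python
import statistics

# Hypothetical simulated annual losses (USD) for one site over 10,000
# simulated "years" -- illustrative numbers only, not AIR model output.
annual_losses = sorted([0] * 9_500 + [5e5] * 400 + [2e7] * 90 + [9e7] * 10)

# Average annual loss (AAL): the mean loss per simulated year.
aal = statistics.fmean(annual_losses)

# 1% exceedance probability loss (the 100-year return period loss): the
# loss exceeded in only 1% of simulated years, i.e. the 99th percentile.
rp100 = annual_losses[int(0.99 * len(annual_losses))]

print(f"AAL: {aal:,.0f} USD")
print(f"100-year return period loss: {rp100:,.0f} USD")
```

Running these two metrics once under a 361-style damage function and once under a 477-style function is exactly the comparison summarized in Table 1.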
0.8039
FineWeb
["Risk Assessment", "Industrial Facilities", "Catastrophe Modeling"]
Xander - Book #1 in a new trilogy by Ravel Lopez. The story follows Xander, a troubled young boy raised by a deranged father, as he grows to be a man and then rises to power in a struggling colony on a desolate planet. Set thousands of years in the future, after Earth, in desperation, set out to colonize new planets. A pair of twin planets was discovered, both in equal but opposite orbits around a medium-sized star. Both had Earth-like gravity, but only one flourished under terraforming. The rich planet became the metropolized center; the other became a wasteland occupied by outlaws and outcasts. For a gruesome crime he commits, Xander gets sent to a penal colony on that planet and unites the downtrodden workers, becoming a hero and leader, but at what cost? This book was written as part of NaNoWriMo 2012 and is the first in a trilogy set on these two strange planets.
0.5437
FineWeb
["Xander Book Series", "Planet Colonization", "Science Fiction Trilogy"]
Dogs and cats need their teeth cleaned just like we do. Dental cleanings are a great way to ensure that your pet has a happy and healthy mouth. Not taking care of your pet's teeth can lead to serious problems such as heart conditions, renal problems, liver problems, gum disease, and many more. Below is a picture of "Hunter." He is a 13-year-old poodle mix that had not had a dental cleaning in many years.

"Hunter" before his dental cleaning.

Tartar can form like rocks on the teeth of dogs and cats. There are many different types of bacteria in the mouth of your pet, and letting the teeth go uncleaned allows these bacteria to multiply and be constantly swallowed by your pet. This leads to the many health problems listed above.

"Hunter" after his dental cleaning.

These pictures show the great difference and the importance of having your pet's teeth cleaned on a regular basis. Call us today to have your pet's teeth cleaned.

Non-Anesthetic Dental Cleanings
Some pets may be a candidate for a non-anesthetic dental cleaning. This painless procedure means that your pet does not have to go under anesthesia and offers pet owners an alternative method for cleaning and maintaining your pet's oral health. Please call today to see if your pet is a candidate for this method of dental cleaning.

Brush Your Pet's Teeth at Home
Can you imagine how your mouth, teeth, and gums would feel if you did not brush your teeth daily? We brush our teeth and the same should go for our pets. After a dental cleaning, pets should have their teeth brushed 3-5 times a week. Less than five minutes per brushing can save your pet from sickness and pain in the future. Please remember to only use pet-friendly toothpaste on your pet.
0.5776
FineWeb
["Pet Dental Cleanings", "Non-Anesthetic Dental Cleanings", "At-Home Pet Dental Care"]
[Numpy-discussion] Broadcasting and indexing
Thu Jan 21 10:37:09 CST 2010

I'm trying to understand how array broadcasting can be used for indexing. In the following, I use the term 'row' to refer to the first dimension of a 2D array, and 'column' to the second, just because that's how numpy prints them out. If I consider the following example:

>>> a = np.random.random((4,5))
>>> b = np.random.random((5,))
>>> a + b
array([[ 1.45499556,  0.60633959,  0.48236157,  1.55357393,  1.4339261 ],
       [ 1.28614593,  1.11265001,  0.63308615,  1.28904227,  1.34070499],
       [ 1.26988279,  0.84683018,  0.98959466,  0.76388223,  0.79273084],
       [ 1.27859505,  0.9721984 ,  1.02725009,  1.38852061,  1.56065028]])

I understand how this works, because it works as expected, as described in the NumPy broadcasting documentation. So b gets broadcast to shape (1,5), then because the first dimension is 1, the operation is applied to all rows.

Now I am trying to apply this to array indexing. So for example, I want to set specific columns, indicated by a boolean array, to zero, but the following fails:

>>> c = np.array([1,0,1,0,1], dtype=bool)
>>> a[c] = 0
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
IndexError: index (4) out of range (0<=index<3) in dimension 0

However, if I try reducing the size of c to 4, then it works, and sets rows, not columns, equal to zero:

>>> c = np.array([1,0,1,0], dtype=bool)
>>> a[c] = 0
>>> a
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.41526315,  0.7425491 ,  0.39872546,  0.56141914,  0.69795153],
       [ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.40771227,  0.60209749,  0.7928894 ,  0.66089748,  0.91789682]])

But I would have thought that the indexing array would have been broadcast in the same way as for a sum, i.e. c would be broadcast to have dimensions (1,5) and then would have been able to set certain columns in all rows to zero. Why is it that for indexing, the broadcasting seems to happen in a different way than when performing operations like additions or multiplications?

For background info, I'm trying to write a routine which performs a set of operations on an n-d array, where n is not known in advance, with a 1D array, so I can use broadcasting rules for most operations without knowing the dimensionality of the n-d array. But now that I need to perform indexing, the convention seems to change, and this is a real issue.

Thanks in advance for any advice,

More information about the NumPy-Discussion mailing list
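For readers of this archived thread: a bare boolean index applies to the first axis, which is why the 5-element mask fails on a 4-row array. A minimal sketch of the usual fix (an editorial addition, not part of the original post) is to state the axis explicitly:

```python
import numpy as np

a = np.random.random((4, 5))
c = np.array([1, 0, 1, 0, 1], dtype=bool)

# A bare boolean index selects along the first axis (rows). To select
# columns instead, supply the mask as the second index:
a[:, c] = 0          # zeroes columns 0, 2 and 4 in every row

# For the poster's n-d use case, where the 1-D mask should apply to the
# last axis, an ellipsis leaves all leading axes untouched, whatever the
# dimensionality of the array turns out to be:
b = np.random.random((2, 3, 5))
b[..., c] = 0
print(a)
```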
0.8419
FineWeb
["NumPy Broadcasting", "Array Indexing", "Multidimensional Arrays"]
Melio delivers key enterprise storage technology and enables next-generation distributed platforms to deliver unprecedented application availability and scale-out in a workload- and infrastructure-agnostic offering. The platform integrates advanced storage and application concepts, including a clustered file system, volume management, virtual storage capability (SoftSAN), rapid resource provisioning, Quality of Service (QoS), and a high-availability layer. Unique benefits derived from the Sanbolic Melio Platform include:
- Multi-workload, multi-hypervisor and multi-site support for maximum flexibility
- Increased application availability through active-active access to critical data, independent of the underlying storage hardware
- Ability to dynamically provision volumes, data-protection levels, and QoS to ensure consistent application performance and align with user-defined policies
- Centralized management of resources to improve utilization and reduce storage-related costs and complexity
0.5358
FineWeb
```json [ "Storage Technology", "Application Availability", " Distributed Platforms" ] ```
Drive Medical chrome knurled 18-inch grab bar - 1 ea, 12118

TheraFlu Sinus and Cold, Natural Lemon Flavor
- Acetaminophen - Pain Reliever/Fever Reducer.
- Pheniramine Maleate - Antihistamine.
- Phenylephrine HCl - Nasal Decongestant.
TheraFlu sinus and cold liquid powder temporarily relieves these symptoms due to a cold: nasal congestion, sinus congestion and pressure, minor aches and pains, headache, minor sore throat pain. Temporarily restores freer breathing through the nose. Helps clear nasal passages; shrinks swollen membranes. Temporarily reduces fever. TheraFlu temporarily relieves these symptoms due to hay fever or other upper respiratory allergies: runny nose, sneezing, itchy nose and throat, itchy watery eyes.
Shipping Weight: 1.70 lb
0.7827
FineWeb
["Medications", "Symptom Relief", "Product Details"]
Object Database Management System
Pages 9 (2259 words)

A database is a set of records stored in a computer in an organized way, so that a computer program is able to consult it to answer questions. For easier retrieval and sorting, each record is generally organized as a set of data elements (facts). The objects retrieved in reply to queries become information that can be used to make decisions…

There are two main factors that lead users to implement object database technology. Firstly, relational databases turn out to be cumbersome to use with composite data. Secondly, data is usually operated on by application software written in object-oriented programming languages such as C++, Java, Delphi and C#, and the code required to translate between this representation of the data and the tuples of a relational database can be tedious to write and slow to execute. This mismatch between the models used to represent information in the application programs and in the database is sometimes referred to as an impedance mismatch.

Nowadays, client-server applications that rely on a database on the server as a data store while servicing requests from multiple clients are quite common. The majority of these applications use a Relational Database Management System (RDBMS) as their data store while using an object-oriented programming language for development. The "impedance mismatch" caused as a result of having to map objects to tables and vice versa has long been accepted as a necessary performance penalty. Object database management systems added the notion of persistence to object programming languages. ...
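To make the mismatch concrete, here is a small sketch of the general point (mine, not taken from the essay): persisting one object to a relational store requires hand-written mapping code in both directions, while an object store persists the object as-is. Python's sqlite3 and shelve modules stand in for an RDBMS and an object database respectively.

```python
import sqlite3
import shelve

class Customer:
    def __init__(self, cid, name, orders):
        self.cid, self.name, self.orders = cid, name, orders

# --- Relational route: manual mapping in both directions -----------------
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (cid INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE orders (cid INTEGER, item TEXT)")

def save_relational(c):
    # The object graph must be flattened into tuples, one table at a time.
    conn.execute("INSERT INTO customer VALUES (?, ?)", (c.cid, c.name))
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [(c.cid, item) for item in c.orders])

def load_relational(cid):
    # ...and reassembled from rows on the way back out.
    name, = conn.execute("SELECT name FROM customer WHERE cid=?",
                         (cid,)).fetchone()
    orders = [row[0] for row in
              conn.execute("SELECT item FROM orders WHERE cid=?", (cid,))]
    return Customer(cid, name, orders)

# --- Object-store route: the object itself is the unit of persistence ----
def save_object(c):
    with shelve.open("customers.db") as db:
        db[str(c.cid)] = c   # no mapping layer: the object is stored whole

alice = Customer(1, "Alice", ["tea", "books"])
save_relational(alice)
save_object(alice)
print(load_relational(1).orders)   # ['tea', 'books']
```

The relational route needs a save and a load function per class, and both must be kept in sync with the schema; that maintenance burden is the everyday face of the impedance mismatch the essay describes.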
0.9198
FineWeb
["Database Management", "Object Database Technology", "Relational Database Management System"]
Poland, officially the Republic of Poland, is a European country, bordered by Germany to the west; the Czech Republic and Slovakia to the south; Ukraine and Belarus to the east; and the Baltic Sea, Kaliningrad Oblast (a Russian exclave) and Lithuania to the north. By total area, Poland is the 69th largest country in the world and the 9th largest in Europe. With a population of over 38.5 million people, Poland is the 34th most populous country in the world, the sixth most populous member of the European Union, and the most populous post-communist member of the European Union. Poland is a unitary state divided into 16 administrative subdivisions.

Many historians trace the establishment of a Polish state to 966, when Mieszko I, ruler of a territory roughly coextensive with that of present-day Poland, converted to Christianity. The Kingdom of Poland was founded in 1025, and in 1569 it cemented a longstanding political association with the Grand Duchy of Lithuania by signing the Union of Lublin, forming the Polish–Lithuanian Commonwealth. The Commonwealth gradually ceased to exist in the years 1772-1795, when the Polish territory was partitioned among Prussia, the Russian Empire, and Austria. Poland regained its independence (as the Second Polish Republic) at the end of World War I, in 1918. Two decades later, in September 1939, World War II started with the invasions of Poland by Nazi Germany and the Soviet Union (as part of the Molotov–Ribbentrop Pact). More than six million Polish citizens died in the war. In 1944, the Polish People's Republic was proclaimed; after a brief period of conflict and a falsified referendum and elections, Poland became a satellite state of the Soviet Union in 1947, and was accordingly renamed the People's Republic of Poland in 1952. During the Revolutions of 1989, Poland's communist government was overthrown and Poland adopted a new constitution establishing itself as a democracy and renaming itself the "Third Polish Republic". Despite the vast destruction the country experienced during World War II, Poland managed to preserve much of its cultural wealth. There are currently 14 heritage sites inscribed on the UNESCO World Heritage list in Poland and 54 Historical Monuments. Since the end of the communist period, Poland has achieved a "very high" ranking in terms of human development.

How places in Poland are organized
Poland has a complicated history. From 1569-1795 it was part of the Polish-Lithuanian Commonwealth. We call the subdivisions during that time commonwealth voivodships. From 1815-1914 it was known as Congress Poland. We call the subdivisions during that time congress governorates. From 1918-1939 it was known as the Second Polish Republic. We call the subdivisions during that time second republic voivodships. From 1945-1989 it was known as the People's Republic of Poland. We call the subdivisions from 1945-1975 People's republic voivodships. The subdivisions of Poland from 1975-1998, and the ones that replaced them in 1999, are both called "modern voivodships". The Family History Library Catalog lists some places in Poland according to their second republic voivodships and others according to their people's republic voivodships. We follow their lead and title Polish place pages according to their second republic voivodship (preferred) or people's republic voivodship if known, with also-located-in links to the voivodships for other time periods if known.
All places in Poland
Further information on historical place organization in Poland

Historians have postulated that throughout Late Antiquity, many distinct ethnic groups populated the regions of what is now known as Poland. The ethnicity and linguistic affiliation of these groups have been hotly debated; the time and route of the original settlement of Slavic peoples in these regions have been the particular subjects of much controversy. The most famous archeological find from the prehistory and protohistory of Poland is the Biskupin fortified settlement (now reconstructed as a museum), dating from the Lusatian culture of the early Iron Age, around 700 BC. Before adopting Christianity in 966 AD, the people of Poland believed in Svetovid, the Slavic god of war, fertility, and abundance. Many other Slavic nations held the same belief.

Poland began to form into a recognizable unitary and territorial entity around the middle of the 10th century under the Piast dynasty. Poland's first historically documented ruler, Mieszko I, accepted baptism in 966 and adopted Catholicism as the new official religion of his subjects. The bulk of the population converted in the course of the next few centuries. In 1000, Bolesław the Brave, continuing the policy of his father Mieszko, held the Congress of Gniezno and created the metropolis of Gniezno and the dioceses of Kraków, Kołobrzeg and Wrocław. As Gallus Anonymus writes in his chronicle, the principal seats of the kingdom under the early Piast dynasty were Wrocław, Kraków and Sandomierz ("Boleslaus vero, in Wratislaw, et in Cracovia, et in Sandomir, sedes regni principales obtinuat"). In 1109, Bolesław III Wrymouth defeated Henry V, King of Germany, in the Battle of Hundsfeld. In 1138, Poland fragmented into several smaller duchies when Bolesław III Wrymouth divided his lands among his sons. In 1226, Konrad I of Masovia, one of the regional Piast dukes, invited the Teutonic Knights to help him fight the Baltic Prussian pagans; a decision which would ultimately lead to centuries of warfare with the Knights. In the middle of the 13th century, the Silesian branch of the Piast dynasty (Henry I the Bearded and Henry II the Pious, ruled 1238–1241) almost succeeded in uniting the Polish lands, but the Mongols devastated the country and won the Battle of Legnica, where Duke Henry II the Pious died (1241).

In 1320, after a number of earlier unsuccessful attempts by regional rulers at uniting the Polish dukedoms, Władysław I consolidated his power, took the throne and became the first King of a reunified Poland. His son, Casimir III (reigned 1333–1370), has a reputation as one of the greatest Polish kings, and gained wide recognition as a protector of trade. He extended his kingdom to 250% of its initial size. Casimir also extended royal protection to Jews. The education of Polish society was a goal of rulers as early as the 12th century, and Poland soon became one of the most educated countries in Europe. The library catalogue of the Cathedral Chapter of Kraków, dating back to 1110, shows that in the early 12th century Polish intellectuals had access to European literature. Casimir III realized that the nation needed a class of educated people, especially lawyers, who could codify the country's laws and administer the courts and offices. His efforts to found an institution of higher learning in Poland were finally rewarded when Pope Urban V granted him permission to open the University of Kraków.
The Golden Liberty of the nobles began to develop under Casimir's rule, when in return for their military support, the king made serious concessions to the aristocrats, finally establishing their status as superior to that of the townsmen, and aiding their rise to power. When Casimir died in 1370, he left no legitimate male heir and, his other male descendants being considered either too young or unsuitable, was laid to rest as the last of the nation's Piast rulers.

Poland also became a magnet for migrants. Germans settled in the towns; the Jewish community began to settle and flourish in Poland during this era (see History of the Jews in Poland); the same applies, in smaller numbers, to Armenians. The Black Death, which afflicted most parts of Europe from 1347 to 1351, affected Poland less severely.

The rule of the Jagiellon dynasty spanned the late Middle Ages and early Modern Era of Polish history. Beginning with the Lithuanian Grand Duke Jogaila (Władysław II Jagiełło), the Jagiellon dynasty (1386–1572) formed the Polish–Lithuanian union. The partnership brought vast Lithuania-controlled Rus' areas into Poland's sphere of influence and proved beneficial for the Poles and Lithuanians, who coexisted and cooperated in one of the largest political entities in Europe for the next four centuries. In the Baltic Sea region Poland's struggle with the Teutonic Knights continued and included the Battle of Grunwald (1410), where a Polish-Lithuanian army inflicted a decisive defeat on the Teutonic Knights, both countries' main adversary, allowing Poland's and Lithuania's territorial expansion into the far north region of Livonia. In 1466, after the Thirteen Years' War, King Casimir IV Jagiellon gave royal consent to the milestone Peace of Thorn, which created the future Duchy of Prussia, a Polish vassal. The Jagiellons at one point also established dynastic control over the kingdoms of Bohemia (1471 onwards) and Hungary. In the south Poland confronted the Ottoman Empire and the Crimean Tatars (by whom they were attacked on 75 separate occasions between 1474 and 1569), and in the east helped Lithuania fight the Grand Duchy of Moscow. Some historians estimate that Crimean Tatar slave-raiding cost Poland one million of its population from 1494 to 1694.

Poland was developing as a feudal state, with a predominantly agricultural economy and an increasingly powerful landed nobility. The Nihil novi act adopted by the Polish Sejm (parliament) in 1505 transferred most of the legislative power from the monarch to the Sejm, an event which marked the beginning of the period known as "Golden Liberty", when the state was ruled by the "free and equal" Polish nobility. Protestant Reformation movements made deep inroads into Polish Christianity, which resulted in the establishment of policies promoting religious tolerance, unique in Europe at that time. It is believed that this tolerance allowed the country to avoid the religious turmoil that spread over Europe during the late Middle Ages. The European Renaissance evoked in late Jagiellon Poland (under kings Sigismund I the Old and Sigismund II Augustus) a sense of urgency about the need to promote a cultural awakening, and as a result Polish culture and the nation's economy flourished during this period.
In 1543, Nicolaus Copernicus, an astronomer from Toruń, published his epochal work De revolutionibus orbium coelestium (On the Revolutions of the Celestial Spheres), and thus became the first proponent of a predictive mathematical model confirming heliocentric theory, which ultimately became the accepted basic model for the practice of modern astronomy. Another major figure associated with the era is the classicist poet Jan Kochanowski.

The 1569 Union of Lublin established the Polish–Lithuanian Commonwealth, a more closely unified federal state with an elective monarchy, but which was governed largely by the nobility, through a system of local assemblies with a central parliament. The establishment of the Commonwealth coincided with a period of great stability and prosperity in Poland, with the union soon thereafter becoming a great European power and a major cultural entity, occupying approximately one million square kilometers of central Europe, as well as an agent for the dissemination of Western culture through Polonization in modern-day Ukraine, Belarus and western Russia.

Poland-Lithuania suffered from a number of dynastic crises during the reigns of the Vasa kings Sigismund III and Władysław IV and found itself engaged in major conflicts with Russia, Sweden and the Ottoman Empire, as well as a series of minor Cossack uprisings. From the middle of the 17th century, the nobles' democracy, suffering from internal disorder, gradually declined, leaving the once powerful Commonwealth vulnerable to foreign intervention. From 1648, the Cossack Khmelnytsky Uprising engulfed the south and east, eventually leaving Ukraine divided, with the eastern part, lost by the Commonwealth, becoming a dependency of the Tsardom of Russia. This was soon followed by the "Deluge", a Swedish invasion which swept through the Polish heartlands and damaged Poland's population, culture and infrastructure. Famines and epidemics followed the hostilities, and the population dropped from roughly 11 to 7 million. The reign of John III Sobieski marked the end of the nation's golden era. Soon, finding itself subjected to almost constant warfare and suffering enormous population losses as well as massive damage to its economy, the Commonwealth fell into decline. The government became ineffective as a result of large-scale internal conflicts (e.g. Lubomirski's Rokosz against John II Casimir and rebellious confederations) and corrupted legislative processes. The nobility fell under the control of a handful of magnates, and this, compounded by two relatively weak kings of the Saxon Wettin dynasty, Augustus II and Augustus III, as well as the rise of Russia and Prussia after the Great Northern War, only served to worsen the Commonwealth's plight. Despite this, the Commonwealth-Saxony personal union gave rise to the emergence of the Commonwealth's first reform movement, and laid the foundations for the Polish Enlightenment.

The Great Sejm convened by Stanisław August in 1788 successfully adopted the 3 May Constitution, the first set of modern supreme national laws in Europe. However, this document, accused by detractors of harbouring revolutionary sympathies, soon generated strong opposition from the Commonwealth's nobles and conservatives, as well as from Catherine II, who, determined to prevent the rebirth of a strong Commonwealth, set about planning the final dismemberment of the Polish-Lithuanian state.
Russia was aided in achieving its goal when the Targowica Confederation, an organisation of Polish nobles, appealed to the Empress for help. In May 1792 Russian forces crossed the Commonwealth's frontier, thus beginning the Polish-Russian War. The defensive war fought by the Poles and Lithuanians ended prematurely when the King, convinced of the futility of resistance, capitulated and joined the Targowica Confederation. The Confederation then took over the government. Russia and Prussia, fearing the mere existence of a Polish state, arranged for, and subsequently in 1793 executed, the Second Partition of the Commonwealth, which left the country deprived of so much territory that it was practically incapable of independent existence. Eventually, in 1795, following the failed Kościuszko Uprising, the Commonwealth was partitioned one last time by all three of its more powerful neighbours, and with this, effectively ceased to exist.

The Age of Partitions
Poles rebelled several times against the partitioners, particularly near the end of the 18th century and the beginning of the 19th century. One of the most famous and successful attempts at securing renewed Polish independence took place in 1794, during the Kościuszko Uprising, at the Battle of Racławice, where Tadeusz Kościuszko, a popular and distinguished general who had served under Washington in America, led peasants and some Polish regulars into battle against numerically superior Russian forces. In 1807, Napoleon I of France recreated a Polish state, the Duchy of Warsaw, but after the Napoleonic Wars, Poland was again divided by the victorious Allies at the Congress of Vienna of 1815. The eastern part was ruled by the Russian tsar as a Congress Kingdom, which possessed a very liberal constitution. However, the tsars soon reduced Polish freedoms, and Russia annexed the country in all but name. Thus in the latter half of the 19th century, only Austrian-ruled Galicia, and particularly the Free City of Kraków, offered an environment in which free Polish cultural life could flourish.

Throughout the period of the partitions, political and cultural repression of the Polish nation led to the organisation of a number of uprisings against the authorities of the occupying Russian, Prussian and Austrian governments. Notable among these are the November Uprising of 1830 and the January Uprising of 1863, both of which were attempts to free Poland from the rule of tsarist Russia. The November Uprising began on 29 November 1830 in Warsaw when, led by Lieutenant Piotr Wysocki, young non-commissioned officers at the Imperial Russian Army's military academy in that city revolted. They were soon joined by large segments of Polish society, and together forced Warsaw's Russian garrison to withdraw north of the city. Over the course of the next seven months, Polish forces successfully defeated the Russian armies of Field Marshal Hans Karl von Diebitsch and a number of other Russian commanders; however, finding themselves unsupported by any foreign powers, save distant France and the newborn United States, and with Prussia and Austria refusing to allow the import of military supplies through their territories, the Poles accepted that the uprising was doomed to failure. Upon the surrender of Warsaw to General Ivan Paskievich, many Polish troops, feeling they could not go on, withdrew into Prussia and there laid down their arms. Poles would have to wait another 32 years for the next opportunity to free their homeland.
When in January 1863 a new Polish uprising against Russian rule began, it did so as a spontaneous protest by young Poles against conscription into the Imperial Russian Army. However, the insurrectionists, despite being joined by high-ranking Polish-Lithuanian officers and numerous politicians, were still severely outnumbered and lacking in foreign support. They were forced to resort to guerrilla warfare tactics and ultimately failed to win any major military victories. Afterwards no major uprising was witnessed in Russian-controlled Congress Poland, and Poles resorted instead to fostering economic and cultural self-improvement.

Despite the political unrest experienced during the partitions, Poland did benefit from large-scale industrialisation and modernisation programs, instituted by the occupying powers, which helped it develop into a more economically coherent and viable entity. This was particularly true in Greater Poland, Pomerania and Warmia, annexed by Prussia (later becoming a part of the German Empire); an area which eventually, thanks largely to the Greater Poland Uprising, was reconstituted as a part of the Second Polish Republic and became one of its most productive regions.

Reconstitution of Poland
During World War I, all the Allies agreed on the reconstitution of Poland that United States President Woodrow Wilson proclaimed in Point 13 of his Fourteen Points. A total of 2 million Polish troops fought with the armies of the three occupying powers, and 450,000 died. Shortly after the armistice with Germany in November 1918, Poland regained its independence as the Second Polish Republic (II Rzeczpospolita Polska). It reaffirmed its independence after a series of military conflicts, the most notable being the Polish–Soviet War (1919–1921), when Poland inflicted a crushing defeat on the Red Army at the Battle of Warsaw, an event which is considered to have halted the advance of Communism into Europe and forced Vladimir Lenin to rethink his objective of achieving global socialism. Nowadays the event is often referred to as the "Miracle at the Vistula".

During this period, Poland successfully managed to fuse the territories of the three former partitioning powers into a cohesive nation state. Railways were restructured to direct traffic towards Warsaw instead of the former imperial capitals, a new network of national roads was gradually built up, and a major seaport was opened on the Baltic Coast, so as to allow Polish exports and imports to bypass the politically charged Free City of Danzig.

The inter-war period ushered in a new era of Polish politics. Whilst Polish political activists had faced heavy censorship in the decades up until the First World War, the country now found itself trying to establish a new political tradition. For this reason, many exiled Polish activists, such as Ignacy Jan Paderewski (who would later become Prime Minister), returned home to help; a great number of them then went on to take key positions in the newly formed political and governmental structures. Tragedy struck in 1922 when Gabriel Narutowicz, inaugural holder of the Presidency, was assassinated at the Zachęta Gallery in Warsaw by the painter and right-wing nationalist Eligiusz Niewiadomski. The 1926 May Coup of Józef Piłsudski turned rule of the Second Polish Republic over to the Sanacja movement.
By the 1930s Poland had become increasingly authoritarian; a number of "undesirable" political parties, such as the Polish Communists, had been banned, and following Piłsudski's death the regime, unable to appoint a new leader, began to show its inherent internal weaknesses and unwillingness to cooperate in any way with other political parties.

World War II
The Sanacja movement controlled Poland until the start of World War II in 1939, which began with the invasion of Poland by Nazi Germany and Slovakia on 1 September, followed on 17 September by the Soviet invasion of Poland in breach of the Soviet–Polish Non-Aggression Pact. Warsaw capitulated on 28 September 1939. As agreed in the Molotov–Ribbentrop Pact, Poland was split into two zones, one occupied by Nazi Germany, while the Kresy, or Borderlands, fell under the control of the Soviet Union. In 1939–1941, the Soviets deported hundreds of thousands of Poles to distant parts of the Soviet Union, and the Soviet secret police, the NKVD, executed thousands of Polish prisoners of war (most notoriously in the Katyn massacre).

In addition to the organised units of the 1st Army in the East and the Polish forces fighting in Nazi-occupied Europe, the domestic underground resistance movement, the Armia Krajowa, or Home Army, fought to free Poland from German occupation and establish an independent Polish state. The wartime resistance movement in Poland was one of the three largest resistance movements of the entire war and encompassed an unusually broad range of clandestine activities, which essentially functioned as an underground state complete with degree-awarding universities and a court system. The resistance was, however, largely loyal to the exiled government and generally resented the idea of a communist Poland; for this reason, on 1 August 1944 it initiated Operation Tempest and thus began the Warsaw Uprising. The objective of the uprising was to drive the German occupiers from the city and help with the larger fight against Germany and the Axis powers; a secondary motive, however, was to see Warsaw liberated before the Soviets could reach the capital, so as to underscore Polish sovereignty by empowering the Polish Underground State before the Soviet-backed Polish Committee of National Liberation could assume control. A lack of available Allied military aid and Stalin's reluctance to allow the 1st Army to help their fellow countrymen take the city ultimately led to the uprising's failure and the subsequent planned destruction of the city.

At the war's conclusion, Poland's territory was shifted westwards: the eastern border was redrawn along the Curzon Line, with the loss of the Kresy, while the western border moved to the Oder-Neisse line. As a result, Poland's territory was reduced by about 20%. The shift forced the migration of millions of people, most of whom were Poles, Germans, Ukrainians, and Jews. Of all the countries involved in the war, Poland lost the highest percentage of its citizens: over 6 million perished (nearly one-fifth of Poland's population), half of them Polish Jews. Over 90% of deaths were non-military in nature. Only in the 1970s did Poland again approach its prewar population levels. An estimated 600,000 Soviet soldiers died in the fight to drive the Germans from Poland.
Postwar communist Poland
At the insistence of Joseph Stalin, the Yalta Conference sanctioned the formation of a new Polish provisional and pro-Communist coalition government in Moscow, which ignored the Polish government-in-exile based in London; a move which angered many Poles, who considered it a betrayal by the Allies. In 1944, Stalin had made guarantees to Churchill and Roosevelt that he would maintain Poland's sovereignty and allow democratic elections to take place; however, upon achieving victory in 1945, the occupying Soviet authorities organised an election which constituted nothing more than a sham and was ultimately used to claim the "legitimacy" of Soviet hegemony over Polish affairs. The Soviet Union instituted a new communist government in Poland, analogous to much of the rest of the Eastern Bloc. As elsewhere in Communist Europe, the Soviet occupation of Poland met with armed resistance from the outset, which continued into the fifties.

The People's Republic of Poland (Polska Rzeczpospolita Ludowa) was officially proclaimed in 1952. In 1956, after the death of Bolesław Bierut, the regime of Władysław Gomułka became temporarily more liberal, freeing many people from prison and expanding some personal freedoms. A similar situation repeated itself in the 1970s under Edward Gierek, but most of the time persecution of anti-communist opposition groups persisted. Despite this, Poland was at the time considered to be one of the least oppressive states of the Soviet Bloc.

Labour turmoil in 1980 led to the formation of the independent trade union "Solidarity" ("Solidarność"), which over time became a political force. Despite persecution and the imposition of martial law in 1981, it eroded the dominance of the Communist Party and by 1989 had triumphed in Poland's first partially free and democratic parliamentary elections since the end of the Second World War. Lech Wałęsa, a Solidarity candidate, eventually won the presidency in 1990. The Solidarity movement heralded the collapse of communist regimes and parties across Europe.

A shock therapy programme, initiated by Leszek Balcerowicz in the early 1990s, enabled the country to transform its socialist-style planned economy into a market economy. As with all other post-communist countries, Poland suffered temporary slumps in social and economic standards, but it became the first post-communist country to reach its pre-1989 GDP levels, which it achieved by 1995, largely thanks to its booming economy. Most visibly, there were numerous improvements in human rights, such as freedom of speech, internet freedom (no censorship), civil liberties (1st class) and political rights (1st class), according to Freedom House. In 1991, Poland became a member of the Visegrád Group and joined the North Atlantic Treaty Organization (NATO) alliance in 1999, along with the Czech Republic and Hungary. Poles then voted to join the European Union in a referendum in June 2003, with Poland becoming a full member on 1 May 2004. Subsequently Poland joined the Schengen Area in 2007, as a result of which the country's borders with other member states of the European Union have been dismantled, allowing for full freedom of movement within most of the EU.
In contrast to this, the section of Poland's eastern border now comprising the external EU border with Belarus, Russia and Ukraine has become increasingly well protected, and has led in part to the coining of the phrase "Fortress Europe", in reference to the seeming "impossibility" of gaining entry to the EU for citizens of the former Soviet Union. On 10 April 2010, the President of the Republic of Poland, Lech Kaczyński, along with 89 other high-ranking Polish officials, died in a plane crash near Smolensk, Russia. The president's party was on its way to attend an annual service of commemoration for the victims of the Katyń massacre when the tragedy took place.

In 2011, a national census was conducted, Poland held the presidency of the Council of the European Union, and parliamentary elections took place (won by the ruling party). Poland joined the European Space Agency in 2012, co-hosted UEFA Euro 2012 with Ukraine, and began a process of digitisation. In 2013 Poland also became a member of the Development Assistance Committee.
0.6591
FineWeb
```json [ "History of Poland", "Geography of Poland", "Politics of Poland" ] ```
Step 1: Preheat oven to 375 °F / 190 °C.
Step 2: Peel and quarter the apples, then cut them into 1/8-inch slices. Toss them in a bowl with 1/2 cup of sugar and the cinnamon. Arrange them in the pastry shell. Bake in the oven for about 20 minutes, then let cool.
Step 3: Beat the egg and 1/2 cup of sugar until the mixture is pale yellow. Beat in the flour, then the whipping cream, and finally the brandy. Pour the mixture over the apples.
Step 4: Return to the oven for 10 minutes. Sprinkle with sugar and return to the oven briefly. Keep warm until ready to serve.
Wine suggestion: Cider (French Wine Guide).
0.6158
FineWeb
["Baking Instructions", "Apple Preparation", "Wine Suggestion"]
ON THIS PAGE: You will find some basic information about this disease and the parts of the body it may affect. This is the first page of Cancer.Net's Guide to Childhood Brain Stem Glioma.

About the brain stem
The brain stem connects the brain to the spinal cord. It is the lowest portion of the brain, located above the back of the neck. The brain stem controls many of the body's basic functions, such as motor skills, sensory activity, coordination and walking, the beating of the heart, and breathing. It has three parts:
- The midbrain, which develops from the middle of the brain
- The medulla oblongata, which connects to the spinal cord
- The pons, which is located between the medulla oblongata and the midbrain

About brain stem glioma
Brain stem glioma is a type of central nervous system (CNS; brain and spinal cord) tumor that begins when healthy cells in the brain stem change and grow uncontrollably, forming a mass called a tumor. A tumor can be cancerous or benign. A cancerous tumor is malignant, meaning it can grow and spread to other parts of the body. A benign tumor can grow but will not spread. A glioma is a tumor that grows from a glial cell, which is a supportive cell in the brain.

By the time brain stem glioma is diagnosed, it is most often diffuse, which means it has spread freely through the brain stem. This type of tumor is typically very aggressive, meaning that it grows and spreads quickly. A small percentage of brain stem tumors are very localized, called focal tumors. A focal tumor is often less likely to grow and spread quickly. Brain stem glioma occurs most commonly in children between five and 10 years old. Most brain stem tumors develop in the pons and grow in a part of the brain stem where it can be difficult to perform surgery, making brain stem glioma challenging to treat (see the Treatment Options section). This section covers brain stem glioma diagnosed in children. Read more about brain tumors in adults.

Looking for More of an Overview? If you would like additional introductory information, explore these related items on Cancer.Net:
- ASCO Answers Fact Sheet: Read a one-page fact sheet (available as a PDF) that offers an easy-to-print introduction to CNS tumors.
- Cancer.Net Patient Education Videos: View short videos led by ASCO experts in childhood cancers and brain tumors that provide basic information and areas of research.
- Cancer.Net En Español: Read about brain stem glioma in Spanish. Infórmase sobre el glioma de tronco encefálico en español.

The next section in this guide is Statistics; it explains how many children are diagnosed with this disease and general survival rates.
0.9901
FineWeb
["Brain Stem", "Brain Stem Glioma", "Treatment Options"]
All About Kettering...
- Kettering is a city in Greene and Montgomery Counties in Ohio.
- Only a small portion of the city lies in Greene County.
- It is a suburb of nearby Dayton.
- Kettering is the largest suburb in the Dayton area.
- The earliest resident, John Patterson, was a farmer who settled there in 1798.
- In 1841, Van Buren Township was formed as the area grew.
- The township was renamed in 1952 as the Village of Kettering.
- Kettering was named for Charles F. Kettering, the inventor of the automobile self-starter.
- On June 24, 1955, Kettering was incorporated and gained city status.
- Charles F. Kettering resided in Kettering in his home, Ridgeleigh Terrace, from 1914 until his death in 1958.
- It has a total area of 18.7 square miles.
- There were 57,502 people and 25,657 households at the time of the 2000 Census.
- The median income for a household in Kettering was $45,051 in 2000.
- The largest employers of residents in the area include: Kettering Medical Center - a hospital, Reynolds and Reynolds, GE Money - a division of General Electric, Limited Brands, The Berry Company - a subsidiary of BellSouth which publishes the Yellow Pages directories and maintains its division headquarters in Kettering, and Wright-Patterson Air Force Base.
- The city population and area businesses continue to grow and expand due to the 2005 decision of the Base Closure and Realignment Commission, which is currently bringing thousands of new jobs to Wright-Patterson Air Force Base until December 2010.
- Kettering is bordered by Dayton and Oakwood to the north; West Carrollton and Moraine to the west; Centerville and Washington Township to the south; and Beavercreek to the east.
Kettering has 20 parks totaling 284 acres. The Fraze Pavilion is a major outdoor entertainment venue that hosts the summer concerts of the Dayton Philharmonic.
0.7347
FineWeb
["History of Kettering", "Geography and Demographics", "Economy and Infrastructure"]
Lost in Transition: Constructing Memory in Contemporary Spain
H. Rosi Song
This book examines how the political period in Spain following Franco's death, known as the Transición, is being remembered by a group of writers, filmmakers and TV producers born in the sixties and early seventies. Reading against the dominant historical account that celebrates Spain’s successful democratisation, this study reveals how recent television, film and fiction recreate this past from a generational perspective, linking the experience of the Transición to the country’s present political and financial crises. Privileging above all an emotional connection, these artists use personal feelings about the past to analyse and revisit the history of their coming-of-age years. Lost in Transition considers the implications of adopting such a subjective positioning towards history that encourages an unending narrative, always in search of more meaningful and intimate connections with the past. Taking into account recent theoretical approaches to memory studies, this book proposes a new look at the production of memory in contemporary Spain and its close relationship to popular culture, shifting the focus from what is remembered to how the past is recalled affectively to be made part of an ongoing and enduring everyday experience.
H. Rosi Song is Associate Professor of Spanish at Bryn Mawr College and the co-editor of Traces of Contamination: Unearthing the Francoist Legacy in Contemporary Spanish Discourse (Bucknell UP, 2004).
[List of Illustrations]
2. Ordinary Memories: Feeling the Past
3. The Moment of Memory
4. Mediating Memory (or Telling How It Happened)
5. Transitional Stories
Copyright: © 2016
Publication: May 6, 2016
Series: Contemporary Hispanic and Lusophone Cultures 12
0.7357
FineWeb
["Memory in Contemporary Spain", "Spanish Transition Period", "Memory Studies"]
Life Cycle Assessment of New Oxy-Fuels from Biodiesel-Derived Glycerol
Abstract: Biodiesel is obtained by the transesterification of vegetable oils and animal fats, and crude glycerol is the main by-product of the biodiesel manufacturing chain. As a result, glycerol production has increased rapidly in the last decades. This work focuses on the development and validation of a process to convert biodiesel-derived glycerol into a fuel for internal combustion engines. To obtain a higher conversion efficiency, it was necessary to convert crude glycerol to tert-butyl ethers by means of an etherification process carried out in the laboratory. The resulting glycol-ethers mixture (GEM) was then blended with a commercial diesel fuel to improve its thermal efficiency. In this paper, a life cycle analysis of these GEM/diesel blends was carried out using the Life Cycle Assessment (LCA) methodology, in order to evaluate the environmental impacts of these new oxy-fuels, from GEM production to GEM use as an additive for diesel fuel. The LCA results highlight that the use of these new oxy-fuels in diesel engines can lead to an effective reduction in greenhouse gas emissions throughout the entire life cycle.
Citation: Asdrubali, F.; Cotana, F.; Rossi, F.; Presciutti, A.; Rotili, A.; Guattari, C. Life Cycle Assessment of New Oxy-Fuels from Biodiesel-Derived Glycerol. Energies 2015, 8(3), 1628-1643.
0.5433
FineWeb
["Biodiesel-Derived Glycerol", "Life Cycle Assessment", "Oxy-Fuels"]
This work claims that Musaylima served as a prophet for his own people in Yamama in more or less the same way as the Prophet Muḥammad in Mecca and Medina. However, unlike Islam, Musaylima’s religious movement did not survive. Here, a complete story of Musaylima – his claim of prophethood, qur’an (reading), religious activities, followers, opponents, and defeat – is reconstructed. A critical reading of the sources that contain the accounts of Musaylima is performed. Additionally, this study reveals that the remaining fragments of Musaylima’s qur’an bear substantial similarities to the early Meccan verses of the Qur’an – in terms of diction, style, and pattern. The findings of this study can be summarized as follows: there was more than one prophet in the sixth–seventh centuries in the Arabian peninsula, as Umayya b. Abī Ṣalt, Abu ‘Amir, Ṭulayḥa, Sajāḥ, Aswad, and Musaylima claimed prophethood. There was more than one qur’an, as Musaylima also revealed his own qur’an. It is possible that other prophets also did so. There was more than one mosque (masjid), since the followers of Abu ‘Amir established their own, as did those of Abu Qays. So did the followers of Musaylima. There was more than one Abrahamic ḥanīf monotheistic movement in the Ḥijāz and around the region, as some figures assumed the same mission.
0.943
FineWeb
["Musaylima's prophethood", "Comparative analysis of religious movements", "Historical context of Arabian peninsula"]
Rizwana Shah is from one of the most conservative areas of Pakistan. Her parents got her married once she completed the 12th grade, and she was quickly forced to find a job to support the family income. Rizwana set up a sewing business with a micro-credit loan and was able to expand her business activities. Her improved economic circumstances motivated Rizwana to help other poor, traditionally home-bound women in her region. “Women in our society are seen as a source of burden instead of peace,” Rizwana argues. “Men assume the most powerful positions in their households and are the sole decision-makers. With their skewed financial power, men believe they have a right to abuse women.” In the wake of this reflection, Rizwana advocates for peace as a source of inclusion: “We need a form of peace that necessitates equality between all genders, castes, age groups, religious groups and political groups, and the threshold to accept inherent differences. For such a peace to exist, Pakistan —and the whole world— would have to go through a cultural revolution and transformation.”
0.6655
FineWeb
["Women Empowerment", "Social Inequality", "Peace and Inclusion"]
Coloring Books - What Are They?
A coloring book is a book where graphics are drawn using lines and which readers can fill in and color by making use of things like colorful pencils, pens, crayons and other kinds of paint and colors. Although most coloring books are designed for youngsters, there are several that are made for adults. These coloring books, and coloring pages and sheets, are generally printed on paper or cardboard, and some books are designed so that each page can be detached from the book once it is completed and used as a stand-alone coloring page. Many coloring books have a distinct theme or tell a story, which means the pages can't be torn out or the book loses its purpose. Many coloring books these days include coloring pages of extremely popular cartoon characters, such as Lego Ninjago coloring pages or Teenage Mutant Ninja Turtles coloring pages. These are commonly used as advertising materials for upcoming cartoons or animated films. Coloring is not the only activity that can be found in coloring books; other activities include mazes, numbers, and lots of fun puzzles. There are also types of coloring books where young children can use stickers to stick on the pages and tell their own stories with them.
The History of Coloring Books
The very first coloring book was created in America. A process called the democratization of art led to the birth of coloring books. The man behind this process was an artist from England called Joshua Reynolds. The other two educators who helped this process along were Johann Pestalozzi from Switzerland and the German Friedrich Fröbel, who was his student. Teachers and educators from all over the world recognized the necessity of art and the value of teaching it to people, especially youngsters. The development of cognitive abilities, problem-solving skills and social skills were all benefits that art provided. The skills that a person develops in art education as a kid are incredibly likely to prove useful in his future as an adult. He might even end up finding his passion in life or a career thanks to developing an early artistic skill. Spiritual, social and mental growth are all enhanced in young children who take part in artistic activities. The McLoughlin Brothers are the two people who invented the coloring book. The Little Folks' Painting Book was the McLoughlin Brothers' first coloring book, which they made in the 1880s. For this book, they worked with a lady called Kate Greenaway, and they kept making coloring books until the 1920s. Afterwards, the McLoughlin Brothers became part of the Milton Bradley company. Richard Outcault is another man who helped the progression of the coloring book. In 1907, Richard created a coloring book called Buster's Paint Book, based on a character he had created in 1902 named Buster Brown. This book started the practice of companies using coloring books as a means of advertising their products. These books were initially made to be painted instead of colored, and despite the emergence and popularity of crayons in the 1930s, this did not change.
The Uses of Coloring Books in Education
These days coloring books are commonly used in schools for teaching kids about different things.
Coloring books covering endless topics can be found, such as numbers, animal names, and names of cities and continents. Images are easier for young children to recall, and coloring them is more exciting than other methods of learning, which is why coloring books are an excellent way of learning. Images are also a great way of teaching groups and individuals a new language. They are also utilized in certain classes as a way of enhancing creativity and knowledge of different topics such as geometry. Coloring books for adults and graduate-level students have been created since the 1980s by big publishers who aimed to focus on these groups and educate them on physiology and anatomy, as these richly colored images facilitate learning and memorization.
0.9636
FineWeb
["Coloring Books", "History of Coloring Books", "Uses of Coloring Books in Education"]
Try a little probability to sharpen your mind before the weekend. Since there are two members on each team, and only one member can go from each team, we are choosing two teams out of three, and for each of those chosen teams, one member out of two. And since each of these choices is independent, the number of choices multiplies. So, our answer = 3C2 × 2C1 × 2C1 = 3 × 2 × 2 = 12. The correct answer is C.
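The same count can be reproduced with Python's standard-library combinatorics (a minimal sketch; the team and member counts are taken from the explanation above):

```python
from math import comb  # binomial coefficient, available since Python 3.8

# Choose 2 of the 3 teams, then 1 of the 2 members from each chosen team.
# The choices are independent, so the individual counts multiply.
ways = comb(3, 2) * comb(2, 1) * comb(2, 1)
print(ways)  # 12
```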
0.7943
FineWeb
["Combinatorics", "Probability", "Statistics"]
Fluoropolymer powders closely examined
My company is installing a new finishing line that will apply thermoset powder coatings to our product. Can you tell me what the differences are between each type of thermoset powder?
Among thermosetting powder coatings, perhaps the most popular and widely utilized types of materials include epoxies, acrylics, polyesters, urethanes, and fluoropolymers. Epoxy powders are typically hard and exhibit outstanding substrate protection; their primary limitation is relatively weak exterior durability. In contrast, acrylic powders create smooth clear coats with excellent chip resistance and exterior durability. Combined with polyurethanes, acrylics represent one of the unique powder coating chemistries with many diversified applications. Polyesters are perhaps the most utilized and represent a broad spectrum of properties and applications, with a variety of chemistries often facilitating hybrid combinations. Top of the line in terms of exterior durability are fluoropolymers, which offer superb photolytic stability and water resistance.
0.5992
FineWeb
["Thermoset Powder Coatings", "Types of Thermoset Powders", "Properties and Applications"]
Standardization is vital to reducing the complexity of private clouds. The more disparate systems used, the greater the need for automation and orchestration customization, which increases the cost and complexity. Hardware and software infrastructure components should be looked at with the goal of utilizing one platform for each layer (server, storage, virtualization, and so on) wherever possible (there may be exceptions). For greenfield deployments, this is typically an easier task, as there is no legacy infrastructure to work around. Components can be selected based on cost/features and standardized on moving forward. A common approach to this is pod-based designs of compute, network and storage, which can be purchased and added as capacity demands increase. You’ll either want to settle on a standard for a pod or a standard for individual components if not using a pod design. When deploying private clouds on, or integrating them into, existing infrastructures, the standardization task will be more difficult. Legacy infrastructure not ready for refresh will have to be carefully considered on a case-by-case basis. For example, if new servers are being purchased as a part of the project but there are existing servers in place, you’ll want to carefully consider whether or not to replace them, as well, in order to standardize on a hardware platform. The additional "soft costs" incurred by maintaining the existing servers must be weighed against the "hard costs" of replacing them. Costs such as maintaining additional replacement parts, maintaining staff expertise, and automation integration will need to be considered. This decision will need to be made for each component of the infrastructure. In most cases, a single platform will be suitable for a given component, but there may be exceptions. For example, storage workloads vary widely, and there may be cases where separate storage systems may be utilized for different purposes based on performance and cost. Whenever it looks as though two components will be needed over a single standard, ensure that the benefits outweigh the potential additional soft costs. Also, look for other ways to solve the same problem. Continuing the storage example, technologies such as caching or auto-tiering may be able to provide the same performance levels on a single storage system. Standardization is fundamental to successful private cloud deployments. By standardizing on hardware and software, you are removing variables that can cause cost and risk later. With a standard platform, you’ll require only one set of administrative knowledge and streamline your automation and orchestration roll-out.
0.7401
FineWeb
["Standardization in Private Clouds", "Hardware and Software Infrastructure", "Private Cloud Deployment"]
Let I(x) be the statement "x has an Internet connection" and C(x, y) be the statement "x and y have chatted over the Internet," where the domain for the variables x and y consists of all students in your class. Use quantifiers to express each of these statements.
a. Everyone in your class with an Internet connection has chatted over the Internet with at least one other student in your class.
b. There are at least two students in your class who have not chatted with the same person in your class.
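For reference, here is one way to render these in first-order logic. Part (a) is the standard translation; for part (b) the English is ambiguous, so the formula below follows one natural reading, namely that there is no single person both students have chatted with:

```latex
% (a) Everyone with an Internet connection has chatted with at least one other student.
\forall x \left( I(x) \rightarrow \exists y \left( y \neq x \land C(x, y) \right) \right)

% (b) One reading: two distinct students share no common chat partner.
\exists x \exists y \left( x \neq y \land \lnot \exists z \left( C(x, z) \land C(y, z) \right) \right)
```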
0.7795
FineWeb
["Internet Connection", "Student Chat History", "Classroom Communication"]
For the past months there has been an intense debate, both in Greece and in international media, regarding the rise of the neo-fascist Golden Dawn in Greece. The reason is obvious: for the first time in a European Union (EU) country, a political party that, in contrast to most varieties of the European far Right, does little to hide its openly neo-nazi ideology and the culture of the nazi street gang that has been its trademark since the 1990s has become one of the rising political forces. Until now the most common mainstream attitude has been to treat Golden Dawn as a worrying exception in a period of social and political crisis. This is echoed in the official discourse about the rise of the political “extremes” in Greece (the other extreme being the Left), against which the institutions of liberal democracy must be defended. However, what is missing from this discourse on Golden Dawn as an extreme exception is a serious discussion of what led to its spectacular entrance onto the Greek political scene. Such a discussion would have shown that the rise of Golden Dawn would not have been possible without the persistent reproduction, in the discourse of the traditional Greek Right, of deeply authoritarian, conservative “law and order” elements, galvanized by the equally persistent ideological and institutional anticommunism that links the collaborators with the Nazis during the Occupation, the post-Civil War governments that imprisoned and executed left-wing militants, the 1967-74 military dictatorship, and the post-1974 authoritarian democracy. It is this long authoritarian and anticommunist tradition of the Greek Right that can also account for the conscious choice, for many years, of the leadership of the security forces not only to tolerate but also to endorse far-right ideological orientations, especially within police special forces, based on the assumption that only a perverse combination of militarism and fascism can make a 20-year-old believe that the merciless beating of protesters is a decent way to make a living. The result is that in the 2012 general election approximately 50 per cent of riot police voted for Golden Dawn. Moreover, the rise of Golden Dawn cannot be explained without taking into account the conscious choice of the mainstream parties, New Democracy and PASOK, to endorse anti-immigrant politics, rhetoric and open racism. When in January 2011 PASOK ministers referred to the 300 heroic immigrant hunger strikers as if they were sub-humans and planned a big program of building new detention centers, or when today the Samaras government announces a crackdown on “illegal immigrants,” it is obvious that the average Golden Dawn member or supporter feels entirely justified in promoting and engaging in open violence against immigrants. In the same manner, such official anti-immigrant policies act as a justification for those employers who rely on immigrant labour to engage in extreme cases of exploitation and violence, exemplified in the recent beating and chaining of an Egyptian immigrant working at a bakery in Salamis who simply demanded his wages. No wonder that such strata of small employers have increasingly turned toward the fascists. At the same time, the rise of Golden Dawn is also an indication of an extreme social and political crisis in Greece and of widespread feelings of anger that do not always take the form of collective struggle.
When anger takes the form of such individualized despair and is combined with the repulsive ideological residue of authoritarianism, conservatism, nationalism, “lumpen” sexism and violence still pervading aspects of Greek society, it takes the form not of social solidarity and common struggle but of a variety of social cannibalism that can easily be represented and manipulated by the fascists. In order to understand the rise of Golden Dawn, we must also take into consideration another important development of the past two years, namely the fact that the political system has more or less endorsed the transition toward an authoritarian “state of emergency” as a result of the economic crisis. The practice of passing laws through parliament in special fast-track sessions without any actual discussion, even for the complete overhaul of labour law; the dismantling of social rights through simple acts of cabinet; the extreme police violence against protesters; and the intensified effort toward the penalization of collective forms of protest (occupations etc.) all lead to the legitimization of a “strong State” that will “deal with anomie.” This, in turn, only gives extra legitimacy to Golden Dawn's openly authoritarian, ultra-conservative, and – despite the populist references to “Greek workers” – openly pro-business discourse. In a way we are dealing with complementary processes. On the one hand, we have the authoritarian turn of government and state practices, combining neoliberalism and authoritarian statism in a situation marked by open political crisis, an erosion of popular sovereignty and a complete disregard for democratic procedure. On the other hand, we have a shift of the discourse of mainstream parties to the right, both as an expression of the changes in policies and strategy and as a result of the rise of Golden Dawn, which also acts as a catalyst for this right-wing turn. The recent wave of anti-immigrant raids by Greek police and the decision of the Samaras government to make naturalization even more difficult (in a country where naturalization was already practically impossible for most immigrants) are just some examples of these changes. This does not mean that Golden Dawn is simply changing the political debate; in reality mainstream parties use the rise of Golden Dawn as an opportunity to bring forward their own more conservative positions. In light of the above, the cynicism of the “two extremes” schema becomes evident. According to this schema, the social and political crisis has led large segments of Greek society into a state of anomie, exemplified both in “anomic” social protests and in the rise of Golden Dawn. For some neoliberal commentators the rise of Golden Dawn is an indirect result of the 2011 Movement of the Squares, because of the latter's supposedly “anti-parliamentary” and “populist” discourse. Nothing is further from the truth. Were it not for the Movement of the Squares in May-June 2011, which followed the winter of 2010-2011 with its series of racist attacks, the situation regarding the far Right would have been worse. In reality, the “two extremes” theory is an attack on mass movements and the Left.
By hypocritically attacking Golden Dawn's “anomie,” they attempt to legitimize the backlash against mass protest, the use of extremely violent police practices (preventive arrests, plastic bullets, water cannons), and the attempt to penalize forms of protest that until now were considered legitimate (occupations of public buildings etc.).
Combating the Right
The answer to Golden Dawn can only come from the Left and mass social movements. And this is an urgent task, since currently the rising appeal of Golden Dawn acts as a barrier to the further shift to the Left of petty bourgeois and labour strata that until recently were part of the electorate of the mainstream parties. It is therefore no use for the Left to think of the potential answer to the rise of Golden Dawn in terms of a broad alliance of the parties of the “constitutional arc.” Despite their rhetorical condemnation of Golden Dawn, both PASOK and DHMAR (Democratic Left) have pushed for the current authoritarian turn and have accepted the condition of limited democracy and popular sovereignty imposed by the EU-IMF-ECB Troika. But this means the Left must deal with important challenges:
A) It is well known that fascist movements are always fuelled by despair and insecurity, especially individualized despair. Unless broad segments of society regain some form of collective confidence in their ability to change their lives through struggle, collective fight and solidarity, one can expect the continuous rise of the fascists. The simple promise of a Left government cannot create such collective confidence. If we cannot show through collective struggle that no household will be without electricity, no person without some access to medical care, no child without a school lunch; if we do not manage to actually resist aspects of the austerity measures; if we do not show that solidarity between Greek and immigrant workers is the best way to make neighborhoods safer, then the appeal of Golden Dawn, with all its over-publicized acts of “solidarity only for Greeks,” will continue to rise. Golden Dawn is trying to build the forms of its own reactionary hegemony in segments of the subaltern classes; the Left does not have the luxury of simply waiting for governmental power as a “ripe fruit.”
B) Moreover, the Left should not leave “national” references to the fascists. The current conjuncture also takes the form of a national crisis. Not in the sense described by traditionally reactionary nationalist discourse about a “conspiracy of foreigners against the nation,” but in the sense that the almost neocolonial supervision of a society by the Troika, the attempt to turn Greece into a “Special Economic Zone” for foreign capital, and the new forms of “European Economic Governance” lead to a condition of limited national and popular sovereignty that Greek capital fully accepts as a means to impose an aggressive accumulation regime. The Left needs to rethink some form of collective identity for the people living and working in this country, not in the sense of nationalism but of a collective effort to build a common future.
The anticapitalist social alliance that can lead to the “Socialism of the 21st Century” cannot be some form of “nomadic multitude.” It will be much closer to a renewed form of a “sovereign people” engaged in a democratic process of social transformation.
C) The Left cannot leave questions of corruption to far-Right rhetoric. Attacking corruption and the links between mainstream politicians and Capital does not mean endorsing some form of reactionary discourse on “cleptocracy.” Nor does it mean fantasizing about a more “moral” capitalism. But capitalism is not only about impersonal social forms and structures; it is also based on very specific money ties between business and politics. During the Movement of the Squares actions, one of the most common things to be heard was the repeated cry “Thieves!” directed against parliament; this was not a manifestation of a right-wing “antipolitical” stance, but a justified reaction against a political class that has combined full endorsement of aggressive neoliberalism with increasing personal wealth.
D) Finally, the Left does not have the luxury of avoiding direct confrontation with the fascists and their military and territorial conception of politics in terms of controlling areas and neighborhoods. Although anti-fascist politics, as shown above, is mainly about creating the conditions for a renewed radical Left hegemony, finding ways to drive the fascists out of neighborhoods and prevent them from operating freely is a crucial challenge for the Left.
Greek society is at a crossroads. The forces of capital have made it clear that they have adopted a “no holds barred” aggressive and authoritarian tactic. The rise of Golden Dawn, regardless of how long it lasts or whether it is countered by a process of reconstruction of the Right, is already acting as a catalyst for an authoritarian political and ideological turn. No wonder that the current rising star in New Democracy is Makis Voridis, an ex-far-Right militant who combines aggressive authoritarianism with an equally aggressive neoliberalism! Therefore, the challenge for the Left is not simply to rely on electoral trends but to build an alternative hegemony. Otherwise, the danger of an even more reactionary turn will remain imminent.
Panagiotis Sotiris teaches social theory, social and political philosophy at the Department of Sociology of the University of the Aegean and maintains a blog at lastingfuture.blogspot.gr, where this article first appeared. He can be reached at psot_AT_soc.aegean.gr
0.5713
FineWeb
["The Rise of Golden Dawn", "Authoritarianism and the Greek Right", "Combating the Far Right in Greece"]
Helps prevent organ rejection after transplant surgery. Brand name: SandIMMUNE. There may be other brand names for this medicine.
When This Medicine Should Not Be Used: This medicine is not right for everyone. You should not receive it if you had an allergic reaction to cyclosporine or castor oil.
How to Use This Medicine:
- Your doctor will prescribe your dose and schedule. This medicine is given through a needle placed in a vein.
- A nurse or other health provider will give you this medicine.
Drugs and Foods to Avoid: Ask your doctor or pharmacist before using any other medicine, including over-the-counter medicines, vitamins, and herbal products.
- The list below includes some of the most commonly used medicines that can interact with cyclosporine. There are many other drugs not listed. Make sure your doctor knows the names of all the medicines you use. Tell your doctor if you are taking any of the following:
- St John's wort, amiodarone, boceprevir, bromocriptine, colchicine, danazol, digoxin, methotrexate, metoclopramide, orlistat, sulfinpyrazone, tacrolimus, ticlopidine, or telaprevir
- Any medicine to treat an infection, an NSAID pain or arthritis medicine (such as aspirin, diclofenac, ibuprofen, naproxen, celecoxib), birth control pills, medicine for seizures, medicine to lower cholesterol, a steroid, a diuretic (water pill), high blood pressure medicine, a potassium supplement, or medicine or treatment for psoriasis, including PUVA or UVB light therapy
- If you are using sirolimus, take it 4 hours after you receive cyclosporine.
- This medicine may interfere with vaccines. Ask your doctor before you get a flu shot or any other vaccines.
- Do not eat grapefruit or drink grapefruit juice while you are using this medicine.
Warnings While Using This Medicine:
- Tell your doctor if you are pregnant or breastfeeding, or if you have kidney disease, liver disease, anemia, a bleeding disorder, high blood pressure, eye or vision problems, or a history of seizures.
- This medicine may cause the following problems:
- Increased risk of infections
- Increased risk of cancer, including skin cancer
- Liver problems
- Kidney problems
- Hyperkalemia (high potassium levels in the blood)
- High blood pressure
- Nervous system disorder
- This medicine may make you bleed, bruise, or get infections more easily. Take precautions to prevent illness and injury. Wash your hands often.
- Your doctor will do lab tests at regular visits to check on the effects of this medicine. Keep all appointments.
Possible Side Effects While Using This Medicine:
Call your doctor right away if you notice any of these side effects:
- Allergic reaction: Itching or hives, swelling in your face or hands, swelling or tingling in your mouth or throat, chest tightness, trouble breathing
- Change in how much or how often you urinate, bloody urine
- Confusion, weakness, uneven heartbeat, trouble breathing, or numbness in your hands, feet, or lips
- Dark urine or pale stools, nausea, vomiting, loss of appetite, stomach pain, yellow skin or eyes
- Fast heartbeat, dizziness, headache, vision changes
- Fever, chills, cough, sore throat, or body aches
- Rapid weight gain, swelling in your face, hands, feet, or lower legs
- Seizures or tremors
- Unusual bleeding, bruising, or weakness
If you notice these less serious side effects, talk with your doctor:
- Diarrhea, stomach pain
- Increased hair growth, especially on your face
- Swelling of your gums
If you notice other side effects that you think are caused by this medicine, tell your doctor. Call your doctor for medical advice about side effects. You may report side effects to FDA at 1-800-FDA-1088.
Last Updated: 11/4/2014
0.8801
FineWeb
["Cyclosporine Usage", "Drug Interactions", "Side Effects"]
Peer coaching is a confidential process through which two or more professional colleagues work together to reflect on current practices; expand, refine, and build new skills; share ideas; teach one another; conduct classroom research; or solve problems in the workplace. Although peer coaching seems to be the most prominent label for this type of activity, a variety of other names are used in schools: peer support, consulting colleagues, peer sharing, and caring. These other names seem to have evolved, in some cases, out of teacher discomfort with the term coaching. Some claim the word coaching implies that one person in the collaborative relationship has a different status. This discomfort is to be expected because the label may imply to some an inequality among colleagues that is inconsistent with the historical norm of a nonhierarchical structure within the teaching ranks. As research and experience inform us, “The reality is that a teacher has the same ‘rank’ in his or her last year of teaching as the first” (Sizer 1985). Teachers have the same classroom space, number of students, and requirements. Regardless of how coaching relationships are labeled, they all focus on the collaborative development, refinement, and sharing of craft knowledge. Peer coaching has nothing to do with evaluation. It is not intended as a remedial activity or strategy to “fix” teachers. Several school systems have supported peer coaching as a way to increase feedback about instruction and curriculum. One teacher, reflecting on the support that peer coaching offers before the formal evaluation process, described it as “a dress rehearsal before the final performance.” Another spoke of peer coaching as “a time when you can take risks and try out new ideas, instructional strategies, or different approaches to the curriculum and discuss the results with a trusted colleague.”…from http://www.ascd.org.
0.7846
FineWeb
["Peer Coaching", "Collaborative Development", "Teacher Support"]
Safeguarding is everyone’s business. If you’re concerned about a vulnerable adult who is being harmed, or an adult who cannot protect themselves from harm, the following information may help you to decide if you need to contact us. What is safeguarding adults? Safeguarding is the multi-disciplinary work we do to minimise and manage risk to adults who may be vulnerable with the aim of: - Prevention, empowerment and protection of adults - The retention of independence, well being and choice - The right to access a life free from abuse and neglect The safeguarding duties outlined within the ‘Care and Support Statutory Guidance, October 2014’ apply to an adult who: - Has needs for care and support (whether or not the local authority is meeting any of those needs) - Is experiencing, or at risk of, abuse or neglect - As a result of those care and support needs, is unable to protect themselves from either the risk of, or the experience of abuse or neglect All staff should refer to and use as appropriate the information and contacts below when considering making an alert. Who is vulnerable to abuse and in what way? Adults may be vulnerable because they are frail or have ill health, a learning or physical disability. They may be unable to take care of themselves against significant harm or exploitation. The Care Act 2014 provides a comprehensive framework for the care and protection of adults, stating the following aims: - To stop abuse or neglect wherever possible - Prevent harm and reduce the risk of abuse or neglect to adults with care and support needs - Safeguard adults in a way that supports them in making choices and having control about how they want to live - Promote an approach that concentrates on improving the life of the adult concerned - Raise public awareness so that communities as a whole, alongside professionals, play their part in preventing, identifying and responding to abuse and neglect - Provide information and support in accessible ways to help people understand the different types of abuse, how to stay safe and what to do to raise a concern about the safety or well-being of an adult - Address what has caused the abuse or neglect In order to achieve these aims, it is necessary to: - ensure that everyone, both individuals and organisations, are clear about their roles and responsibilities; - create strong multi-agency partnerships that provide timely and effective prevention of and responses to abuse or neglect; - support the development of a positive learning environment across these partnerships and at all levels within them to help break down cultures that are risk-averse and seek to scapegoat or blame practitioners; - enable access to mainstream community resources such as accessible leisure facilities, safe town centres and community groups that can reduce the social and physical isolation which in itself may increase the risk of abuse or neglect; and - clarify how responses to safeguarding concerns deriving from the poor quality and inadequacy of service provision, including patient safety in the health sector, should be responded to. The following six principles apply to all sectors and settings including care and support services, further education colleges, commissioning, regulation and provision of health and care services, social work, healthcare, welfare benefits, housing, wider local authority functions and the criminal justice system. The principles should inform the ways in which professionals and other staff work with adults. 
The principles can also help Safeguarding Adults Boards (SABs) and organisations more widely, by using them to examine and improve their local arrangements.
- Empowerment – People being supported and encouraged to make their own decisions and give informed consent. “I am asked what I want as the outcomes from the safeguarding process and these directly inform what happens.”
- Prevention – It is better to take action before harm occurs. “I receive clear and simple information about what abuse is, how to recognise the signs and what I can do to seek help.”
- Proportionality – The least intrusive response appropriate to the risk presented. “I am sure that the professionals will work in my interest, as I see them, and they will only get involved as much as needed.”
- Protection – Support and representation for those in greatest need. “I get help and support to report abuse and neglect. I get help so that I am able to take part in the safeguarding process to the extent to which I want.”
- Partnership – Local solutions through services working with their communities. Communities have a part to play in preventing, detecting and reporting neglect and abuse. “I know that staff treat any personal and sensitive information in confidence, only sharing what is helpful and necessary. I am confident that professionals will work together and with me to get the best result for me.”
- Accountability – Accountability and transparency in delivering safeguarding. “I understand the role of everyone involved in my life and so do they.”
We have a statutory duty under the Care Act 2014 to make arrangements for ensuring that our own functions, and services provided on our behalf, are fulfilled with regard to the need to safeguard adults at risk of neglect or abuse.
What are the CCG’s responsibilities in safeguarding adults?
CCGs are the major commissioners of local health services and are legally responsible for assuring themselves that the organisations from which they commission services have effective safeguarding arrangements in place – ‘Assurance & Accountability Framework: Safeguarding Vulnerable People in the Reformed NHS (2013)’.
They have to demonstrate that there are appropriate systems in place for discharging their responsibilities in respect of safeguarding adults, including:
- Plans to train their staff in recognising and reporting safeguarding issues
- A clear line of accountability for safeguarding, properly reflected in the CCG governance arrangements
- Appropriate arrangements to co-operate with local authorities in the operation of Local Safeguarding Adults Boards
- Ensuring effective arrangements for information sharing
How does the CCG discharge its responsibilities to safeguard adults?
The CCG employs a Head of Quality and Safeguarding to provide safeguarding adult expertise to the CCG. The CCG is represented at Executive level on the Teeswide Safeguarding Vulnerable Adults Board, and the Head of Quality also attends all local Adult Safeguarding Committees. The Head of Quality and Safeguarding also:
- Provides specialist advice and support across the health economy within Teesside. Provides professional advice, expertise and guidance to inform effective commissioning arrangements. Monitors adherence to statutory requirements, CQC and the commissioners’ specific quality standards relating to safeguarding adults across commissioned provider services.
- Attends Tees-wide multi-agency groups providing health expertise across the Tees area.
What should you do if you have concerns for the safety of an adult?
If you are concerned that an adult is at immediate risk of significant harm you should call the police – Dial 999. If your concerns are not immediate but you believe a vulnerable adult may be at risk, you should contact the Social Care department in the area in which the adult lives. Below is a list of contact details of the Local Authority departments covering the HAST CCG locality. When calling the Local Authority be prepared to give as much information as you possibly can.

| Stockton on Tees | Hartlepool |
| --- | --- |
| During working hours: Tel: 01642 527764. Out of hours: Emergency Duty Team | During working hours: Tel: 01429 284284 (First Contact & Support Hub). Out of hours: Emergency Duty Team |

For further information please refer to: Tees Wide Multiagency Safeguarding Adults Procedures
Read the latest news from the Teeswide Safeguarding Adults Board
Find out more about Teeswide Safeguarding Adults Board
0.6468
FineWeb
["Safeguarding Adults", "Prevention of Abuse and Neglect", "Reporting Safeguarding Concerns"]
Bear Biologist Carl Lackey releases a 370lb black bear that was found living under a shack at a ski resort in Lake Tahoe, Nevada USA. Black bears historically ranged over most of the forested regions of North America, including all Canadian provinces, Alaska, all states in the conterminous United States, and significant portions of northern Mexico (Hall 1981). Their current distribution is restricted to relatively undisturbed forested regions (Pelton 1982; Pelton et al. 1994). They hibernate throughout the colder months without eating, drinking, urinating, or defecating.
0.6791
FineWeb
["Black Bear Habitat", "Black Bear Hibernation", "Black Bear Distribution"]
SHA-1 is a Secure Hash Algorithm (SHA) with a cryptographic strength of 80 bits. The algorithm was designed by the U.S. National Security Agency and is published by the National Institute of Standards and Technology (NIST) as a U.S. Federal Information Processing Standard. It has been widely adopted for use in many applications that require secure authentication. The algorithm consists of a very complex set of functions that produce a one-way hash. This allows the algorithm to provide authentication without secrecy, meaning an encrypted communication channel is not required. [Figure: a simplified schematic of how SHA-1 authentication works.] The algorithm itself is complex, and a full explanation and documentation can be found here.
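To make the one-way property concrete, here is a minimal Python sketch of hash-based challenge-response authentication (the key and challenge values are invented for illustration; the standard-library hashlib and hmac modules do the work):

```python
import hashlib
import hmac

# One-way hashing: the 160-bit digest cannot practically be inverted.
digest = hashlib.sha1(b"hello world").hexdigest()
print(digest)  # 40 hex characters

# Challenge-response sketch: both parties hold a pre-shared key. The prover
# returns HMAC-SHA1(key, challenge), demonstrating knowledge of the key
# without ever sending the key itself over the (unencrypted) channel.
key = b"pre-shared-secret"   # hypothetical shared key
challenge = b"nonce-1234"    # hypothetical one-time challenge from the verifier
response = hmac.new(key, challenge, hashlib.sha1).hexdigest()
print(response)
```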
0.5856
FineWeb
["SHA-1 Algorithm", "Secure Authentication", "Cryptographic Strength"]
Does it start with the blk/yel hooked up? The blk/yel wire should provide ground to the switch box when the key switch is turned to the off position to kill the engine. If it will not start with the blk/yel hooked up, either the key switch is bad, the safety switch is bad, or the blk/yel is shorted to ground somewhere. One other thing it could be is the tilt mercury switch. The mercury switch should be on the side of the engine block, if yours still has it.
0.5455
FineWeb
["Engine Troubleshooting", "Key Switch and Wiring", "Safety Switch and Mercury Switch"]
Comparative evaluation of ecosystem services provided by artificial reefs in the Gulf of Thailand – Associated fish communities, their functional diversity and biomass
Coral reefs today are simultaneously threatened by anthropogenic local and global stressors. Coastal human populations depend on reefs as natural sources of food and income. The value of coral reefs worldwide has been estimated at US$30 billion in net benefits of goods and services. Therefore, the protection of these ecosystems benefits social and economic development. A potential measure is the use of artificial reefs to enhance fish production and other reef ecosystem resources. Under the Artificial Reef Program of the Department of Marine and Coastal Resources (DMCR), Thailand has deployed hard structures in coastal waters since 1978 to create fishing grounds. But until now, the effectiveness of the installed artificial reefs in terms of habitat provision and fish production, with regard to structure type and material, has not been sufficiently assessed. At the study site, Koh Phangan, 11 structures have served as artificial reefs since 2013. Around the island, seven aggregations of concrete dice have been deployed at depths of around 20 meters in the bays of Haad Khom, Chaloklum, Mae Haad, Haad Yao, Ban Thai and Thong Nai Pan, as well as four structures made of metal cubes in Haad Khom and Chaloklum. Yet, the effect on the community structure and diversity of benthos and fishes is unknown. Results will help to compare the benthic and pelagic development of two different types of artificial reef structures with natural reefs as well as with the surrounding environment without artificial structures. Further, they may allow the effectiveness of artificial reefs compared to natural reefs to be determined in terms of fish production and habitat provisioning. The findings will help to estimate the economic value and the benefit for local stakeholders and communities. Results will finally be used to give advice about further artificial reef deployment, such as favourable structure material and structure type, and which environmental factors need to be considered to support the development of a productive artificial coral reef. Background data to be assessed are volume, area covered, depth range, complexity, texture and composition, location, light availability, surrounding habitat, turbidity, surface, and deposition date. Data on fish assemblages are collected via visual underwater surveys, adapted to the natural reef sites and the artificial reef sites respectively. On the natural reef sites, the most suitable method is the belt transect survey; for artificial reefs this method is adapted to the structural set-up of the reef. Both methods yield species number and biomass per square meter, which makes it easier to compare the sites.
Nutritional status of reef-associated fishes attracted by Fish Aggregation Devices in comparison to fishes found on natural reefs, in the Gulf of Thailand
The fish aggregating properties of structures submerged or drifting in the ocean have been known and used by fishermen for centuries. It has been proposed that fish are attracted to floating or stationary objects as they provide a ‘visual stimulus in the optical void’. Since global fish stocks are in steady decline, measures to rejuvenate fish populations and their financial yield are increasing. In 2013, four metal structures were deployed near the shores of Koh Phangan to further the local fishing efforts.
Recent research indicates that artificially aggregated fish find little to no food at floating structures, implying a lower quality of life for these fish, but studies on permanent structures are still inconclusive. This study aims to determine whether the nutritional status of fish located at artificial reefs differs from that of fish on natural reefs. Results will indicate whether the structures can (a) aggregate fish, and (b) supply ample food for these fish, so as to make a meaningful contribution to local fish conservation efforts. The data is collected by visual underwater surveys. On natural reefs, belt transect surveys are conducted, and this method is adjusted for the artificial reefs for better comparison. All the fishes observed on the transects are first identified to species level. Subsequently, the species are divided into length categories, which are used in the length-weight formula to estimate the weights of the individual fish.
Comparative evaluation of ecosystem services provided by artificial reefs in the Gulf of Thailand – Associated benthic communities, their succession and primary production
Healthy coral reef ecosystems provide important ecosystem services to humans. They are priceless as a food source for millions of people, contributing roughly 10% of the world’s fish catches. However, coral reefs are threatened by global and local stressors, and already one third of the reefs have died off (Hughes et al., 2003). Over half of the remaining reefs are at risk and considered threatened (Hughes et al., 2003). Artificial reefs are defined as “submerged structures placed on the seabed deliberately, to mimic some characteristics of natural reefs” (EARRN; see Jensen, 1997). Their traditional application has focused on enhancing fish abundance and total fish catch for recreational and commercial fishermen in coastal waters, but recent evaluations suggest that they can be utilized to compensate for structural damage to natural habitat and restore productivity where habitat is lost or destroyed (Pickering et al., 1998). The aim of this project is to study the recruitment and succession pattern, and the associated primary production, of two different artificial structures (concrete and metal) compared to natural reefs. Primary production is a key ecosystem service as it produces the energetic base of most food webs in the aquatic environment (Valiela, 1995; Chapin et al., 2011). Production at higher trophic levels will depend on production at lower levels (bottom-up control). By assessing the role of ARs in enhancing primary production and energy transfer to the lower trophic levels of the benthic food web (Leitão 2013), we can determine the ecological implications of these reef structures. The recruitment and succession of organisms on the different structures will depend on the environmental conditions at each location as well as on the material used for the ARs. To be able to analyse these effects we will deploy a series of settlement tile structures (a PVC frame with a mesh to which terracotta tiles will be attached) that will act as new substrate for the recruitment of organisms. This will provide us with information on the effect of the environmental conditions. To be able to study the effect of the different materials (concrete and metal vs. natural reef) we will clear and monitor an area of 10x10 cm on each location.
Background parameters such as temperature and light will be measured with the aid of loggers. Settlement tiles will periodically be removed from each location and analysed at the laboratory, where incubation experiments will be conducted to measure respiration and primary production.
Artificial reefs (AR) are underwater structures used in coastal management that offer additional hard substrate for reef-building organisms (Bohnsack & Sutherland, 1985). They are deployed to offer a range of ecosystem services, such as increasing fishery yields and local species diversity, supporting coastal protection, and serving as an attraction for divers and recreational fishers. Recently they have become of particular interest in areas where natural reef systems are shifting to algae-dominated states, as a measure of restoration and environmental mitigation (Seaman, 2002). As they mimic natural reefs they offer a food source and potential shelter for reef fish. Their structure often works as a barrier against fishing nets, protecting local stocks from overfishing (Baine, 2001). Recent research has hinted that the protection of herbivorous fish is crucial, as they play a key role in controlling the benthic community in the Gulf of Thailand (Stuhldreier, Bastian, Schoenig, & Wild, 2015).
Bohnsack, J. A., & Sutherland, D. L. (1985). Artificial reef research – a review with recommendations for future priorities. Bulletin of Marine Science, 37(1), 11-39.
Baine, M. (2001). Artificial reefs: a review of their design, application, management and performance. Ocean & Coastal Management, 44(3-4), 241-259. doi:10.1016/S0964-5691(01)00048-5
Seaman, W. (2002). Unifying trends and opportunities in global artificial reef research, including evaluation. ICES Journal of Marine Science, 59, S14-S16. doi:10.1006/jmsc.2002.1277
Stuhldreier, I., Bastian, P., Schoenig, E., & Wild, C. (2015). Effects of simulated eutrophication and overfishing on algae and invertebrate settlement in a coral reef of Koh Phangan, Gulf of Thailand. Marine Pollution Bulletin, 92(1-2), 35-44. doi:10.1016/j.marpolbul.2015.01.007
“Ecosystem services are the direct and indirect contributions of ecosystems to human well-being. They support directly or indirectly our survival and quality of life. Ecosystem services can be categorized into four main types: Provisioning services are the products obtained from ecosystems, such as food, fresh water, wood, fiber, genetic resources and medicines. Regulating services are defined as the benefits obtained from the regulation of ecosystem processes, such as climate regulation, natural hazard regulation, water purification and waste management, pollination or pest control. Habitat services highlight the importance of ecosystems in providing habitat for migratory species and maintaining the viability of gene-pools. Cultural services include non-material benefits that people obtain from ecosystems, such as spiritual enrichment, intellectual development, recreation and aesthetic values. Some examples of key services provided by ecosystems are described below: Climate regulation is one of the most important ecosystem services both globally and on a European scale. European ecosystems play a major role in climate regulation, since Europe’s terrestrial ecosystems represent a net carbon sink of some 7-12% of the 1995 human-generated emissions of carbon. Peat soils contain the largest single store of carbon and Europe has large areas in its boreal and cool temperate zones.
However, the climate-regulating function of peatlands depends on land use, and intensification (such as drainage and conversion to agriculture) is likely to have profound impacts on the soil’s capacity to store carbon and on carbon emissions (great quantities of carbon are being emitted from drained peatlands). Water purification by ecosystems is of high importance for Europe because of the heavy pressure on water in a relatively densely populated region. Both vegetation and soil organisms have profound impacts on water movements: vegetation is a major factor in controlling floods, water flows and quality; vegetation cover in upstream watersheds can affect the quantity, quality and variability of water supply; soil micro-organisms are important in water purification; and soil invertebrates influence soil structure, decreasing surface runoff. Forests, wetlands and protected areas with dedicated management actions often provide clean water at a much lower cost than man-made substitutes like water treatment plants.” “Ocean productivity largely refers to the production of organic matter by “phytoplankton,” plants suspended in the ocean, most of which are single-celled. Phytoplankton are “photoautotrophs,” harvesting light to convert inorganic to organic carbon, and they supply this organic carbon to diverse “heterotrophs,” organisms that obtain their energy solely from the respiration of organic matter. Open-ocean heterotrophs include bacteria as well as more complex single- and multi-celled “zooplankton” (floating animals), “nekton” (swimming organisms, including fish and marine mammals), and the “benthos” (the seafloor community of organisms). The many nested cycles of carbon associated with ocean productivity are revealed by the following definitions (Bender et al. 1987) (Figure 1). “Gross primary production” (GPP) refers to the total rate of organic carbon production by autotrophs, while “respiration” refers to the energy-yielding oxidation of organic carbon back to carbon dioxide. “Net primary production” (NPP) is GPP minus the autotrophs’ own rate of respiration; it is thus the rate at which the full metabolism of phytoplankton produces biomass. “Secondary production” (SP) typically refers to the growth rate of heterotrophic biomass. Only a small fraction of the organic matter ingested by heterotrophic organisms is used to grow, the majority being respired back to dissolved inorganic carbon and nutrients that can be reused by autotrophs. Therefore, SP in the ocean is small in comparison to NPP. Fisheries rely on SP; thus they depend on both NPP and the efficiency with which organic matter is transferred up the food web (i.e., the SP/NPP ratio). “Net ecosystem production” (NEP) is GPP minus the respiration by all organisms in the ecosystem. The value of NEP depends on the boundaries defined for the ecosystem. If one considers the sunlit surface ocean down to the 1% light level (the “euphotic zone”) over the course of an entire year, then NEP is equivalent to the particulate organic carbon sinking into the dark ocean interior plus the dissolved organic carbon being circulated out of the euphotic zone. In this case, NEP is also often referred to as “export production” (or “new production” (Dugdale & Goering 1967), as discussed below). In contrast, the NEP for the entire ocean, including its shallow sediments, is roughly equivalent to the slow burial of organic matter in the sediments minus the rate of organic matter entering from the continents.”
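As a compact summary, the quoted definitions reduce to a few bookkeeping identities (notation mine; the relationships follow directly from the text above):

```latex
\begin{align*}
\mathrm{NPP} &= \mathrm{GPP} - R_{\mathrm{autotrophs}},\\
\mathrm{NEP} &= \mathrm{GPP} - R_{\mathrm{all\ organisms}},\\
\mathrm{SP}  &= \varepsilon \cdot \mathrm{NPP},
  \qquad \varepsilon = \mathrm{SP}/\mathrm{NPP} \ll 1
  \quad \text{(trophic transfer efficiency).}
\end{align*}
```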
0.7779
FineWeb
["Ecosystem Services Provided by Artificial Reefs", "Artificial Reef Structures and Associated Fish Communities", "Comparative Evaluation of Benthic Communities and Primary Production"]
Reduction capability of soil humic substances from the Rio Negro basin, Brazil, towards Hg(II) studied by a multimethod approach and principal component analysis (PCA). Publication date: 2007-03-15. This paper characterizes humic substances (HS) extracted from soil samples collected in the Rio Negro basin in the state of Amazonas, Brazil, particularly investigating their reduction capabilities towards Hg(II) in order to elucidate potential mercury cycling/volatilization in this environment. For this reason, a multimethod approach was used, consisting of both instrumental methods (elemental analysis, EPR, solid-state NMR, FIA combined with cold-vapor AAS of Hg(0)) and statistical methods such as principal component analysis (PCA) and a central composite factorial planning method. The HS under study were divided into two groups, complexing and reducing, owing to the different distribution of their functionalities. The main functionalities correlated with reduction of Hg(II) were phenolic, carboxylic and amide groups, while the groups related to complexation of Hg(II) were ethers, hydroxyls, aldehydes and ketones. The HS extracted from floodable regions of the Rio Negro basin presented a greater capacity to retain (to complex, to adsorb physically and/or chemically) Hg(II), while those from non-floodable regions showed a greater capacity to reduce Hg(II), indicating that HS extracted from the different types of regions contribute in different ways to the biogeochemical mercury cycle in the mid-Rio Negro basin, AM, Brazil. (c) 2007 Published by Elsevier B.V.
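The statistical step the abstract names, PCA over functional-group abundances, is easy to sketch. The feature matrix below is entirely hypothetical, standing in for the spectroscopic descriptors (EPR, NMR, elemental analysis) the study measured; it only illustrates how reducing and complexing samples would separate in component space.

```python
# Sketch: PCA separating humic substances (HS) into "reducing" vs.
# "complexing" groups by functional-group content. Data are invented.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: HS samples; columns: relative abundance of
# (phenolic, carboxylic, amide, ether, hydroxyl, carbonyl) groups.
X = np.array([
    [0.30, 0.25, 0.10, 0.05, 0.20, 0.10],  # non-floodable (reducing-type)
    [0.28, 0.27, 0.12, 0.06, 0.17, 0.10],
    [0.10, 0.12, 0.05, 0.25, 0.28, 0.20],  # floodable (complexing-type)
    [0.08, 0.10, 0.06, 0.27, 0.26, 0.23],
])

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(scores)  # samples with similar chemistry cluster together on PC1
```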
0.8685
FineWeb
["Humic Substances", "Mercury Cycling", "Soil Analysis"]
A Moore's Law for cable networks: every ten years, the development of new information transfer technologies requires a tenfold wider channel bandwidth from a cable system. Gigabit Ethernet answers the call from users who need higher data transfer rates. It lets us avoid bottlenecks in networks by means of a tenfold wider channel band, and it is primarily used in network backbones, for connection of servers and so on. At present its area of application extends beyond network backbones, computing clusters and distributed databases. For example, applications such as streaming video, video conferencing and processing of complex images require a higher bandwidth, and with too many users a 100 Mbit network may not cope with the increased data flow. Hence the intense interest in this technology and the increased sales of 1 Gbit adapters, fostered also by price cuts for such devices. The IEEE standard adopted in June 1998 was ratified as IEEE 802.3z. At that time only optical cable was used as a data transfer medium. A year later the 802.3ab standard also established Category 5 unshielded twisted pair as a transfer medium. Gigabit Ethernet is an extension of the Ethernet and Fast Ethernet standards, which have made a good showing over their 25-year history. It is 100 times as fast as the former and 10 times as fast as the latter: the throughput of Gigabit Ethernet reaches 1000 Mbit/s, approximately equal to 120 MB/s, i.e. nearing the speed of the 32-bit 33 MHz PCI bus. That is why 1 Gbit adapters are produced both for 32-bit PCI buses (33 and 66 MHz) and for 64-bit ones. The technology is backward compatible with the previous Ethernet standards (10 and 100 Mbit), which allows a smooth transition of current networks to higher speeds, possibly even without changing the existing cable structure. The standard inherits all the previous features of Ethernet, such as the frame format, CSMA/CD technology (carrier-sense multiple access with collision detection), full duplex, etc. The new touches are primarily connected with the high speeds. But the real advantage of Gigabit Ethernet is that it is based on the old standards. It has the appearance of the good old Ethernet (being its extension), and the high speeds were reached without considerable changes in the protocol. Hence the second advantage of Gigabit Ethernet compared to ATM and Fibre Channel - its low price. In general, Ethernet prevails in the sphere of network technologies, making up over 80% of the world data transfer market. And I suppose it will keep on dominating in this sphere, as 10 Gbit Ethernet is just around the corner. Originally Gigabit Ethernet could use only optical cables as a data transfer medium - the 1000BASE-X interface. It was based on the physical-level standard of Fibre Channel (a technology for interconnecting workstations, data storage devices and peripheral nodes). Since that technology had been approved earlier, its adoption reduced the development time of the Gigabit Ethernet standard. The 1000BASE-X standard consists of three physical interfaces with the following characteristics: Table 1. Technical characteristics of the 1000BASE-X standards. The 1000BASE-T standard, adopted a year later, established Category 5 unshielded twisted pair as a transfer medium. The 1000BASE-T differs from Fast Ethernet 100BASE-TX in using all four pairs (100BASE-TX uses only two).
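As the next paragraph details, each of the four pairs carries 250 Mbit/s; a quick sanity check of the aggregate figures quoted above (simple arithmetic, not from the original article):

```python
# Aggregate 1000BASE-T rate and the PCI bus comparison.
pairs, rate_per_pair = 4, 250          # Mbit/s per twisted pair
total_mbit = pairs * rate_per_pair
print(total_mbit)                      # 1000 Mbit/s
print(total_mbit / 8 * 1e6 / 2**20)    # ~119 MiB/s, close to the quoted 120 MB/s

pci_bytes_per_s = 33e6 * 4             # 33 MHz x 4 bytes (32-bit bus)
print(pci_bytes_per_s / 2**20)         # ~126 MiB/s theoretical peak
```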
Data travel at 250 Mbit/s in each of the 4 pairs. The standard provides duplex transmission, and each pair works in bidirectional mode. Organizing 1 Gbit duplex transmission over twisted pair turned out to be more difficult than in the case of 100BASE-TX because of noticeable crosstalk from the three neighboring twisted pairs, and it required the development of special scrambled, noise-immune transmission and intelligent signal detection and regeneration on reception. The 1000BASE-T standard uses 5-level pulse-amplitude modulation (PAM-5) as its encoding method. The cable must meet quite strict requirements (they concern crosstalk in unidirectional transmission, return loss, delays and phase skew). This gave birth to Category 5e for unshielded twisted pair. Existing cable must be recertified to comply with the new category; most proprietary cables do pass, and at present Category 5e cable is produced in the largest quantities. Fig. 2. Mounting of connectors on a cable for the 1000BASE-T. Interesting features appear at the MAC level of the 1000BASE-T standard. In Ethernet networks the maximum distance between stations (the collision domain) is determined by the minimal frame size (in the Ethernet IEEE 802.3 standard it equals 64 bytes). The maximum segment length must let the transmitter detect a collision before a frame transmission is completed (the signal must have time to reach the other end of the segment and return). So, to increase transfer rates it is necessary either to make the frame size greater (thus increasing the minimal time of frame transfer) or to reduce the diameter of the collision domain. The second approach was taken for Fast Ethernet. For Gigabit Ethernet it was unacceptable, because a standard inheriting such components of Fast Ethernet as the minimal frame size, CSMA/CD and the time slot would only be able to work in collision domains no greater than 20 m in diameter. That is why the transmission time of the minimal frame was increased. To provide compatibility with the previous Ethernet standards, the minimal frame size was left at 64 bytes, and the frame got an additional field, called the carrier extension, which pads it to 512 bytes. The field is of course not added if the frame size is already greater than 512 bytes. The resulting minimal frame size on the wire is thus 512 bytes, the time slot increased, and the segment diameter was raised to 200 m (in the case of 1000BASE-T). Symbols in the carrier extension carry no meaning, and no checksum is calculated for them. This field is stripped off at the MAC level at the reception stage, and higher levels work with minimal frames 64 bytes long. Although the carrier extension preserved compatibility with the previous standards, it also resulted in an unjustified waste of bandwidth. The losses can reach 448 bytes (512-64) per frame in the case of short frames. That is why the 1000BASE-T standard was modernized and the concept of Packet Bursting introduced, which makes it possible to use the channel more effectively. If an adapter or switch has several small frames to send, the first one is transferred with the extension up to 512 bytes, and the following frames are transferred as is, with the minimal interval of 96 bit times between them. The interframe interval is filled with carrier extension symbols. This continues until the total size of the transferred frames reaches 1518 bytes.
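Before moving on, here is the padding arithmetic behind the figures just quoted (illustrative only):

```python
# Carrier extension overhead for a minimal Ethernet frame.
MIN_FRAME = 64          # bytes, classic Ethernet minimum
EXTENDED_MIN = 512      # bytes, Gigabit Ethernet slot size

extension = EXTENDED_MIN - MIN_FRAME
print(extension)                  # 448 bytes of filler per short frame
print(MIN_FRAME / EXTENDED_MIN)   # 0.125 -> only 12.5% of the slot is payload
```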
So, the medium doesn't "fall silent" while small frames are being transferred, and a collision can occur only at the first stage, when the first small frame with the carrier extension (512 bytes) is delivered. This mechanism boosts network performance, especially under heavy loads, by reducing the probability of collisions. But it was not enough. At first Gigabit Ethernet supported only the standard Ethernet frame sizes, from the minimal 64 bytes (extended up to 512) to the maximum of 1518 bytes. 18 bytes are taken by the standard control header, leaving 46 to 1500 bytes for data. But even a 1500-byte packet is too small for a 1 Gbit network, especially for servers transferring large amounts of data. Just take a look at some figures. To transfer a 1 GB file over an unloaded Fast Ethernet network, the server processes 8200 packets/sec and the transfer takes at least 11 seconds. In this case 10% (!) of the time of a 200 MIPS computer will be spent on interrupt processing, because the central processor must work on (calculate a checksum for, copy into memory) every incoming packet. Table 2. Characteristics of transmission of the Ethernet networks. In 1 Gbit networks the load on the processor rises approximately ten times because of the reduced time interval between frames and the interrupt requests addressed to the processor. The table shows that even in the best conditions (frames of the maximum size), frames are separated by at most 12 microseconds. With smaller frames the time interval shrinks even more. That is why in 1 Gbit networks the frame-processing stage (on the CPU) is the weakest point. At the dawn of Gigabit Ethernet the actual data rates were far from the theoretical maximum, as the processors couldn't cope with the load. There are two obvious ways out: increase the frame size, or offload part of the frame processing from the CPU to the adapter. At the moment both approaches are put to use. In 1999 it was suggested that the packet size be increased. Such packets are called Jumbo Frames, and they can measure from 1518 up to 9018 bytes (equipment from some manufacturers supports even larger Jumbo Frames). They reduce the processor's load by as much as 6 times (in proportion to their size), thus boosting performance. For example, the maximum Jumbo Frame size of 9018 bytes reserves 9000 bytes for data besides an 18-byte header, which corresponds to 6 standard maximum-size Ethernet frames. The performance gain is achieved not by saving a few control headers (they account for only a few percent of the overall traffic) but by reducing the time spent processing frames. Or, to be more precise, the time to process one frame remains the same, but instead of several standard frames, each needing N processor cycles and one interrupt, there is only one large frame to be processed. Besides, many modern network adapters use special hardware to offload part of the traffic processing from the CPU. Under high traffic rates they use buffering - the CPU processes several frames one right after another. However, such niceties push up the prices of these adapters.
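The frame-count saving described above is easy to quantify (illustrative arithmetic only):

```python
# Frames (and thus, roughly, interrupts) needed to move 1 GB of data
# with standard vs. jumbo frames.
GB = 10**9
for payload in (1500, 9000):    # data bytes per frame
    frames = GB // payload
    print(payload, frames)      # 1500 -> 666,666 frames; 9000 -> 111,111 frames
```

The ratio is exactly the factor of 6 the text cites: one jumbo frame carries the payload of six standard frames, so the per-frame CPU cost is paid one-sixth as often.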
Today we will review three Gigabit Ethernet cards (all of them Desktop versions, not server ones), and all of them were tested on the 33 MHz PCI bus. The first card is the LNIC-1000T from LG Electronics. This two-chip card uses microcontrollers from National Semiconductor Corporation. The card has six (!) LEDs, which indicate speeds of 10/100/1000 Mbit (three LEDs) and collisions, data transfer and full-duplex mode (the other three). One of the card's controllers - the DP83861VQM-3 - handles Auto Negotiation of the speeds and modes listed above (IEEE 802.3u). The second controller - the DP83820 - is a 10/100/1000 Mbit Ethernet controller that connects the network card to the PCI bus. The driver version is 184.108.40.206. It opens access to a wide range of card settings. During the tests we left the default settings and changed only the size of the Jumbo Frame (1514, 4088, 9014 and 10,000 bytes). Under Linux the bundled drivers didn't work, and we used the kernel's integrated National Semiconductor DP83820 driver, ver. 0.18. In this driver the Jumbo Frame size is limited to 8192 bytes. The next card is the Hardlink HA-32G from MAS Elektronik AG. This network card is based on the same two controllers as the previous one, but it uses a different PCB and lacks a heatsink on the transceiver. There are three LEDs that indicate the 10/100/1000 Mbit speed and, by blinking, data transfer. The drivers again came from the software bundle supplied with the card. The driver version under Windows 2000 is also 220.127.116.11, but its interface offers far fewer settings than that of the previous card. All the settings remained at their defaults. The maximum Jumbo Frame size is unknown, but the maximum size used in the tests (16128 bytes) could be set without problems. Under Linux the drivers supplied with the card refused to work with the default settings as well, and I turned to the OS's driver (the same as in the previous case). But there was an interesting problem: the cards couldn't detect the link in the Linux-Linux pairing. The network interfaces came up, but the link was not established. When we rebooted one of the computers into Windows 2000, the link appeared with both straight-through and crossover cables. The situation repeated with the 2.4.17, 2.4.18 and 2.4.19 kernels. That is why no Linux test results are given for this adapter. The last card for today is the Intel PRO/1000 MT Desktop from Intel Corporation. This is a newer-generation one-chip solution built on the Intel 82540EPB microcontroller, which fulfills the functions of both chips of the above cards. The card has two LEDs, one showing link and data transfer, the other (two-color) the 10/100/1000 Mbit speed. The drivers for both OSs were taken from the company's site. The driver version under Windows 2000 is 18.104.22.168, and it provides extensive configuration settings for the adapter. Nevertheless, we left the default settings and changed only the Jumbo Frame size (4088, 9014, 16128 bytes). The Linux driver, ver. 4.3.15, worked flawlessly. Under Windows 2000 we used the Iperf 1.2 and NTttcp (from the Windows 2000 DDK) programs for traffic generation and measurement. Both programs were used to measure data rates and CPU utilization at the Jumbo Frame sizes listed above. Under Linux we used netPIPE 2.4 for traffic generation and measurement. The program generates traffic with a gradually growing data packet size (a packet of size N is transferred several times; the number of transmissions is inversely proportional to its size, but not less than 7).
This method shows the percentage of channel utilization as a function of the size of the data transferred. The Jumbo Frame size was changed by changing the MTU in the network interface settings. The NTttcp startup options were:
receiver: NTttcp -b 65535 -o logfile -P -r
transmitter: NTttcp -b 65535 -o logfile -P -t
[Diagram 1. Windows 2000, transfer speed.] Strangely enough, the speed doesn't exceed 600 Mbit/s even with a Jumbo Frame of 16128 bytes. It's not clear where the bottleneck is, as the processor is not fully loaded in any case. It's also clear that Intel's adapter gains the least from the increased frame size - the speed rises only slightly, though the CPU load falls. The Intel PRO/1000 MT adapter loads the processor least of all - roughly half the load of its rivals - because it's based on a newer controller. With Jumbo Frames disabled the Intel holds the crown in speed; with Jumbo Frames enabled the crown passes to the LG LNIC-1000T. Why that adapter falls behind Intel's model is not clear - the problem is probably hidden in the driver. [Diagram 3. Linux, MTU size.] [Diagram 4. Linux, performance comparison at equal MTU sizes.] Here you can compare the peak performance of these two adapters at different MTU sizes. Incidentally, the resulting speed under Linux is higher than under Windows. The tests show that Jumbo Frames do boost performance, so it's better to use them in operation. They must of course be supported by both the network adapter and the switch, but unfortunately not all devices provide such support. The Intel PRO/1000 MT adapter becomes an unequaled leader with Jumbo Frames, and it also produces the lowest CPU load - mostly on account of its newer controller. The issue of the low speed (no more than 600 Mbit/s) of all the adapters remains open, and the next test may clear things up. Evgeniy Zaitsev ([email protected])
0.5685
FineWeb
["Gigabit Ethernet", "Network Performance", "Jumbo Frames"]
Thirty years after winning their basketball final in 1978, five friends are reunited when they return to their hometown for the funeral of their old coach. Lenny (Adam Sandler) is a wealthy Hollywood agent with three spoiled children and a beautiful wife, Roxanne (Salma Hayek), a world-class fashion designer. Eric (Kevin James) is also married with children, but is on a downward spiral. Kurt (Chris Rock) is a house husband who doesn’t feel appreciated by his wife or children. Marcus (David Spade) is still behaving like a teenager. Rob (Rob Schneider) is on his third marriage, to a woman old enough to be his mother. The five men and their families go on to spend the Fourth of July weekend together at a lakeside lodge, a favourite place when they were growing up. Their aim is to spread Coach Buzzer’s ashes. During the weekend, the five men rediscover their friendship and undertake a lot of soul-searching. Their children discover just how much fun life can be without mobile phones and video games. The men are also challenged to a basketball rematch with the boys - now men - they defeated in 1978.
Here we outline any topics, issues and ideas in this movie that might upset children and adolescents, so that you can gauge whether it is appropriate for your child. For example, children and adolescents may react adversely to themes of crime, suicide, drug and alcohol dependence, death, serious illness, family breakdown, separation from a parent, animal cruelty or distress, children as victims, natural disasters and racism.
Family relationships; mid-life experiences
Here we identify any violence in this movie, and explain how and why it might impact on your child or adolescent. In general, movie violence can make children less sensitive to the use of violence in real life. Alternatively, they may become fearful about the prevalence and likelihood of violence in their own world. In some contexts, it can also teach them to see violence as an acceptable means of conflict resolution. Grown Ups contains infrequent slapstick violence and accidental harm. It also contains frequent psychological violence in the form of demeaning insults and ridicule. For example:
- During a basketball match, a young boy bumps into and knocks over another young boy.
- A young boy plays a video game called Cruise Ship Wars. The aim of the game is to run around a cruise ship cutting off people’s heads with a chainsaw. You get extra points for tipping grandmother over the ship’s rail. Images from the game show blood splattering across the screen and people being shot.
- A man picks up a heated rock used for therapy and screams in pain. He places it on a woman’s back. She screams.
- A preschool-aged girl gets upset when she sees moths being zapped by an electric bug zapper. A woman steps on a dead moth and squashes its body into the ground.
- When a young girl sees a rope on a tree, she says, ‘We get to hang ourselves’.
- Lenny slaps Rob across the face (several times) with a piece of dried banana and then with a packet of bacon.
- In reference to Lenny’s son’s misbehaviour, Lenny says that his father would have given him a backhand to the face, with possible bleeding from the ear.
- Five men play a game called Arrow Roulette. It involves a man firing an arrow into the air. The other men run away before the arrow comes down, and the last man to run away is the winner. As the men run away, one runs into a tree branch and is knocked off his feet. Another trips over a tree stump and lands face first in animal poo.
- Another refuses to move and has his foot impaled by the arrow. We see the arrow through the man’s foot and blood around the wound. The man faints when he sees the arrow through his foot. He’s carried off by two men, and his foot is bandaged.
- Rob is shot in the foot with an arrow. His wife suggests a poultice. Rob tells her to ‘Get some alcohol on this’, and then calls her a rude name. He says a lot of insulting things about her age, looks and abilities.
- Throughout the movie, Lenny and his friends keep insulting each other about appearance, weight, ethnicity, lifestyles, eating habits, children, monetary status, sexual relations, age, physical abilities and deformities. For example, one person is described as an ‘Elvis Oompa Loompa’ whose hair looks like a dirty cue tip.
- A man tries to ride a flying fox while hanging upside down. He crashes heavily through the side of a wooden building. He ends up in a full-body plaster cast.
- Rob kicks Marcus hard in the groin twice, causing Marcus to collapse on the ground in agony.
- A drunk man shoots an arrow high into the air at a crowded outdoor basketball game. The crowd shouts and runs away. The falling arrow impales the foot of a man who is wearing a full-body plaster cast and can’t run away. He falls backwards unconscious.
Content that may disturb children
In addition to the violent scenes mentioned above, this movie contains some scenes that could scare or disturb children under eight. For example, an elderly woman shows off her swollen, infected-looking bunions. A young child screams and runs away at the sight. Apart from the violent scenes mentioned above, there are no scenes in this movie that are likely to scare or disturb children in this age group. This movie is unlikely to scare or disturb children in this age group.
Grown Ups contains frequent sexual references, innuendo and double entendres. For example:
- While playing in a basketball match, an 11-year-old boy winks suggestively at a young girl watching in the stands. Another young boy does the same thing. When the girl nods her head in response, the boy shakes his head and nods at the girl’s mother.
- A man makes a rude remark about the sexual preferences of another man’s wife.
- A woman asks a man if he was the one who didn’t reach puberty until 19.
- An 11-year-old boy tells another that he is going to Italy, where they show breasts on TV.
- Five men discuss whether you can have sex with a woman when she is pregnant.
- A man talks about having sex with his wife while she was asleep.
- A man tells his young child that babies come from his mummy’s ‘poop’. We hear how the child kept looking for a brother in the toilet.
- A man makes a lewd remark about a woman using a breast pump.
- A group of women talks about a man’s tight-fitting swimsuit as a ‘banana hammock’.
- A man has a sexually suggestive conversation with his wife while they’re dancing.
- A man talks about having sex with a young woman.
- When a man complains about his family not appreciating him, a woman makes an insulting remark about it being ‘his time of the month’.
- A man makes an insulting remark about the way another man and his partner have sex.
The movie also contains a lot of ‘toilet humour’ about poos, wees and farts. There are also several scenes involving breastfeeding. For example, a four-year-old boy says to his mother, ‘I want some milk’. His mother allows the child to drink from her breast. (The boy’s head obscures her breast.)
Other mothers, with shocked expressions, cover the eyes of their children. In a later scene, the boy looks at another woman’s exposed cleavage and asks his mother if he could have some of her milk.
Alcohol, drugs and other substances
This movie contains some use of substances. For example:
- A man behaves in a drunk way at his young son’s basketball celebration. His son says, ‘You’re drunk, dad’.
- Some men talk about how they used to go to a lakeside lodge to get ‘wasted’. Their children overhear and ask what ‘getting wasted’ means. The parents say it means ‘eating ice-cream’. One young child says that he wants to get wasted every day of his life.
- Men drink beer out of bottles a couple of times.
- Several times, Marcus drinks shots of bourbon and gets so drunk that he doesn’t know what he’s doing. He dances while holding a bottle of bourbon, acting as if the bottle is his dance partner. While at a crowded public gathering, he fires an arrow into the air.
Nudity and sexual activity
This movie contains some nudity and sexual activity. For example:
- Lenny reaches behind Rob and sticks his finger into Rob’s clothed buttocks.
- Rob and his much older wife kiss each other on the lips and touch tongues. Two men watching the pair pretend to gag. In a later scene, Rob straddles his wife and the pair rub noses.
- One scene shows a back view of Marcus’s naked buttocks.
- A young woman in very short shorts bends over a car. The camera focuses on her buttocks and crotch from behind. Four men in deck chairs deliberately stare at the woman as she bends over.
- At Water World, a group of women stares at a bare-chested muscular man wearing small, tight shorts. The man flexes his muscles to attract the women’s attention. One of the women tells the others, ‘There are two big advantages to breastfeeding’. She then does a seductive pole dance around an umbrella to attract the man’s attention.
- After sliding down a waterslide, a young woman gets out of the water. The camera focuses on her buttocks, and her bikini looks like a g-string. Two young boys stare at her.
- A man asks a pregnant woman if he can touch her belly. He says he likes to touch the baby. He touches the woman’s breast by ‘mistake’.
The following products are displayed or used in this movie: video games, hand-held video machines, Wii, Cadillac, Budweiser beer, KFC, bottled water (Voss) and mobile phones. This movie contains some coarse language, modified swear words and putdowns.
Ideas to discuss with your children
Grown Ups is a comedy that relies almost solely on crude humour, sexual references or innuendo, and demeaning insults to entertain. The main positive messages in the movie are:
- Childhood is a time for imaginative play and physical activity. It shouldn’t be spent on violent video games and texting.
- Life can be difficult, but it’s the bumps in life that give it depth.
- In the words of Coach Buzzer, ‘Live life with no regrets’.
Unfortunately, you might find these messages are overshadowed by the crude humour and the way in which the characters treat each other. You might like to talk with your children about:
- the disrespectful way that the main characters talk about and view women
- the harmful and demeaning way that the men ridicule each other.
0.9326
FineWeb
["Family relationships and mid-life experiences", "Violence and crude humor", "Sexual references and innuendo"]
Trying to combine rural development with conservation in developing countries cannot make the world’s rural poor substantially better off, or protect biodiversity, argues UC Davis professor Truman Young in a recent article. In a recent issue of the Journal of International Wildlife Law and Policy, Young argues that the countryside cannot sustain current populations except at poverty levels. Efforts to decrease poverty and restore natural habitats should recognize that human populations have been moving from rural to urban areas on a massive scale, and should focus on giving these migrants the skills to thrive in cities, he says. Young focuses on Brazil’s Amazon Basin and Kenya’s Maasai Mara National Reserve as examples of the limits of rural development. Both areas are rich in natural biodiversity and generally poor in the nutrients or rainfall needed to sustain profitable agricultural communities. Using development/conservation policies to help increase agricultural production or resource extraction (such as timber or rubber) may substantially raise family incomes in the short term, but will never do so sufficiently to lift families fully out of poverty, Young said. “We need to recognize that for many living in rural areas, there is a strong attraction, especially in the younger generation, to the cities,” he said. “This trend has been growing steadily, and the rural populations of many tropical countries are already declining. Given this phenomenon, the best long-term strategy may be to assist the migrant rural communities by giving them education to assist in this migration, rather than enticing them to stay on unprofitable lands.” In addition, land left abandoned as people migrate to the cities can be made available for natural biodiversity. His paper points to cases of farmland in the Amazon that was transformed into secondary rainforest after it had been abandoned. He is quick to point out that the movement to the cities is not an economic panacea; many urban centers are poor, unhealthy and dangerous places. Nor is he advocating the forcible removal of populations from their rural homes as a way to restore biodiversity. Instead, he urges conservationists to recognize urban migration as an inevitable long-term human current, and to refocus their efforts on making the transition as smooth and environmentally friendly as possible.
0.627
FineWeb
["Rural Development", "Conservation", "Urban Migration"]
The Supreme Court is the final judge in all cases involving laws of Congress and the highest law of all, the Constitution. The Supreme Court, however, is far from all-powerful. Its power is limited by the other two branches of government; for example, the President nominates justices to the court. Hope this helps!
0.8954
FineWeb
["Supreme Court", "Congress", "Constitution"]
(Not to be confused with topic WebBrowserMissingWidgetWorkArounds. Notice "Widget" versus "Wget".) Sometimes the web browser is missing something that is in wget, and this leads to horrendous work-arounds. First and most obvious example: most web browsers cannot resume an interrupted download where it left off. Really annoying for that 100-megabyte file over dialup, or that 4 GB DVD ISO over an even slightly flaky DSL. Any UNIX wizard worth his salt these days would fire up wget to get the file (it retries automatically, and wget -c resumes where it left off if the retry fails). The user's browser can't do that. So what do commercial sites that provide large binaries do? They provide a grappling-hook-style .exe that pulls the large file, resuming as needed. This conveniently solves the problem for most users, but what's your UNIX user to do if he needs the large file (they probably provide no link to the real file)? Slightly offtopic, but do those auto-download things ever annoy you when your intent is to copy the URL and paste it into an xterm ssh'd into a remote machine that wants the file (and will use wget to retrieve the URL)? Yes, they're *intensely* annoying, especially since many of them require some kind of negotiation with a browser such that wget fails, so I need to use a command-line browser such as Lynx to do the download. When I run into such a site I just take the hint that this company and its products are useless to me and look elsewhere. -AnonymousDonor
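For readers who haven't used the flags being discussed, a typical resumable fetch looks like this (the URL is a placeholder):

```sh
# -c continues from a partial file; -t 0 retries indefinitely on flaky links
wget -c -t 0 http://example.com/big-dvd-image.iso
```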
0.5026
FineWeb
["Web Browser Limitations", "Wget Capabilities", "Download Workarounds"]
Starting from the problem of defining the tangent to the graph of a function, we introduce the derivative of a function. Two points on the graph can always be joined by a secant, which is a good model for the tangent whenever these points are close to each other. In a limiting process, the secant (discrete model) is replaced by the tangent (continuous model). Differential calculus, which is based on this limiting process, has become one of the most important building blocks of mathematical modelling. Chapter 7: The Derivative of a Function (Springer International Publishing).
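The secant-to-tangent limit described here is easy to check numerically; a minimal sketch with f(x) = x² at x₀ = 1, where the tangent slope is exactly 2:

```python
# Secant slopes approaching the tangent slope of f(x) = x**2 at x0 = 1.
def f(x):
    return x * x

x0 = 1.0
for h in (1.0, 0.1, 0.01, 0.001):
    secant_slope = (f(x0 + h) - f(x0)) / h   # discrete model
    print(h, secant_slope)                    # tends to 2.0, the tangent slope
```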
0.694
FineWeb
["Derivative of a Function", "Differential Calculus", "Mathematical Modelling"]
The desc element (<desc>) provides a human-readable description for container elements and graphics elements. In the following code example, the desc element is used to define a description of an element. This element can be read programmatically to analyze SVG structure. Copy this sample to a text file and save it with the .html file extension. Run it in Internet Explorer 9 to see a greenyellow ellipse. The element will look like this: <!DOCTYPE HTML> <html> <head></head> <body> <svg width="400" height="400"> <ellipse cx="150" cy="100" rx="100" ry="75" fill="greenyellow"> <desc>This is the description of an ellipse.</desc> </ellipse> </svg> </body> </html> Note: In addition to the attributes, properties, events, methods, and styles listed above, SVG elements also inherit core HTML attributes, properties, events, methods, and styles.
- Scalable Vector Graphics: Document Structure, Section 5.11.5
The SVGDescElement object has these events:
- onload: Occurs when the browser has fully parsed the element and all of its descendants.
The SVGDescElement object has these properties:
- className: Gets the names of the classes that are assigned to this object.
- focusable: Determines if an element can acquire keyboard focus (that is, receive keyboard events) and be a target for field-to-field navigation actions (such as when a user presses the Tab key).
- ownerSVGElement: Gets the nearest ancestor svg element.
- style: Gets a style object.
- viewportElement: Gets the element that established the current viewport.
- xmlbase: Gets or sets the base attribute on the element.
- xmllang: Gets or sets a value that specifies the language that is used in the contents and attribute values of an element.
- xmlspace: Gets or sets a value that indicates whether white space is preserved in character data.
Specification status:
- SVG 1.1: W3C Recommendation
- SVG Tiny 1.2: W3C Recommendation
This article contains content originally from external sources. Portions of this content come from the Microsoft Developer Network: [Windows Internet Explorer API reference Article]
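Since the desc element can be read programmatically, here is a small sketch of pulling descriptions out of a standalone SVG file with Python's standard library (the file name is a placeholder):

```python
# Extract <desc> text from an SVG document using the standard library.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"
tree = ET.parse("figure.svg")          # placeholder file name
for desc in tree.getroot().iter(SVG_NS + "desc"):
    print(desc.text)                   # e.g. "This is the description of an ellipse."
```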
0.9769
FineWeb
["SVG Elements", "SVGDescElement Object", "Scalable Vector Graphics"]
Using Bonds For Passive Income
Continuing the theme of finding passive income investments, let’s take a look at another asset class that belongs in the portfolio of all investors. This asset class typically performs well when stocks are performing poorly. While blue-chip stocks are a winner for dividend investing, bonds offer more price stability and a guaranteed rate of return. Let’s take a look at investment-grade corporate bonds.
What is a bond? A bond is a debt security in which an individual investor lends money to an organization. Bonds have a number of unique differences from stocks. Stocks represent ownership in a company, whereas a bond is an obligation of a company. Bondholders have no rights to the equity of a company, but they do have a claim against the assets of a company. For example, a company that chooses to go out of business has to satisfy its obligations to its bondholders. Bond interest is a guaranteed payment, and it is made semiannually: every 6 months an investor receives an interest payment. These payments continue until the bond’s maturity date, when the bond is redeemed and the investor receives their principal back. A bond’s maturity could be as short as 1 year or as long as 30 years. The interest rate paid depends heavily upon the credit quality of the issuer and the term of the bond. The interest rate environment matters as well.
What are the types of bonds?
Savings Bonds & Treasury Bonds. Savings bonds and Treasury bonds are guaranteed investments, issued and backed by the government. These are bonds with extremely limited risk of default, which would only occur if the government folds or does not raise its debt ceiling to address interest payments. These bonds are virtually risk-free, considering that the government has not defaulted yet.
Municipal bonds are government bonds as well, issued by state and local governments. These bonds are often used to fund highways, bridges, tunnels, roads, and other local projects. They are tax-advantaged since their interest is generally exempt from federal taxes (and often state taxes as well). Municipal bond investing bears the risk of default by the municipality.
Corporate bonds are bonds issued by companies. These bonds pay the highest interest rates since they are tied to the fortunes of an individual company. A stable company like Berkshire Hathaway has to pay a much lower interest rate on bonds than an unstable company like Rite Aid. Corporate bonds are sold in blocks of $1,000.
Risk of bond investing. The risk of default does exist when buying a bond. A company in financial peril has a higher risk of default. For example, a bankruptcy proceeding could be used to discharge the amount of money owed to bondholders, and bondholders would have to get in line behind more senior debtholders.
Getting Passive Income From Bond Investing. Bonds are a great source of passive income because investors earn income from the activities of another entity. The income stream pays a greater return than savings accounts and certificates of deposit. Right now, investors can safely get a 4 to 5% yield on very safe bonds. Even higher rates are available for investors with greater risk appetites. Bond ETFs and bond funds are an option for investors who would like access to the bond market without buying individual securities.
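To make the payment mechanics concrete, here is a small sketch of the semiannual coupon income from a block of corporate bonds (the numbers are illustrative only, not a recommendation):

```python
# Semiannual coupon income from a block of corporate bonds.
FACE_VALUE = 1_000        # corporate bonds are sold in $1,000 blocks
COUPON_RATE = 0.05        # 5% annual coupon, paid in two installments
BONDS_HELD = 10

semiannual_payment = FACE_VALUE * COUPON_RATE / 2 * BONDS_HELD
print(semiannual_payment)         # $250 every six months
print(semiannual_payment * 2)     # $500 per year until maturity
```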
0.8321
FineWeb
["Introduction to Bonds", "Types of Bonds", "Risk of Bond Investing"]
Historical event: 2 August 1944. On this day in 1944, the largest merchant ship convoy in World War II crossed the Atlantic Ocean and entered British waters. The convoy bore the designation HX-300 because it was the 300th convoy originating from the Canadian city of Halifax. It consisted of as many as 166 merchant ships, the largest of which had around 10,000 register tons. In World War II, the Allies used convoys because they could be better protected from submarine attacks than individual ships. The ships of convoy HX-300 were arranged in 19 parallel columns, making a formation around 14 km wide and 6.4 km long. HX-300 was escorted by around 30 warships, including anti-submarine warfare ships and minesweepers. The convoy wasn’t attacked and sailed safely into British waters.
0.5647
FineWeb
["World War II", "Convoy HX-300", "Merchant Ship Convoys"]
1. Design and Technology encourages pupils to consider design problems (usually the problems other people face).
2. Pupils develop a range of practical skills associated with modern industry.
3. Pupils develop an understanding of drawing as a method of communication.
4. Working as a team to solve design problems and to manufacture is key to success not only in Design and Technology but also in industry, business and commerce.
5. Pupils develop an understanding of aesthetics and its role in the design of everyday items and architecture.
6. Pupils learn about functionality in design.
7. Pupils develop practical skills that aid them in everyday life.
8. Pupils learn to consider people with individual needs.
9. Research introduces pupils to the technology of other cultures from an historical and modern perspective.
10. Ecology and the environment are serious considerations for any design and technology student.
11. Pupils learn the importance of economics when costing projects.
12. Consideration is given to the role of designers in history and the modern world.
13. The design process is central to project work as a method of problem solving.
14. Pupils develop communication skills through designing and group work.
15. Design and Technology provides a constructive channel for a child’s creative needs.
16. Design and Technology directly supports manufacturing industry by providing this sector of the economy with capable technologists.
17. Design and Technology provides a framework for learning and formulating ideas.
0.5593
FineWeb
["Design Principles", "Design Process", "Design Skills"]
Our school carries on the tradition of the Abbie Loveland Tuller School both in location and in its approach to education, a legacy spanning over 50 years. In the spirit of the Tuller method, The ABBIE School has adopted a highly individual approach to the College and Career Ready Standards, utilizing EngageNY, coupled with a personalized approach to each scholar's learning that can only be achieved in the intimate and carefully crafted environment that ABBIE creates for each child.
We Can Help
The ABBIE School fosters wonder, study, and discovery in an atmosphere of inclusion, respect and kindness. Children with individual learning challenges as well as accelerated and twice-exceptional students excel in our unique family environment. Our bully-free and gentle culture allows your child a safe place to be him or herself. Watch our videos and learn why families choose ABBIE!
0.9562
FineWeb
["Our School Tradition", "Personalized Education Approach", "Inclusive Learning Environment"]
No ETA at the moment on a release, but I am working on a larger, more stable update to the game. Just a snippet of what I am working on:
- Weapon quality that will determine the stats on each part, such as weapon damage.
- A physical shop where you can purchase parts and resources for cheap.
- The smelter will be accessible in Shop mode, along with mining.
- Difficulty settings.
- A cave to explore.
...and much more.
0.8049
FineWeb
["Game Updates", "New Features", "Future Releases"]
U.S. Department of Housing and Urban Development, Office of Policy Development and Research. NAHB Research Center, Upper Marlboro, MD. Instrument No. DU100K000005897.
The U.S. Government does not endorse products or manufacturers. Trade or manufacturers' names appear herein solely because they are considered essential to the object of this report. The contents of this report are the views of the contractor and do not necessarily reflect the views or policies of the U.S. Department of Housing and Urban Development or the U.S. Government.
This report was prepared by the NAHB Research Center for the U.S. Department of Housing and Urban Development. Special appreciation is extended to NAHB members Bill Eich, Gerald Eid, Tim Duff, Jess Hall and their staffs, for constructing the five demonstration homes; Richard Morris, National Association of Home Builders, for initiating the technology transfer into the United States; Peter Steurer, National Oceanic and Atmospheric Administration, for providing and verifying proper methods of estimating long-term air-freezing indexes; Dow Chemical Company, Mid-America Molders, U.C. Industries, Inc., and Amoco Foam Products, for donation of insulation; and William Freeborne, U.S. Department of Housing and Urban Development, for guidance and review throughout the demonstration, testing, and development of this design guide for frost protection of shallow foundations. In addition, the NAHB Research Center gives special recognition to The Society of the Plastics Industry, Inc., for several years of support that has helped to re-introduce this technology to the United States. Their sponsorship has promoted activities such as technology transfer from Europe, finite element computer modelling of heat transfer through foundations, development of design procedures for frost protection of foundations, formulation of important U.S. climate data for use in the design procedure, and initiating an interest in this technology at the level of the major U.S. building codes. These previous activities have contributed tremendously to this design document and development project.
The NAHB Research Center staff responsible for this document are: Jay H. Crandell, P.E., principal investigator; Eric M. Lund, technical assistance; Mike G. Bruen, P.E., technical assistance; Mark S. Nowak, reviewer.
Contents: List of Tables and Figures; FPSF Design Procedure; Recommended Construction Methods and Details; Common Questions and Answers About FPSFs; FPSF Design Examples; Appendix: Design Data.
Tables: Table 1. Classification of Building Based on Indoor Air Temperature; Table 2. Minimum Insulation Requirements for Frost Protected Footings With Heated Buildings.
Figures: Figure 1. Schematic of FPSF and Conventional Foundation Systems; Figure 2. Frost Penetration into the Ground Under Various Conditions; Figure 3. FPSF Heat Flow Diagram for a Heated Building with Optional Floor Insulation; Figure 4. FPSF Design Parameters for Heated Buildings Using the Simplified Method; Figure 5. FPSF Design Parameters for Heated Buildings; Figure 6. FPSF Design Parameters for Unheated Buildings; Figure 7. Illustrations of Cold Bridges; Figure 8. Typical FPSF Monolithic Slab in Mild Climate; Figure 9. Typical FPSF Monolithic Slab with Horizontal Wing Insulation for Colder Climates; Figure 10. Independent Block Stem Wall with Insulation on Exterior Face of Wall; Figure 11.
Independent Stem Wall with Insulation on Exterior Face of Wall; Figure 12. Typical Permanent Wood FPSF; Figure 13. FPSF Design for Unheated Space with Independent Slab and Stem Wall; Figure 14. FPSF Design for Unheated Buildings With Insulation in Single Plane; Figure 15. Insulation Detail for Small Unheated Portion in an Otherwise Heated Building; Figure 16. Insulation Detail for a Large Unheated Area (e.g., Attached Garage); Figure 17. Design Detail for Example A; Figure 18. Design Detail for Example B; Figure 19. Design Detail for Example C.
A frost protected shallow foundation (FPSF) is a practical alternative to deeper, more costly foundations in cold regions with seasonal ground freezing and the potential for frost heave. Figure 1 shows an FPSF and a conventional foundation. An FPSF incorporates strategically placed insulation to raise the frost depth around a building, thereby allowing foundation depths as shallow as 16 inches, even in the most severe climates. The most extensive use has been in the Nordic countries, where over one million FPSF homes have been constructed successfully over the last 40 years. The FPSF is considered standard practice for residential buildings in Scandinavia. The objective of this design guide is to assist U.S. builders, designers, code officials, and others in employing the technology. In northern U.S. climates, builders mitigate the effects of frost heave by constructing homes with basements, slabs, or crawlspaces with perimeter footings below the frost line. Other construction methods include:
- Piles or caissons extending below the frost line
- Mat or reinforced structural slab foundations to resist heave
- Non-frost-susceptible fills and drainage
- Adjustable foundation supports
The FPSF allows builders to construct a structurally sound foundation at a lower cost than is associated with these practices. Although this document is limited to slab foundations, the technology may also be used with crawlspace construction when ventilation is properly controlled. In addition to substantial initial cost savings, FPSFs provide an opportunity for increased energy savings because of their insulation requirements. These minimum insulation requirements for frost protection generally exceed existing energy code requirements (i.e., the CABO Model Energy Code) for foundation insulation. The frost protected shallow foundation technology recognizes the thermal interaction of building foundations with the ground. Heat input to the ground from buildings effectively raises the frost depth at the perimeter of the foundation. This effect and other conditions that regulate frost penetration into the ground are illustrated in Figure 2. It is important to note that the frost line rises near a foundation if the building is heated. This effect is magnified when insulation is strategically placed around the foundation. The FPSF also works on an unheated building by conserving geothermal heat below the building. Unheated areas of homes, such as garages, may be constructed in this manner. Figure 3 illustrates the heat exchange process in an FPSF, which results in a higher frost depth around the building. The insulation around the foundation perimeter conserves and redirects heat loss through the slab toward the soil below the foundation. Geothermal heat from the underlying ground also helps to raise the frost depth around the building. FPSFs are most suitable for slab-on-grade homes on sites with moderate to low sloping grades.
The method may, however, be used effectively with walk-out basements by insulating the foundation on the downhill side of the house, thus eliminating the need for a stepped footing. FPSFs are also useful for remodeling projects, in part because they minimize site disturbance. In addition to residential, commercial, and agricultural buildings, the technology has been applied to highways, dams, underground utilities, railroads, and earth embankments. FPSFs have seen notable technological development and increased application in Europe over the past 40 years. The most practical advancements in understanding and application have come from the Nordic countries. In the United States, slab-on-grade houses were constructed around the turn of the century in cold climates near Chicago. During the Depression, Frank Lloyd Wright designed and built a type of FPSF to meet affordability needs and used FPSF techniques in his "Usonian" style homes with shallow slab-on-grade foundations. In the 1950s, Swedish and Norwegian researchers constructed the first experimental houses using insulated shallow foundations. These demonstration sites provided practical experience and empirical data on the FPSF technology. By 1972, nearly 50,000 slab-on-grade foundations had been built in Sweden, and the FPSF technology had gained wide acceptance. The Norwegians also recognized the need to address design recommendations for unheated portions of buildings such as air-lock entries and exterior stairways. In the 1970s, the Scandinavian nations consolidated their research efforts to advance the FPSF technology. In 1972, the Royal Norwegian Council for Scientific and Industrial Research appropriated the equivalent of $10 million for the compilation and advancement of the Scandinavian work to date. The effort led to the 1976 publication Frost I Jord ("Frost Action in the Ground"). Scandinavian engineers consider Frost I Jord a reliable guide for design against frost action in soils. Based on the results of the Frost I Jord project, the Norwegian Building Research Institute started publication in 1978 of "Building Details" related to FPSF design and construction. In the United States, FPSF technology has been used in engineered structures and is common practice for residential construction on crawlspace foundations in areas of Alaska. However, the major model building codes do not specifically recognize the FPSF's equivalence to footings placed below a prescribed frost depth. Recent amendments to the CABO One- and Two-Family Dwelling Code and the BOCA National Building Code recognize performance-based criteria for frost protection but do not specifically mention FPSFs. Consequently, widespread use of the technology in residential buildings has not yet occurred. Efforts are underway to obtain U.S. building code approval. This document, based on the European standards, presents an FPSF design procedure for slab-on-grade buildings, demonstrates the procedure with design examples, answers common questions about FPSFs, and recommends specific construction methods and details. Additionally, a simplified design method is presented in a form suitable for adoption by the major model building codes. This procedure addresses the design of frost-protected shallow foundations, which use insulation to prevent frost heave damage in cold climates. It is specific to slab-on-grade foundations for residential homes but may also be used for commercial and agricultural structures.
This design procedure does not apply to buildings on permafrost, to areas with mean annual temperatures less than 32°F (0°C), or to crawlspace construction.

The proper specification of insulation products is paramount to the success of an FPSF application. Few insulation products are able to maintain a dry R-value in a moist, below-ground environment over any great length of time. Insulation products specified for an FPSF must be rated with an effective R-value that can be maintained in such an environment for the expected life of the structure. Because some insulation materials resist water absorption less effectively than others, which in turn degrades their thermal resistance (R-value), insulation material should be specified carefully. The reader is directed to the references listed in the bibliography for additional information on this issue. Polystyrene insulation for below-ground, frost-protection applications in the U.S. must comply with the only available U.S. standard, ASTM C 578-92 for Rigid, Cellular Polystyrene Thermal Insulation. Although this standard does not provide for adjustment of thermal resistance in potentially moist, below-ground conditions, appropriate adjustment factors for EPS and XPS have been determined based on international knowledge and experience. The reader is referred to manufacturers for product-specific information.

This design procedure specifies insulation and foundation depths which ensure protection against frost heave damage for all types of soil. The procedure is conservative in that it assumes a 100-year return period winter and no insulating ground vegetation or snow cover. The designer assumes responsibility for compliance with all local building and energy codes.

This document addresses heated, unheated, and semi-heated structures, classified by the expected average indoor monthly temperature, T, of the building from Table 1.

Table 1. Classification of building based on indoor air temperature, T
- Heated: T above 63°F (17°C)
- Semi-heated: T between 41°F (5°C) and 63°F (17°C)
- Unheated: T below 41°F (5°C)

This design guide contains two approaches: a simplified design and a detailed design. The simplified method streamlines the design process of FPSFs for heated buildings. The procedure is in a form suitable for adoption by the major model building codes and is intended to promote widespread acceptance and use of the technology. In consolidating the design steps for the simplified method, R-values for the vertical insulation were established so that performance under various conditions, including slab surface temperatures, is conservatively accommodated. More economical construction may therefore be obtained by following the detailed design procedure. The detailed design procedure must be used when buildings include unheated areas such as attached garages.

SIMPLIFIED FPSF DESIGN METHOD

To use the simplified approach, the air-freezing index (AFI) for the site location must be known. An AFI contour map is provided in the Appendix to assist in establishing this value. The insulation R-value and dimensions, and the depth of the footings, are then determined from Table 2 and Figure 4. Horizontal insulation shall be bedded firmly on smooth ground or a granular base. When foundation depths greater than 12 inches are required by Table 2, the increase in depth may be satisfied by substituting compacted gravel, crushed rock, sand, or approved non-frost-susceptible materials.

Table 2. Minimum insulation requirements

Notes to Table 2:
1. Insulation requirements are for protection against frost damage in heated buildings. Greater values may be required to meet energy conservation standards. Interpolation between values is permissible.
2. See the Appendix for Air-Freezing Index values.
3. Insulation materials shall provide the stated minimum R-values under long-term exposure to moist, below-ground conditions in freezing climates. The following R-values shall be used to determine insulation thicknesses required for this application: Type II expanded polystyrene, R-2.4 per inch; Types IV, V, VI, and VII extruded polystyrene, R-4.5 per inch; Type IX expanded polystyrene, R-3.2 per inch. NR indicates that insulation is not required.
4. Vertical insulation shall be expanded polystyrene (EPS) or extruded polystyrene (XPS) insulation.
5. Horizontal insulation shall be extruded polystyrene insulation.
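For readers who want to automate the lookup, here is a minimal Python sketch of the simplified-method flow, assuming you have transcribed Table 2 yourself. The single row shown is a hypothetical placeholder for illustration, not a value from Table 2; replace it with the actual entries.

```python
# Simplified FPSF method: find the first Table 2 row whose AFI band covers
# the site, and read off the footing depth and insulation R-values.
# NOTE: the row below is a HYPOTHETICAL placeholder, not data from Table 2.

TABLE_2 = {
    # max AFI (degF-days): (min footing depth in, vertical R, horizontal R or None)
    1500: (12, 4.5, None),  # placeholder row for illustration only
}

def simplified_requirements(afi: float, table=TABLE_2):
    for max_afi in sorted(table):
        if afi <= max_afi:
            depth_in, r_vert, r_horiz = table[max_afi]
            return {"footing_depth_in": depth_in,
                    "vertical_R": r_vert,      # EPS or XPS (note 4 above)
                    "horizontal_R": r_horiz}   # XPS only (note 5 above)
    raise ValueError("AFI beyond Table 2; use the detailed design method")

print(simplified_requirements(1200))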
DETAILED METHOD FOR HEATED BUILDINGS

In practice, many different combinations of vertical and horizontal insulation details, R-values, and footing depths can be used in an FPSF. The detailed design approach is flexible, allowing the designer to draw on experience and select the preferred method of construction for a given site. For example, the designer may opt to provide vertical wall insulation only, wing insulation only at the corners, or wing insulation around the entire building. The designer also has the flexibility to step the footing to increase foundation depths, add wing insulation to reduce required foundation depths, or select the width of wing insulation when meeting the minimum requirements in the design process. Figure 5 illustrates the variables for FPSF design. The Appendix contains figures and tables for determining and selecting the detailed design variables. The following steps outline the detailed design approach for heated buildings.

Step 1: Determine the Site's Design Air-Freezing Index, Fn

Select the 100-year return period design air-freezing index, F100, from Figure A1 or Table A3. This information was prepared by the National Oceanic and Atmospheric Administration's National Climatic Data Center specifically for use in FPSF design. The Fn values are conservative because they are not adjusted for the insulating benefit of a normal snow cover on the ground. A lower return period value may be used for less important structures or those that are resilient to infrequent ground freezing. See Table 3 for Fn values at return periods of less than 100 years.

Step 2: Calculate the R-value of the Floor Slab Cross Section, Rf

Calculate the thermal resistance of the design floor slab, Rf, considering all insulating materials in the cross section, including any floor coverings. When determining Rf, dry-condition R-values, as presented in Table A2, shall be used for all materials, including insulation. If the floor cross section, and hence the thermal resistance of the floor slab, varies over its area, calculate Rf as the average over the outermost 39 inches (1 m) of the floor perimeter. In superinsulated slabs, where the calculated Rf exceeds R-28 (5.0 m²·°C/W), the designer must follow the design procedure for unheated buildings, since the heat from the building is substantially blocked from moving into the ground and protecting the foundation.

Step 3: Determine the Required R-value of the Vertical Wall Insulation, Rv

Determine the minimum required thermal resistance of the vertical wall insulation, Rv, from Table A4, given h from Figure 5, Fn from Step 1, and Rf from Step 2.
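Step 2 is a straight sum of layer resistances. As a minimal sketch, here is the computation for the carpet-over-4-inch-slab cross section used in design example A later in this guide (per-layer values as quoted in that example, from Table A2):

```python
# Rf = sum of the thermal resistances of every layer in the floor section.
layers = [
    ("carpet with fibrous pad", 2.08),   # lump R-value from Table A2
    ("4-inch concrete slab", 0.05 * 4),  # concrete at R-0.05 per inch
]
r_f = sum(r for _, r in layers)
print(f"Rf = {r_f:.2f}")  # -> Rf = 2.28, matching design example A

# Superinsulated-slab cutoff from Step 2: above R-28, building heat cannot
# reach the ground, so the unheated-building procedure must be used instead.
assert r_f <= 28
```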
Step 4: Select the Thickness of the Vertical Wall Insulation

Based on the required Rv value from Step 3, select an adequate thickness of vertical XPS or EPS insulation using the following effective resistivities: Type II EPS, R-2.4 per inch; Type IX EPS, R-3.2 per inch; and Types IV, V, VI, and VII XPS, R-4.5 per inch. Common nominal thicknesses are 1", 1-1/2", 2", and 3". The insulation shall extend from the bottom of the footing to the exterior wall envelope, as shown in Figure 5.

Step 5: Select the Foundation Depth or Horizontal Wing Insulation for the Walls

Horizontal wing insulation is placed below ground, extending outward from the vertical wall insulation as shown in Figure 5. For climates where Fn is less than 2,250 °F-days, wing insulation along the walls is not required and the designer may proceed to Step 7. In more severe climates, where horizontal wing insulation is not desired, select the minimum foundation depth, Hf, from Table A5. When the designer wishes to limit the foundation depth to 16 inches in severe climates (Fn > 2,250 °F-days), select the minimum width the wing must extend from the foundation wall, Dhw, and the required minimum thermal resistance of the insulation, Rhw, from Table A6.

Step 6: Select the Thickness of the Horizontal Wing Insulation for the Walls

Based on the required Rhw value determined in Step 5, select an adequate thickness of XPS wing insulation using an effective resistivity of R-4.5 per inch. Wing insulation must have at least 10 inches of ground cover and must meet the vertical wall insulation flush.

Step 7: Select the Foundation Depth or Horizontal Wing Insulation at the Corners

Since more heat loss occurs at building corners than through mid-wall sections of heated buildings, additional frost protection in the form of horizontal wing insulation or a deeper foundation is required in more severe climates (Fn > 2,250 °F-days). Where horizontal wing insulation is not desired, in any climate, select the minimum foundation depth at corners, Hfc, from Table A5. For a uniform foundation depth, Hf = Hfc, use Table A5 with R-5.7 wing insulation applied to the corner regions. When the designer wishes to limit the foundation depth at corners to 16 inches, select the minimum width the wing must extend from the foundation wall, Dhc, and the required minimum thermal resistance of the insulation, Rhc, from Table A7. Based on the required Rhc, select an appropriate thickness of XPS wing insulation using an effective resistivity of R-4.5 per inch. The minimum distance the corner protection must extend from the corners, Lc, is also determined from Table A7.
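Steps 4 and 6 both reduce to the same arithmetic: divide the required R-value by the material's effective resistivity and round up to a stock board thickness. A minimal sketch, using only the resistivities and nominal thicknesses stated above (the demo values are the Rv figures from design examples A and B, presented later in this guide):

```python
# Pick the smallest common nominal board thickness meeting a required R-value.
R_PER_INCH = {"EPS Type II": 2.4, "EPS Type IX": 3.2, "XPS": 4.5}
NOMINAL_IN = [1.0, 1.5, 2.0, 3.0]  # common nominal thicknesses, inches

def board_thickness(required_r: float, material: str) -> float:
    minimum = required_r / R_PER_INCH[material]
    # In severe climates, boards may need to be layered if no single
    # stock thickness suffices; this sketch only covers single boards.
    return next(t for t in NOMINAL_IN if t >= minimum)

print(board_thickness(4.5, "XPS"))          # example A: 4.5/4.5 = 1.0  -> 1.0 in
print(board_thickness(4.5, "EPS Type II"))  # example A: 4.5/2.4 = 1.9  -> 2.0 in
print(board_thickness(5.7, "XPS"))          # example B: 5.7/4.5 = 1.27 -> 1.5 in
```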
DETAILED METHOD FOR UNHEATED BUILDINGS

In following the detailed design procedure for unheated buildings, the designer has the flexibility to increase foundation depths to reduce ground insulation requirements. Figure 6 illustrates the variables for FPSF design of unheated buildings. The Appendix contains figures and tables for determining and selecting the design variables. The following steps outline the detailed design approach for unheated buildings.

Step 1: Determine the Air-Freezing Index, Fn, and the Mean Annual Temperature, MAT

Select the 100-year return period design air-freezing index, F100, from Figure A1 or Table A3. This information was prepared by the National Oceanic and Atmospheric Administration's National Climatic Data Center specifically for use in FPSF design. The Fn values are conservative because they are not adjusted for the insulating benefit of a normal snow cover on the ground. A lower return period value may be used for less important structures or those that are resilient to infrequent ground freezing, such as detached garages. See Table A3 for Fn values at return periods of less than 100 years.

Step 2: Select the Placement of the Ground Insulation

A continuous ground insulation layer on a 6-inch gravel (or other non-frost-susceptible) base must be placed below the entire foundation of an unheated building. The ground insulation must extend outside the foundation by a minimum width, Dg, determined from Table A8 and shown in Figure A3. In unheated building foundations, Dg is the same at both corner and wall locations. Outside the foundation perimeter, the insulation must have a minimum of 10 inches of soil cover. Dg may be reduced by 1 inch for every inch the insulation is buried beyond the 10-inch minimum cover.

Step 3: Select the Minimum Effective R-value of the Ground Insulation, Rg

Select the minimum required R-value, Rg, of the ground insulating layer from Table A8, based on Fn and MAT from Step 1. Rg may be reduced by R-0.3 for every 1-inch increase in the underlying non-frost-susceptible layer beyond the 6-inch minimum thickness. Rg may also be reduced by R-0.25 for every 1-inch increase in soil cover, above the 10-inch minimum, over the ground insulation.

Step 4: Select the Thickness of the Ground Insulation

Based on the required Rg value determined in Step 3, select an adequate thickness of XPS ground insulation assuming an effective resistivity of R-4.5 per inch. Recommended nominal thicknesses of XPS are 1", 1-1/2", 2", and 3". In severe climates, insulation may need to be layered to meet the required thickness.

Small Unheated Areas in Otherwise Heated Buildings

Where small unheated perimeter parts (as defined in Figure A4 of the Appendix) of an otherwise heated building are encountered, follow the design procedure for heated buildings and address the small unheated part as follows:

- Continue the vertical wall insulation of the heated part of the building along the exterior face of the small unheated part.
- Consider the small unheated area a corner location and provide protection, with wing insulation or increased foundation depth if desired, according to Step 7 of the design procedure for heated buildings.
- Provide ground insulation, as required for unheated buildings, under the small unheated area.

Large Unheated Areas in Heated Buildings

When an unheated building area does not meet the conditions for a small building part, it is considered a large unheated area. This situation is commonly encountered in homes with unconditioned attached garages. When large unheated areas are encountered, regard the heated and unheated sections as separate buildings and design the foundations accordingly.

If the anticipated operating conditions of the building are such that the lowest average internal monthly temperature falls between 41°F (5°C) and 63°F (17°C), the building is considered semi-heated. In this case, design the foundation as for a heated building and increase the minimum required foundation depth by 8 inches in both wall and corner areas.

RECOMMENDED CONSTRUCTION METHODS AND DETAILS

The construction of an FPSF is similar to conventional foundation construction except for the insulation details. Many of the steps, such as rough grading, foundation layout, preparation of the subgrade, wall forming, steel reinforcing, and casting of the concrete, are similar to conventional practice.
An FPSF can be constructed using one of many approaches, including a monolithic slab-on-grade, an independent slab and stem wall, or a permanent wood foundation. Although the details shown here illustrate techniques for the construction of new homes, the FPSF technique can also be applied to additions to homes with existing conventional foundations, and even to walk-out basements. Each of these options is explained in this section. The FPSF construction details shown here reflect satisfactory practices for several typical situations. The drawings are generic in the sense that they must be modified to meet requirements specific to the site, such as insulation dimensions. Of key importance is the proper placement and sizing of the insulation. Regardless of the option chosen, a few issues apply to the construction of any FPSF:

* Cold bridges. Cold bridges are created when building materials with high thermal conductivity, such as concrete, are directly exposed to outside temperatures (see Figure 7). Foundation insulation should be placed so that continuity is maintained with the insulation of the house envelope. Cold bridges may increase the potential for frost heave or, at the least, create localized lower temperatures or condensation on the slab surface. Care must be taken during construction to ensure proper installation of the insulation.

* Drainage. Good drainage is important with any foundation, and the FPSF is no exception. Insulation performs better in drier soil conditions. Ensure that ground insulation is adequately protected from excessive moisture through sound drainage practices, such as sloping the grade away from the building. Insulation should always be placed above the level of the ground water table. A layer of gravel, sand, or similar material is recommended for improved drainage as well as to provide a smooth surface for placement of any horizontal wing insulation. A minimum 6-inch drain layer is required for unheated FPSF designs. Beyond the 12-inch minimum foundation depth required by building codes, the additional foundation depth required by an FPSF design may be made up of compacted, non-frost-susceptible fill material such as gravel, sand, or crushed rock.

* Slab surface temperatures (moisture, comfort, and energy efficiency). The minimum insulation levels prescribed in this design procedure protect the foundation soil from frost. They also provide satisfactory slab surface temperatures to prevent moisture condensation and satisfy a minimum degree of thermal comfort. Since the design procedure provides minimum insulation requirements, the foundation insulation may be increased to meet special needs concerning these issues and energy efficiency. Successfully limiting cold bridging is critical; use of the stem wall and slab technique (described later) in effect adds a second thermal break between the slab and stem wall. Increasing the vertical wall insulation thickness above the minimum requirements for frost protection will also improve energy efficiency and thermal comfort. A finish floor material such as carpeting decreases the surface contact between the occupant and the slab, giving a warmer feel.

* Heated slabs and energy efficiency. This design procedure can be applied to all slab-on-grade techniques, including those with in-slab heat, which provide excellent thermal comfort. If an in-slab heating system is used, additional insulation below the slab and around the perimeter is recommended for improved energy efficiency.

* Protecting the insulation.
Because the vertical wall insulation around a foundation extends above grade, where it is subject to ultraviolet radiation and physical abuse, that portion must be protected with a coating or covering that is both tough and durable. Methods to consider include a stucco finish system or similar brush-on coatings, pre-coated insulation products, flashings, and pressure-treated plywood. The builder should always verify that such materials are compatible with the insulation board. The protective finish should be applied before backfilling, since it must extend at least 4 inches below grade. Note also that polystyrene insulation is easily broken down by hydrocarbon solvents such as gasoline, benzene, diesel fuel, and tar. Care should be taken to prevent insulation damage during handling, storage, and backfilling. Where termites are a concern, standard preventive practices, such as soil treatment and termite shields, are suggested.

* Insulation specifications. Because some insulation materials resist water absorption less effectively than others, which in turn degrades their thermal resistance (R-values), insulation material should be specified carefully. The following effective R-values shall be used to determine the insulation thicknesses required for this application: Type II expanded polystyrene, R-2.4 per inch; Types IV, V, VI, and VII extruded polystyrene, R-4.5 per inch; Type IX expanded polystyrene, R-3.2 per inch. Special applications, such as bearing structural loads from footings, may require higher-density polystyrenes for the required compressive strengths. The builder is referred to manufacturers for product-specific information.

* Doorways and thresholds. At doorways where the threshold overhangs the vertical wall insulation, the insulation should be cut out as required to provide solid blocking for adequate bearing and fastening of the threshold. The size of the cut-outs should be minimized.

* Landscaping and wing insulation. In situations where wide horizontal wing insulation is required (e.g., widths greater than 3 to 4 feet), the wing may rule out large plantings close to the home. In some of these cases, using thicker wing insulation or increasing the foundation depth will decrease the required width of the wing insulation.

* Foundation height. Given that most polystyrene insulation boards are available in 24-inch and 48-inch widths, 24 inches becomes a practical height for many foundations. This provides 16 inches of foundation below grade and 8 inches above grade.

* Excavation. Generally, lightweight equipment is adequate for FPSFs because little excavation is required. As with any foundation, organic soil layers (topsoil) should be removed to allow the foundation to bear on firm soil or compacted fill.

* Construction scheduling. The foundation should be completed and the building enclosed and heated prior to freezing weather, as in conventional construction practice.

MONOLITHIC SLAB

Construction of an FPSF monolithic slab is very similar to that of a conventional slab (see Figures 8 and 9). The only step requiring additional consideration is whether to place the insulation before or after the pouring of the slab. Tacking the vertical insulation in place with nails after the pour is a simple technique, made even easier if it is done while the concrete is new, or "green". As an alternative installation technique, the vertical insulation boards may be tacked to the inside of the formwork before the concrete is poured.
This technique allows the insulation to adhere to the concrete (or, if nails are used, to fasten to it) and eliminates the need to secure the insulation to the wall later. If the insulation is installed in this manner, note that the foundation line is actually at the inside face of the insulation board, which may affect the overall dimensions used for framing. If both vertical and horizontal insulation are used, the vertical wall insulation should be placed first. The horizontal insulation, where used, is then placed directly on the subgrade, or on the gravel drainage layer, if provided. Measures should be taken to ensure a smooth bed for the horizontal insulation, and for sub-slab insulation, if used.

INDEPENDENT SLAB AND STEM WALL

As noted previously, the FPSF can also be adapted to foundations constructed with an independent stem wall and a ground-supported slab (see Figures 10, 11, and 12). The stem wall and slab technique has the same insulation and drainage requirements as a monolithic slab. If a separate footing is used, as may be required by local soil conditions or code requirements, the footing must be located below the insulation. In any case, any horizontal insulation should be a minimum of 10 inches below grade. The wall may be constructed of poured concrete, concrete masonry, wood, or other acceptable materials; some examples are shown below.

Additional measures are required for an unheated building. While a drainage layer is only recommended under wing insulation for heated buildings, a 6-inch drainage layer is required under unheated FPSF designs. Additionally, the horizontal ground insulation extends not only as a wing beyond the perimeter of the building, but also continues under the entire unheated portion of the building. This insulation layer can be installed either directly under the slab, as shown in Figure 13, or entirely at one level, as shown in Figure 14. In either case, the compressive load of the building on the insulation must be determined and compared to the compressive resistance of the foam (see the design examples). The horizontal insulation must have a minimum of 10 inches of soil cover.

UNHEATED AREAS IN OTHERWISE HEATED BUILDINGS

Many heated buildings have small portions of their footprint which are unheated, such as entries and porches, and these require special consideration (see Figure 15). The design for these cases is outlined in the steps given earlier in this guide. In the case of a home with an attached garage, a cold bridge is created between the two slabs, requiring insulation to break the bridge (see Figure 16).

Common Questions and Answers About Frost-Protected Shallow Foundations

A number of questions have been posed by building officials, builders, engineers, and others about frost-protected shallow foundations (FPSF). The most common questions and their answers are provided here for those interested in the technology.

Question No. 1: How does insulation stop frost heave from occurring?

Frost heave can only occur when all of the following three conditions are present: 1) the soil is frost susceptible (large silt fraction); 2) sufficient moisture is available (the soil is above approximately 80 percent saturation); and 3) sub-freezing temperatures are penetrating the soil. Removing any one of these factors will negate the possibility of frost damage. Insulation as required in this design guide will prevent the underlying soil from freezing (an inch of polystyrene insulation, R-4.5, has an R-value equivalent to about 4 feet of soil on average).

The use of insulation is particularly effective on a building foundation for several reasons. First, heat loss is minimized while heat is stored and directed into the foundation soil, not out through the vertical face of the foundation wall. Second, horizontal insulation projecting outward sheds moisture away from the foundation, further minimizing the risk of frost damage. Finally, because of the insulation, the frost line rises as it approaches the foundation. Since frost heave forces act perpendicular to the frost line, any heave forces present will act in a horizontal direction, not upwards.
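The all-three-conditions logic above is simple enough to state as a predicate. A minimal sketch (the 80 percent saturation threshold is the approximate figure quoted above):

```python
def frost_heave_possible(frost_susceptible_soil: bool,
                         saturation: float,
                         frost_penetrating_soil: bool) -> bool:
    """Frost heave requires ALL three conditions named in Question 1."""
    sufficient_moisture = saturation >= 0.80  # approx. 80% saturation
    return frost_susceptible_soil and sufficient_moisture and frost_penetrating_soil

# Removing any one factor negates the possibility of frost damage;
# an FPSF removes the third by keeping frost out of the foundation soil.
print(frost_heave_possible(True, 0.90, True))   # True  (all three present)
print(frost_heave_possible(True, 0.90, False))  # False (insulated foundation)
```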
Question No. 2: Does the soil type or ground cover (e.g., snow) affect the amount of insulation required?

By design, the proposed insulation requirements are based on the worst-case ground condition of no snow or organic cover on the soil. Likewise, the recommended insulation will effectively prevent freezing of all frost-susceptible soils. Because of the latent heat absorbed when soil water freezes (a phase change), increased amounts of soil water tend to moderate the frost penetration or temperature change of the soil-water mass; soil water also increases the heat capacity, or "thermal mass", of the soil, further increasing its resistance to freezing. Therefore, the proposed insulation requirements are based on a worst-case, silty soil condition with sufficient moisture to allow frost heave, but not so much that the soil itself drastically resists the penetration of the frost line. A coarse-grained (non-frost-susceptible) soil which is low in moisture will actually freeze faster and deeper, but with no potential for frost damage. Thus, the proposed insulation recommendations effectively mitigate frost heave for all soil types under varying moisture and surface conditions.

Question No. 3: How long will the insulation protect the foundation?

This question is very important when protecting homes or other structures which have a long life expectancy. The ability of insulation to perform in below-ground conditions depends on the product type, grade, and moisture resistance. In Europe, polystyrene insulation has been used to protect foundations for nearly 40 years without frost heave occurring. Thus, with proper adjustment of R-values for below-ground service conditions, both extruded polystyrene (XPS) and expanded polystyrene (EPS) can be used with assurance of performance. In the United States, XPS has been studied for Alaskan highway and pipeline projects, and it has been found that, after 20 years of service and at least 5 years of submergence in water, the XPS maintained its R-value (ref. McFadden and Bennett, Construction in Cold Regions: A Guide for Planners, Engineers, Contractors, and Managers, J. Wiley & Sons, Inc., 1991, pp. 328-329). For quality assurance, both XPS and EPS can be readily identified by labeling corresponding to current ASTM standards.

Question No. 4: What happens if the heating system fails for a time during the winter?

For all types of construction, heat loss through the floor of a building contributes to geothermal heat storage under the building, which during the winter is released at the foundation perimeter. Insulated footings effectively regulate this stored heat loss and retard penetration of the frost line during a period of heating system failure or setback.
Conventional foundations, which typically have less insulation, do not offer this level of protection, and frost may penetrate more quickly through the foundation wall and into interior areas below the floor slab. Because of ad-freezing (the frozen bond between the water in the soil and the foundation wall), frost does not need to penetrate below the footings to endanger light construction. In this sense, frost-protected footings are more effective in preventing frost damage. The proposed insulation requirements are based on highly accurate climate information verified by up to 86 years of winter freezing records for over 3,000 weather stations across the United States. The insulation is sized to prevent foundation soil freezing for a 100-year return period winter freezing event, with the particularly rigorous condition of no snow or ground cover. Even then, it is highly unlikely that such an event would coincide with no snow cover, sufficiently high ground moisture, and an extended loss of building heat.

Question No. 5: Why are greater amounts of insulation needed at the corners of the foundation?

Heat loss occurs outward from the foundation walls and is therefore intensified near an outside corner because of the combined heat loss from two adjacent wall surfaces. Consequently, to protect foundation corners from frost damage, greater amounts of insulation are required in the corner regions. An insulated footing design thus provides additional protection at the corners, where the risk of frost damage is higher.

Question No. 6: What experience has the U.S. had with this technology?

Frost-protected insulated footings were used as early as the 1930s by Frank Lloyd Wright in the Chicago area, but since that time the Europeans have taken the lead in applying the concept. There are now over 1 million homes in Norway, Sweden, and Finland with insulated shallow footings, which are recognized in the building codes as standard practice. In the United States, insulation has been used to prevent frost heave in many special engineering projects (e.g., highways, dams, pipelines, and engineered buildings). Its use on home foundations has been accepted by local codes in Alaska, and it has seen scattered use in uncoded areas of other states. There are likely several thousand homes with variations of frost-protected insulated footings in the United States (including Alaska). To verify the technology in the United States, five test homes were constructed in Vermont, Iowa, North Dakota, and Alaska. The homes were instrumented with automated data acquisition systems to monitor ground, foundation, slab, indoor, and outdoor temperatures at various locations around the foundations. The performance observed agreed with the European experience: the insulated footings prevented the foundation soil from freezing and heaving, even under rigorous climatic and soil conditions (ref. U.S. Department of Housing and Urban Development, "Frost Protected Shallow Foundations for Residential Construction", Washington, DC, 1993).

Question No. 7: How energy efficient and comfortable are slab foundations with frost-protected footings?

The insulation requirements for frost-protected footings are minimum requirements to prevent frost damage. They nevertheless provide a satisfactory level of energy efficiency, comfort, and protection against moisture condensation.
Since these requirements are minimums, additional insulation may be applied to meet special comfort objectives or more stringent energy codes.

FPSF DESIGN EXAMPLES

The following examples demonstrate the design process for FPSFs presented in this guide. Variations from the designs and details shown which meet the minimum requirements of this guide would also be acceptable.

EXAMPLE A

Site Information
- Approximate Location: Chicago, IL
- Building Type: 52 x 26 foot heated home
- Foundation Type: monolithic slab-on-grade
- Floor Finish: carpet with fibrous pad

The design of this FPSF follows the basic procedure for heated buildings.

Step A1: From Figure A1 for the Chicago area, F100 = 1,500 °F-days (Table A3 gives F100 = 1,433 °F-days).

Step A2: Rf for the floor slab cross section = Rf,carpet + Rf,concrete = 2.08 + (0.05 R/in x 4 in) = 2.28 R (see Table A2 for values).

Step A3: From Table A4 for h = 12" and Fn = 1,500 °F-days: Rv = 4.5 hr·ft²·°F/Btu.

Step A4: The minimum thickness of wall insulation is Rv/reff. For XPS, 4.5/4.5 = 1.0 inch minimum of XPS is required. For Type II EPS, 4.5/2.4 = 1.9 inches minimum is required; therefore, use 2.0 inches of Type II EPS.

Step A5: From Table A5 for Fn = 1,500 °F-days or less, a 12" foundation depth is acceptable and no wing insulation is required along the walls. The design may proceed to Step 7.

Step A7: From Table A5 for Fn = 1,500 °F-days or less, a 12" foundation depth at the corners is acceptable and no corner wing insulation is required.

EXAMPLE B

Site Information
- Approximate Location: Bismarck, ND
- Building Type: 42 x 26 foot heated home
- Foundation Type: 8-inch CMU stem wall and footing, with slab-on-grade
- Floor Finish: carpet with rubber pad
- Slab/Floor Insulation: 1-inch Type IX EPS (under slab), 4-foot width

The design of this FPSF follows the basic procedure for heated buildings.

Step B1: From Figure A1 for the Bismarck area, F100 = 3,700 °F-days (Table A3 gives F100 = 3,359 °F-days).

Step B2: Rf for the floor slab cross section = Rf,carpet + Rf,concrete + Rf,eps = 1.23 + (0.05 R/in x 4") + (4.2 R/in x 1") = 5.63 R (see Table A2 for values).

Step B3: From Table A4 for h = 12", Rf = 5.6, and Fn = 3,700 °F-days: Rv = 5.7 hr·ft²·°F/Btu.

Step B4: The minimum thickness of wall insulation = Rv/reff. For XPS, 5.7/4.5 = 1.3 inches minimum; therefore, use 1.5 inches of XPS insulation. For Type IX EPS, 5.7/3.2 = 1.8 inches minimum; therefore, use 2.0" of Type IX EPS vertical wall insulation.

Step B5: From Table A5 for Fn = 3,700 °F-days, a minimum foundation depth of Hf = 30" would avoid the use of wing insulation along the walls. For this example, however, the desired foundation depth is 16". From Table A6, select a wing width, Dhw, of 36" with a required thermal resistance, Rhw, of 7.7 hr·ft²·°F/Btu.

Step B6: The minimum thickness of XPS wing insulation = Rhw/reff = 7.7/4.5 = 1.7 inches minimum; therefore, use 2.0-inch-thick XPS wing insulation along the walls.

Step B7: From Table A5 for Fn = 3,700 °F-days, a minimum foundation depth of Hfc = 50" for a distance of Lc = 60" from the corners would avoid the use of wing insulation at the corners, but would require a stepped footing. Alternatively, from Table A5, using R-5.7 wing insulation with Dhc = 24" and Lc = 60" would allow a uniform foundation depth of 30" with no wing required along the mid-wall areas. For this example, the desired foundation depth is 16". From Table A7, by interpolation between values, select a wing width, Dhc, of 36" with a required thermal resistance, Rhc, of 11.7 hr·ft²·°F/Btu, extended for a distance, Lc, of 60" along the walls. The minimum thickness of XPS wing insulation = Rhc/reff = 11.7/4.5 = 2.6 inches minimum; therefore, use 3.0" of XPS wing insulation at the corner locations.
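The arithmetic in example B is easy to re-check mechanically. A small sketch (all values as quoted above):

```python
# Step B2: floor R-value = carpet/rubber pad + 4-in slab + 1-in EPS (R-4.2 dry).
rf = 1.23 + 0.05 * 4 + 4.2 * 1
print(rf)  # -> 5.63

# Minimum thickness = required R / effective resistivity; round up to stock sizes.
checks = [
    ("vertical wall, XPS",       5.7 / 4.5, 1.5),  # 1.27 -> use 1.5 in
    ("vertical wall, IX EPS",    5.7 / 3.2, 2.0),  # 1.78 -> use 2.0 in
    ("wall wing, XPS",           7.7 / 4.5, 2.0),  # 1.71 -> use 2.0 in
    ("corner wing, XPS",        11.7 / 4.5, 3.0),  # 2.60 -> use 3.0 in
]
for name, minimum_in, chosen_in in checks:
    assert chosen_in >= minimum_in
    print(f"{name}: need {minimum_in:.2f} in, use {chosen_in} in")
```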
EXAMPLE C

Site Information
- Approximate Location: Duluth, MN
- Building Type: 16 x 24 foot unheated garage
- Foundation Type: 8-inch-wide concrete stem wall, with slab-on-grade
- Unheated Areas: entire building

The design of this FPSF follows the basic procedure for unheated buildings.

Step C1: From Figures A1 and A2 for the Duluth area, F100 = 3,000 °F-days and MAT = 38°F.

Step C2: From Table A8 for Fn = 3,000 °F-days, Dg = 79 inches; therefore, extend the ground insulation 6'7" outside the building.

Step C3: From Table A8 for MAT = 38°F and Fn = 3,000 °F-days, Rg = 18.2 hr·ft²·°F/Btu. To reduce the amount of insulation required in Step 4, increase the minimum 6" non-frost-susceptible soil layer to 8" and design for Rg = 18.2 - 0.6 = 17.6 hr·ft²·°F/Btu.

Step C4: The minimum thickness of XPS ground insulation = Rg/reff = 17.6/4.5 = 3.9 inches minimum; therefore, use a 4" layer of XPS ground insulation. Because the foam in this case bears structural loads from the footings, a higher-density polystyrene is used for the required compressive strength. The compressive resistance of 1.6 pcf XPS (Type IV) per ASTM C 578-92 is 25 psi (3,600 psf). For greater bearing capacity, a denser type may be specified, such as 1.8 pcf XPS (Type VI) with a compressive resistance of 40 psi (5,760 psf). Using a safety factor of 3, the allowable bearing load is 5,760/3 = 1,920 psf. This bearing capacity is sufficient for this single-story, light-frame construction example, including normal dead, live, and snow loads. It is important to ensure that the insulation is placed firmly on a smooth layer of bedding material such as sand or gravel.
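Example C's adjustments and bearing check also reduce to a few lines of arithmetic. A minimal sketch reproducing them (the R-0.3-per-inch gravel trade-off is the rule from Step 3 of the unheated procedure):

```python
# Step C3: Table A8 gives Rg = 18.2; each extra inch of non-frost-susceptible
# base beyond the 6-inch minimum buys an R-0.3 reduction.
rg = 18.2 - 0.3 * (8 - 6)
print(rg)        # -> 17.6

# Step C4: thickness at an effective resistivity of R-4.5 per inch of XPS.
print(rg / 4.5)  # -> 3.91, so use a 4-inch layer

# Bearing check: compressive resistance in psi -> psf, with a safety factor of 3.
PSI_TO_PSF = 144  # 144 square inches per square foot
for label, psi in [("Type IV XPS (1.6 pcf)", 25), ("Type VI XPS (1.8 pcf)", 40)]:
    capacity_psf = psi * PSI_TO_PSF
    print(label, capacity_psf, "psf capacity; allowable:", capacity_psf / 3, "psf")
# Type VI: 5760 psf capacity -> 1920 psf allowable, matching the example.
```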
REFERENCES

American Society for Testing and Materials (ASTM). ASTM C 578-92, Standard Specification for Rigid, Cellular Polystyrene Thermal Insulation. Philadelphia, PA (1992).

American Society for Testing and Materials (ASTM). Moisture Control in Buildings, Chapter 4: Effects of Moisture on the Thermal Performance of Insulating Materials. ASTM Manual Series: MNL 18, Heinz R. Trechsel, editor. Philadelphia, PA (1994).

Building Officials and Code Administrators International, Inc. (BOCA). National Building Code. Country Club Hills, IL (1990).

Committee on Frost Action in Soils. Frost I Jord (Frost Action in Soil). Nr. 17, Oslo, Norway (November 1976); in Norwegian.

Comité Européen de Normalisation (CEN). Building Foundations: Protection against Frost Heave. Preliminary draft for proposed European Standard N185, CEN TC 89/WG5 (August 1992).

Council of American Building Officials (CABO). Model Energy Code. Falls Church, VA (1992).

Council of American Building Officials (CABO). One- and Two-Family Dwelling Code. Falls Church, VA (1992).

Crandell, Jay H., Peter M. Steurer, and William Freeborne. Demonstration, Analysis, and Development of Frost Protected Shallow Foundations and Freezing Index Climatography for Residential Construction Applications in the United States. Proceedings of the 7th International Cold Regions Engineering Specialty Conference, edited by D.W. Smith and D.C. Sego. Canadian Society for Civil Engineering, Montreal, Quebec (1994).

Farouki, Omar. European Foundation Designs for Seasonally Frozen Ground. U.S. Army Corps of Engineers, Cold Regions Research & Engineering Laboratory, Monograph 92-1, Hanover, NH (March 1992).

International Conference of Building Officials (ICBO). Uniform Building Code. Whittier, CA (1991).

Jones, C.W., D.G. Miedema, and J.S. Watkins. Frost Action in Soil Foundations and Control of Surface Structure Heaving. U.S. Department of the Interior, Bureau of Reclamation, Engineering Research Center, Denver, CO (1982).

Labs, Kenneth, et al. Building Foundation Design Handbook. Prepared for the Oak Ridge National Laboratory by the University of Minnesota Underground Space Center (May 1988); distributed by NTIS, Springfield, VA. (Note: this publication is also available in shortened form as the Builder's Foundation Handbook.)

Morris, Richard A. Frost-Protected Shallow Foundations: Current State-of-the-Art and Potential Application in the U.S. Prepared for the Society of the Plastics Industry, Inc. NAHB Research Center, Upper Marlboro, MD (August 1988).

Norwegian Building Research Institute. Frost-Protected Shallow Foundations for Houses and Other Heated Structures, Design Details. Forskningsveien 3b, Postboks 322, Blindern 0314, Oslo 3, Norway. Translated by the NAHB Research Center (January 1988).

Southern Building Code Congress International, Inc. (SBCCI). Standard Building Code. Birmingham, AL (1991).

Steurer, Peter M. and Jay H. Crandell. Comparison of the Methods Used to Create an Estimate of the Air-Freezing Index. National Oceanic and Atmospheric Administration, National Climatic Data Center, Asheville, NC (March 1993).

Steurer, Peter M. Methods Used to Create an Estimate of the 100-Year Return Period of the Air-Freezing Index. U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Climatic Data Center, Asheville, NC (1989); appendix of the SPI Phase II report.
Nine Stars DZT-24-1 6.3 Gallon Stainless Steel Infrared Trashcan

Features and Benefits:
- Made of commercial-grade stainless steel, so you never need to worry about rust or water damage
- Patented infrared sensor detects motion within ten inches, so you never have to touch a dirty trash can
- ABS plastic lid, the highest grade of plastic on the market, will not crack or change shape or color due to climate variations
- Every lid has been adjusted to close tightly, ensuring odors stay in the trash can
- Runs for more than 10,000 open-and-close cycles (up to a year of usage) on one set of batteries
- Fits anywhere in your bathroom, kitchen, or office
- 5-gallon capacity

Dimensions and Weight:
- Size: 14.5" x 10.5" x 18.25"
- Weight: 8.2 lbs

This appliance complies with the U.S. electrical standard (120 V, 60 Hz).
Information about registering a .holdings domain name

You can register .holdings domain names with us! Key details about .holdings registration:
- Price of registration: 20,000 HUF + VAT
- Minimum registration interval: 1 year
- Country associated with the extension: Holdings
- Domain registration rules: http://www.donuts.domains/about/policies/
- Registration limitation: none; everyone can register

This extension can be registered online in our automated domain registration system! Register your .holdings domain in real time!

The price includes the following services:
- .holdings domain name registration for 1 year
- Free DNS service (with a web-based real-time administration interface)
- Free forwarding (e.g. to free webspace)

Accepted payment methods:
- Online payment methods (credit card/PayPal/2CheckOut)
- Bank transfer
The relationship between Stokely and, um, Martin Luther King. Well, the, the relationship between Stokely and Martin was I think a very warm kind of relationship. Ah, we all knew that we had differences in terms of strategies and our tactics. I'll give you one example, that Martin believed in non-violence as a way of life. Ah, our concern about non-violence was only tactical. We used it when it became important for your survival to use non-violence. But we recognized that and we discussed those kinds of things and, ah, we could, we were able to resolve it. What happens in a lot of instances, is, is that the press began to use these differences even though they might have been completely minor to, ah, create rifts and try to break up the unity that existed within the, in the Civil Rights Movement. And I'm not saying that we didn't disagree tactically, organizationally, we did. But I think there was a personal, ah, relationship among SNCC people and SCLC people that was, that was very good and very healthy. Now, those relationships were strained at different times but we always managed to work our way through it. We had respect for some of the organizers. They had respect for many of our people based on our dedication and based on our commitment and based on our ability to go into those areas, those recalcitrant areas in Mississippi and fight for social justice and democracy for all.
Paraconsistent mathematics is a type of mathematics in which contradictions may be true. In such a system it is perfectly possible for a statement A and its negation not-A to both be true. How can this be, and be coherent? What does it all mean? And why should we think mathematics might actually be paraconsistent? We'll look at the last question first, starting with a quick trip into mathematical history.

Hilbert's programme and Gödel's theorem

David Hilbert, 1862-1943.

In the early 20th century, the mathematician David Hilbert proposed a project called Hilbert's programme: to ground all of mathematics on the basis of a small, elegant collection of self-evident truths, or axioms. Using the rules of logical inference, one should be able to prove all true statements in mathematics directly from these axioms. The resulting theory should be sound (only prove those statements that really are true), consistent (free from contradictions) and complete (able to either prove or disprove any statement). One should also be able to recognise that the axioms are sound by finitary means — that is, minds limited to finitely many inferences, such as human minds, should be able to recognise the axioms as sound.

However, Kurt Gödel famously proved that this was impossible, at least in the sense that mathematicians of the time had in mind. His first incompleteness theorem, loosely stated, says that any consistent formal theory capable of expressing basic arithmetic contains true statements which it cannot prove.

As an example, consider a formal theory T, that is, a system of mathematics based on a collection of axioms. Now consider the following statement G:

G: G cannot be proved in the theory T.

If this statement is true, then there is at least one unprovable sentence in T (namely G), making T incomplete. On the other hand, if the sentence G can be proved in T, we reach a contradiction: G is provable, but, by virtue of its content, can also not be proven. There is a dichotomy: we must choose between incompleteness and inconsistency. Gödel showed that a sentence such as G can be created in any theory sophisticated enough to perform arithmetic. Because of this, mathematics must be either incomplete or inconsistent. (See the Plus article Gödel and the limits of logic for more on this.)

Classically-minded scholars accept that mathematics must be incomplete, rather than inconsistent. In line with common intuitions they find contradictions, and inconsistency, abhorrent. However, it is important to note that accepting a small selection of contradictions need not commit you to a system full to the brim with contradictions. We shall explore this idea further shortly. For now, let's turn to a couple of cases where a paraconsistent position can provide a more elegant solution than the classical one: the paradoxes of Russell and the liar.

Russell's paradox

During his work attempting to establish the logical foundations of mathematics, Bertrand Russell discovered a paradox, now eponymously known as Russell's paradox. It concerns mathematical sets, which are just collections of objects. A set can contain other sets as its members; consider for example the set made up of the set of all triangles and the set of all squares. A set can even be a member of itself. An example is the set T containing all things that are not triangles: since T is not a triangle, it contains itself. Russell's paradox reads as follows: consider the set R of all sets that are not members of themselves. Is R a member of itself?

Bertrand Russell, 1872-1970.

To be a member of itself, R is required not to be a member of itself. Thus if R is in R, then R is not in R, and vice versa. It looks like a fairly serious problem.
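In symbols (this is the standard textbook formalisation, not notation from the article itself), the set R is defined using the abstraction principle discussed below, and instantiating its membership condition with R itself yields the contradiction:

```latex
R = \{\, x \mid x \notin x \,\}
\qquad\Longrightarrow\qquad
R \in R \iff R \notin R
```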
So-called naive set theory is not equipped to deal with such a paradox. Classical mathematics is forced to endorse a much more complicated version of set theory to avoid it. We will look at the classical response, and then a paraconsistent approach. But first, what is naive set theory? It is founded on two principles:

- The principle of abstraction, which states (roughly) that given any property, there is a set collecting together all things which satisfy that property. For example, "being black" is a property, so there is a set consisting of all black things.
- The principle of extensionality, which states that two sets are the same if and only if their members are the same.

These principles capture an intuitive understanding of what sets are and how they work. However, to avoid contradictions and paradoxes, classical mathematicians regularly adopt a more complicated stance, accepting a more complex version of set theory called Zermelo-Fraenkel set theory (ZF). It discards the principle of abstraction and replaces it with around eight more involved axioms. These axioms change the way one is able to create a set: in general, to create a set in ZF one uses pre-existing sets to make more. Certain sets, such as the empty set, exist without needing to be constructed. The collection of sets that can be formed by building in this way is referred to as the cumulative hierarchy, or the Von Neumann universe. The sets built in ZF are given a rank based on how many times the set-building rules have been used to create them: the empty set is rank 0, those built directly from the empty set are rank 1, and so forth.

In ZF, Russell's set cannot exist, and thus Russell's paradox is avoided. Sets are built from the bottom up; you first need to have hold of a set before you can include it in another set. To create the Russell set, the Russell set itself is required, so building it using the axioms of ZF is impossible. Couching this in terms of rank, Russell's set would need to be of some rank, n, but also of rank n+1 (and n+2 and n+3 and so forth), because to be created it needs to be of a higher rank than itself. As this is not possible, the Russell set cannot be built.

ZF avoids Russell's paradox, but at a cost. Instead of a set theory based on two simple premises, we are left with a much more complicated system. Complicated does not imply incorrect; however, in this case it is difficult to motivate the array of different axioms which are needed for ZF. One can accuse the axioms of being ad hoc: used to avoid a particular problem rather than for a coherent, systemic reason. Moreover, ZF is an unwieldy system. Using a similarly complicated system, Russell and Whitehead needed 379 pages of work to prove that 1+1=2 in their Principia Mathematica, published in 1910. Because of this, most mathematicians use something akin to naive set theory in their informal arguments, though they probably wouldn't admit it. There is a certain reliance on the idea that whatever their informal argument is, it is in principle reducible to something in a system such as ZF, and the details are omitted. This assumption may be problematic, especially where some very complicated results are supposedly proved. Classical mathematics does not appear to have the stable, workable, contradiction-free foundations that classical mathematicians hoped for.
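As an aside, the first few stages of the cumulative hierarchy described above are small enough to compute explicitly. A minimal sketch (sets modelled as Python frozensets; an illustration only, not part of the original article):

```python
from itertools import chain, combinations

def powerset(s):
    """All subsets of s, each returned as a frozenset."""
    items = list(s)
    return {frozenset(c) for c in
            chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))}

# Von Neumann stages: V_0 is empty, and V_{n+1} is the powerset of V_n.
stage = frozenset()  # V_0
for n in range(1, 5):
    stage = frozenset(powerset(stage))
    print(f"V_{n} contains {len(stage)} sets")  # sizes: 1, 2, 4, 16
# No stage can ever contain the Russell set: it would need a rank
# strictly greater than its own.
```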
The liar paradox

While Russell's paradox is directly applicable to mathematics, one can also motivate paraconsistency in mathematics indirectly, through paraconsistent logic. If logic is paraconsistent, then mathematics built on this logic will be paraconsistent. Let us take a brief breather from mathematics and look at natural language. For millennia, philosophers have contemplated the (in)famous liar paradox: "This sentence is false."

Alfred Tarski, 1902-1983.

To be true, the statement has to be false, and vice versa. Many brilliant minds have been afflicted with many agonising headaches over this problem, and there isn't a single solution that is accepted by all. But perhaps the best-known solution (at least among philosophers) is Tarski's hierarchy, a consequence of Tarski's undefinability theorem. In a nutshell, Tarski's hierarchy assigns semantic concepts (such as truth and falsity) a level. To discuss whether a statement is true, one has to switch into a higher level of language: instead of merely making a statement, one is making a statement about a statement. A language may only meaningfully talk about semantic concepts from a level lower than itself. Thus a sentence such as the liar sentence simply isn't meaningful: by talking about itself, the sentence attempts, unsuccessfully, to make a claim about the truth of a sentence of its own level.

The parallels between this solution to the liar paradox and the ZF solution to Russell's paradox are clear. However, this second case shows that paradox, or inconsistency, is not merely a quirk of naive set theory, but a more widespread phenomenon. It seems that to avoid inconsistency, classicists are forced to adopt some arguably ad hoc rules not just about the nature of sets, but also about meaning. Besides, it intuitively seems that the liar sentence should be meaningful: it can be written down, it is grammatically correct, and the concepts within it are understood.

How does a paraconsistent perspective address these paradoxes? The paraconsistent response to the classical paradoxes and contradictions is to say that these are interesting facts to study, instead of problems to solve. This admittedly runs counter to certain intuitions on the subject, but from a paraconsistent perspective, a localised contradiction, such as the truth and falsity of the liar sentence, does not necessarily lead to incoherence.

How is this different from the classical view? For classicists, what is so bad about contradiction? Every mathematical proof is, in some way, a deduction from a specified collection of definitions and/or axioms, using assumed rules of inference to move from one step to the next. In doing this, mathematics is employing some type of logic or another. Classical mathematics uses classical logic, and classical logic is explosive.

Because of Russell's paradox this page is a carrot.

An explosive logic maintains that from a contradiction, you may conclude quite literally anything and everything. The logical principle is ex falso quodlibet, or "from a falsehood, conclude anything you like". If A and not-A are both true, then Cleopatra is the current Secretary-General of the United Nations General Assembly, and the page you are currently reading is, despite appearances, also a carrot. So why is classical logic explosive? Because it accepts the argument form reductio ad absurdum (RAA), meaning reduction to the absurd. We will see below that paraconsistent logicians can use a modified version of RAA, but for now let's just consider the classical version.
To use classical RAA, one first makes an assumption. If further into the proof a contradiction arises, one is entitled to conclude that the initial assumption is false. Essentially, the idea is that if assuming something leads to an "absurd" state of affairs, a contradiction, then it was incorrect to make that assumption. This seems to work well enough in everyday situations. However, if contradictions can exist, say if Russell's set both is and is not a member of itself, then we can deduce anything: we merely have to assume its negation, and then prove ourselves "wrong". Thus contradiction trivialises any classical theory in which an inconsistency arises. Naive set theory, for example, is classically uninteresting, because it not only proves that 1+1=2, but also that 1+1=7. All because of Russell's paradox. So to the classical mathematician, finding a contradiction is not just unacceptable, it is utterly destructive: there is no classical distinction between inconsistency (the occurrence of a contradiction) and incoherence (a system which proves anything you like).

Paraconsistent logic does not endorse the principle of explosion, ex contradictione quodlibet, nor anything which validates it (notice the subtly different wording, "contradictione" in place of "falso"; this will become important later). The thought is this: suppose I have a pretty good theory that makes sense of a lot of the things I see around me, and suppose that somewhere in the theory a contradiction is hiding. Paraconsistent logicians hold that this does not (necessarily) make the theory incoherent; it just means one has to be very careful in the deductions one makes, to avoid falling from contradiction into incoherence. For the most part, it makes no difference to us if the liar sentence really is both true and false, and the paraconsistent perspective reflects that. By removing RAA (or altering it, as we see below), and making a few other tweaks to classical logic, we can create a logic and mathematical system where contradictions are both possible and sensible.

Classicists knew they were inconsistent

A donkey in your bedroom?

There are further motivations for paraconsistency beyond those mentioned above. One such motivation is historical: at various times, mathematicians worked with theories that they knew to be inconsistent, but were still able to draw meaningful and useful conclusions. Set theory is one such area. The early calculus, as proposed by Isaac Newton, was another; its original formulation required that a quantity be small but non-zero at one stage of a calculation, but equal to zero at a later stage. Despite the inconsistencies, mathematicians adopted these theories and worked with them, drawing useful and sensible conclusions.

Another motivation is the question of the relevance of inference. Suppose I have proved that the Russell set is and is not a member of itself. Why should it follow from this that there is a donkey braying loudly in my bedroom? The question of relevance (just what has a donkey to do with set theory?) is one that has plagued classical logic for a long time, and is one that makes classical logic a hard pill to swallow for first-time students of logic, who are often told that "this is the way it is" in logic. Fortunately for those students, paraconsistency provides an alternative.
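To see exactly what a paraconsistent logic must give up, here is the textbook derivation of explosion, written out step by step (a standard derivation, not one given in the original article). Note that it runs through disjunctive syllogism, which is precisely why that rule comes under scrutiny below:

```latex
% Deriving an arbitrary B from the contradiction A and not-A:
% 1. A \land \neg A    (the contradiction)
% 2. A                 (from 1, conjunction elimination)
% 3. A \lor B          (from 2, disjunction introduction)
% 4. \neg A            (from 1, conjunction elimination)
% 5. B                 (from 3 and 4, disjunctive syllogism)
A \land \neg A \;\vdash\; B
```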
The term "paraconsistent" was coined to mean "beyond the consistent". The objects of study are essentially the same as classical mathematics, but the allowable universe of study is enlarged by allowing some inconsistent objects. One of the main projects of paraconsistent mathematics is to determine which objects are inconsistent, and which inconsistencies are allowed in a theory without falling into incoherence. It is a fairly recent development; the first person to suggest paraconsistency as a possible foundation of mathematics was Newton da Costa from Brazil (1958). Since then various areas have been investigated through the paraconsistent lens. An important first step towards developing paraconsistent mathematics is establishing a tool kit of acceptable argument forms. One charge that has been levelled against the paraconsistent mathematician is that the classical version of RAA is not allowed. Proofs by contradiction, reductio ad contradictione, are no longer allowed, since the conclusion could be a true contradiction, and the logic must allow for this case. Similarly, disjunctive syllogism is lost. Disjunctive syllogism states that if I can prove that A or B is true, and I can prove that A is false, then B must be true. However, paraconsistently, if A and not-A is a true contradiction, then B cannot be validly deduced. We do not receive any information about the truth of B from the fact A is not true, because it might also be true, thus satisfying the disjunction. Paraconsistentists are able to salvage a form of RAA. The classical mathematician does not distinguish between a contradiction and total absurdity; both are used to reject assumptions. However, from the paraconsistent viewpoint, not all contradictions are necessarily absurd. To someone with this view, classical RAA actually equates to reductio ad contradictione. The paraconsistentist can use a form which allows them to reject something which is genuinely, paraconsistently absurd. This take on RAA is used to reject anything which leads to a trivial theory (a theory in which everything is true). Likewise, while ex contradictione quodlibet (from a contradiction, anything follows) is out, ex absurdum quodlibet is still valid. The Penrose triangle. Allowing inconsistencies without incoherence opens up many areas of mathematics previously closed to mathematicians, as well as being a stepping stone to making sense of some easily described but difficult to understand phenomena. One such area is inconsistent geometry. M. C. Escher's famous drawings, for example, often contain impossible shapes or inconsistent ideas. His famous Waterfall depicts a waterfall whose base feeds its top. The Penrose triangle is another well-known example, the sides of which appear simultaneously to be perpendicular to each other and to form an equilateral triangle. The Blivet is another, appearing comprised of two rectangular box arms from one perspective, but three cylindrical arms from another. These pictures are inconsistent, but at the same time coherent; certainly coherent enough to be put down on paper. Paraconsistent mathematics may allow us to better understand these entities. Paraconsistency can also offer new insight into certain big-and-important mathematical topics, such as Gödel's incompleteness theorem. When Gödel tells us that mathematics must either be incomplete or inconsistent, paraconsistency makes the second option a genuine possibility. Classically, we assume the consistency of arithmetic and conclude that it must be incomplete. 
Under the paraconsistent viewpoint it is entirely possible to find an inconsistent, coherent and complete arithmetic. This could revive Hilbert's program, the project of grounding mathematics in a finite set of axioms: if the requirement for consistency is lifted, it may be possible to find such a set.

Another famous problem that appears in a new light under paraconsistency is the halting problem in computer science. It is the problem of finding an algorithm that will decide whether any given algorithm working on any given input will ever halt. It is an important concern when addressing whether an algorithm will reach a solution to a problem in finite time, and is equivalent to many other decision problems in the discipline. However, no consistent computer program can solve the problem, as famously proved by Alan Turing (see What computers can't do for a sketch of the proof). Paraconsistency re-opens the door to finding a solution.

Paraconsistency in mathematics: mathematics where contradictions may be true. Is it as outlandish as it sounds? Probably not. As we have seen, paraconsistent mathematics deals elegantly with paradoxes for which classical mathematicians have had to find ad hoc, complicated solutions to block inconsistency. There are also many areas in which paraconsistent mathematics may provide meaningful insights into inconsistent structures. It offers new insights into old problems such as Hilbert's program and the halting problem. Paraconsistency in mathematics: an interesting and promising position worthy of further exploration.

- Inconsistent Mathematics by Chris Mortensen.
- In Contradiction by Graham Priest.
- The Stanford Encyclopedia article on paraconsistent logic.
- The Internet Encyclopedia of Philosophy article on inconsistent mathematics.

About the author

Maarten McKubre-Jordens is a postdoctoral fellow at the University of Canterbury. As well as actually performing mathematics, he thinks about the foundations of mathematics in human reasoning. He brews his own beer, and loves to spend time with his family. He wishes to thank his wife Alexandra for her virtually limitless patience in making this article user-friendly.
0.9786
FineWeb
```json [ "Paraconsistent Mathematics", "Foundations of Mathematics", "Mathematical Logic" ] ```
Working collaboratively in pairs, students will learn how to program and make a space-themed video game using Kodu Game Lab by Microsoft. Kodu Game Lab is a visual programming environment that allows anyone, even those without experience, to create 3D games. By combining elements from games such as Asteroids, Missile Command and Space Invaders, we will make a unique video game that students will create from scratch and also get to play! Students will work in pairs to design this game, in which they can collaborate, strategize, and test with their friends, as developers do in real-world environments, using the 5-stage design process embraced across all artistic mediums. Students will be introduced to programming and logic skills by creating three-dimensional maps, environments, unique game features, enemies and boss challenges. This class will enable students to program characters and objects, build 3D worlds, and introduce key elements of game design, with plenty of hands-on time so that students can be confident using Kodu Game Lab as they progress. No previous game programming experience is needed!
0.8235
FineWeb
``` [ "Introduction to Game Programming", "Kodu Game Lab", "Game Design and Development" ] ```
Classification and Explanatory Rules of Harmonic Data
Faculty of Engineering - Papers (Archive)

Abstract: Clustering is an important technique in data mining and machine learning in which underlying and meaningful groups of data are discovered. One of the paramount issues in the clustering process is to discover the natural groups in the data set. A method based on the minimum message length (MML) has been developed to determine the optimum number of clusters (or mixture model size) in a power quality data set from an actual harmonic monitoring system in a distribution system in Australia. Once the optimum number of clusters is determined, a supervised learning algorithm, C5.0, is used to uncover the fundamental defining factors that differentiate the various clusters from each other. This allows explanatory rules for each cluster in the harmonic data to be defined. These rules can then be utilised to predict which cluster best describes any newly observed data.

Link to publisher version (URL): IEEE Xplore

Citation Information: Ali Asheibi, David Stirling and Danny Soetanto. "Classification and Explanatory Rules of Harmonic Data" (2008) Available at: http://works.bepress.com/dstirling/36/
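A rough Python sketch of the two-stage pipeline described in the abstract (scikit-learn offers neither MML nor C5.0, so the Bayesian information criterion stands in here for the MML criterion and a CART decision tree stands in for C5.0; the data is synthetic):

```python
# Stage 1: pick a mixture size by an information criterion.
# Stage 2: fit a tree whose rules "explain" the discovered clusters.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Three synthetic 2-D clusters standing in for harmonic measurements.
X = np.vstack([rng.normal(loc, 0.5, size=(100, 2)) for loc in ((0, 0), (3, 0), (0, 3))])

# Choose the model size by minimising BIC (a stand-in for MML here).
best = min(
    (GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)),
    key=lambda m: m.bic(X),
)
labels = best.predict(X)
print("chosen number of clusters:", best.n_components)

# A supervised learner then yields human-readable rules for the clusters.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=["f1", "f2"]))
```

The shape of the pipeline is the point: an unsupervised model-selection step fixes the number of clusters, and a supervised learner then produces explanatory rules for them, as in the paper.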
0.9053
FineWeb
["Clustering", "Harmonic Data", "Machine Learning"]
Creation Date: Apr 10, 2011
Published In: Apr 2011
Paper Type: Journal Article
Book Title: IEEE Transactions on Information Theory

Abstract: In a wireless network with a single source and a single destination and an arbitrary number of relay nodes, what is the maximum rate of information flow achievable? We make progress on this long-standing problem through a two-step approach. First, we propose a deterministic channel model which captures the key wireless properties of signal strength, broadcast and superposition. We obtain an exact characterization of the capacity of a network with nodes connected by such deterministic channels. This result is a natural generalization of the celebrated max-flow min-cut theorem for wired networks. Second, we use the insights obtained from the deterministic analysis to design a new quantize-map-and-forward scheme for Gaussian networks. In this scheme, each relay quantizes the received signal at the noise level and maps it to a random Gaussian codeword for forwarding, and the final destination decodes the source's message based on the received signal. We show that, in contrast to existing schemes, this scheme can achieve the cut-set upper bound to within a gap which is independent of the channel parameters. In the case of the relay channel with a single relay, as well as the two-relay Gaussian diamond network, the gap is 1 bit/s/Hz. Moreover, the scheme is universal in the sense that the relays need no knowledge of the values of the channel parameters to (approximately) achieve the rate supportable by the network. We also present extensions of the results to multicast networks, half-duplex networks, and ergodic networks.

IEEE Xplore link: http://dx.doi.org/10.1109/TIT.2011.2110110
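For reference, the cut-set upper bound that the quantize-map-and-forward scheme approaches can be written in its standard form (this is the usual information-theoretic statement, not a formula quoted from the paper itself):

```latex
% Cut-set bound for a relay network with node set $N$, source $s$, destination $d$:
C \;\le\; \max_{p(\{x_i\}_{i \in N})}\;
    \min_{\substack{\Omega \subset N \\ s \in \Omega,\; d \in \Omega^c}}
    I\!\left(X_{\Omega};\, Y_{\Omega^c} \,\middle|\, X_{\Omega^c}\right)
```

The paper's claim is that the scheme achieves rates within a constant gap of this bound, with the gap independent of the channel gains.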
0.934
FineWeb
["Wireless Networks", "Information Theory", "Network Capacity"]
SARS, avian flu, Ebola: outbreaks of deadly viral infections are becoming increasingly frequent. And we still don't have vaccines for many of the pathogens responsible. One of the most dangerous classes of viral disease is the zoonoses, infections that can be transmitted from animals to humans with sometimes fatal consequences. One of these is caused by the West Nile virus (WNV), which was first identified in Uganda in 1937. The virus was carried to the United States in 1999 and had spread through the whole of North America within five years. There is now a risk that it will propagate worldwide. Since its first appearance in the United States, around 400 people have died there after coming into contact with the West Nile virus. A new vaccine promises to provide protection. Scientists at the Fraunhofer Institute for Cell Therapy and Immunology IZI in Leipzig have developed the DNA vaccine. "In this type of vaccine, DNA molecules known as plasmids extracted from the pathogen are used for inoculation, instead of the whole virus. They contain the genetic code for the antigens that stimulate the body to produce antibodies. We can thus replicate the virus's natural infection route without actually triggering the disease," explains Dr. Matthias Giese, the IZI's head of vaccine development. Conventional methods of vaccination involve injecting a dead or weakened form of the pathogen into the patient's body, which responds by producing the corresponding antibodies and developing immunity to the disease. An alternative is to inject a serum that already contains these antibodies. Such vaccines are merely preventive. By contrast with live vaccines, which carry a risk of provoking the disease, DNA vaccines are absolutely biologically safe. Moreover, they activate all existing defense mechanisms in the body, are cheap to produce and can be stored without a refrigerator, which makes them ideal for use in subtropical and tropical climates. "Since the human immune system is very similar to that of other mammals, we are developing a cross-species vaccine for use in both veterinary and human medicine. And unlike conventional vaccines, DNA vaccines can be used both as prophylactics and as therapeutics, i.e. in cases where the disease is already present," says Dr. Matthias Giese, citing the further benefits. The WNV vaccine has already passed initial tests. Giese expects the laboratory research to be completed by the end of 2009. After that, another 3 years or so will be needed for the approval procedure including clinical trials. Then, it is hoped, the world's first therapeutic WNV vaccine will be ready for market.

Contact: Dr. Matthias Giese
0.7673
FineWeb
``` { "topics": [ "West Nile Virus", "DNA Vaccines", "Viral Diseases" ] } ```
By Richard L. Reece

Read or Download The Health Reform Maze: A Blueprint for Physician Practices PDF

Similar medical administration & policy books

Nearly eighty million adults in the United States have low health literacy - an individual's ability to obtain, process, and understand basic health information. Low health literacy creates difficulties in communicating with clinicians, poses barriers to managing chronic illness, lessens the likelihood of receiving preventive care, heightens the chance of experiencing serious medication errors, increases the risk of hospitalization, and results in a poorer quality of life.

Due to demographic and economic developments, the treatment of geriatric patients is gaining in importance. Using the example of the central emergency department at the Klinikum Klagenfurt, a model of specific geriatric emergency care is presented that does justice to very elderly, multimorbid patients.

Recent studies of the experiences and failures of health care services, as well as the rapid development of patient advocacy, consumerism and pressure groups, have led historians and social scientists to engage with the issue of the medical complaint. As expressions of dissatisfaction, disquiet and failings in service provision, past complaining is an important antidote to progressive histories of health care.

Medical Accountability provides practical legal advice by highlighting real-life healthcare case studies and workplace examples. It looks in particular at the areas of accountability, the legal process, what constitutes a legal claim brought by the patient, and the duty of care and how it is measured.

- Understanding Potential Changes to the Veterans Equitable Resource Allocation (VERA) System: A Regression-Based Approach
- Krisenkommunikation - Grundlagen und Praxis: Eine Einführung mit ergänzender Fallstudie am Beispiel Krankenhaus (German Edition)
- Global Health Risk Framework: Resilient and Sustainable Health Systems to Respond to Global Infectious Disease Outbreaks: Workshop Summary
- An Introduction to Public Health
- Annual Review of Health Care Management: Revisiting the Evolution of Health Systems Organization: 15 (Advances in Health Care Management)

Extra resources for The Health Reform Maze: A Blueprint for Physician Practices

The Health Reform Maze: A Blueprint for Physician Practices by Richard L. Reece
0.5901
FineWeb
``` [ "Health Literacy", "Medical Administration", "Health Care Policy" ] ```
Full Title: Endless Ocean Endless Sky Set Design
Location: Portland, OR
Description: This project was commissioned by emerging choreographer Tahni Holt for her performance Endless Ocean Endless Sky. The set was designed in response to several design criteria relating both to the evocation of the choreographic aesthetic and to the limits of financial and logistical constraints. From the very beginning of the design phase, we were interested in creating a minimal set that could be built and transported easily and that was also able to create an evocative and mutable space for both the performers and the audience. Drawing heavily on the work of Ant Farm and their inflatable constructions of the 1970s, a small 20′x40′ space was made using standard polyethylene. Additional seams and creases were welded into the plastic in order to avoid the typical balloon aesthetic of inflatables. Rather, there was a desire for the installation to be able to evoke things that are simultaneously heavy and light (massive icebergs floating in the sea, 747s flying through the sky, etc.). A generative algorithm was developed that would allow a fragmented pattern of creases to emerge on the surface without having to laboriously transcribe a predefined pattern onto the surface.
Credits: Andrew Kudless with help from Ronnie Parsons and Chris Walker.
0.7687
FineWeb
```json [ "Set Design", "Performance Art", "Installation Design" ] ```
Phenibut: What is Phenibut?

Phenibut was first discovered during the 1960s in the Soviet Union (Shulgina, 1986). Since its discovery, it has been used for the treatment of mental and emotional ailments. Phenibut is a chemical similar to the brain chemical gamma-aminobutyric acid, or GABA for short (webmd.com). Phenibut is normally used for anxiety treatment. It is also used for the treatment of insomnia, fear, stress, and even tension. Additionally, its phenyl ring allows it to cross the human blood-brain barrier. Many people experience types of emotional or mental strain that lead them to use Phenibut as part of their treatment process. However, this chemical is treated more as a nutritional supplement and is still not approved in the United States as a pharmaceutical product. In Russia, on the other hand, Phenibut is sold as a psychotropic medicine. In fact, it is claimed to improve neurological function, even though some medical researchers have not been able to confirm these outcomes.

How Phenibut works

Since Phenibut is similar to one of the human brain's own chemicals, it works in much the same way as the natural one. Even though it has not been fully tested and studied in humans, the results of research on animals show that it can reduce anxiety (Medicinenet). In addition, it also has some effects on the human body. However, since there is still no concrete evidence of its effects, the available information may not be enough to conclude that it provides benefits to humans. There have been claims from various patients saying that Phenibut helped them with their anxiety and insomnia, but these reports have not been academically published to establish its therapeutic effects on humans.

What is Phenibut used for?

Due to different circumstances and experiences, many people suffer from mental and emotional ailments. As a result, stress and anxiety occur, which normally affect a person's sleeping habits. At this stage, Phenibut is used to help a person regain his or her sleeping habits, for example by fighting the symptoms of insomnia. For instance, Phenibut HCL is a natural nootropic known for improving sleep patterns. It also helps reduce anxiety and boosts cognitive performance. Based on its usage, Phenibut is suited to people who suffer from severe stress that results in anxiety and sleep deprivation. Since it is a component of nootropic stacks designed to fight stress, it can enhance a person's mood. Some people use this drug as an anxiolytic, especially to reduce social anxiety; for others, it also serves as an anti-depressant.

Phenibut: Usage and Dosage

It is important to know whether a certain drug works quickly. People also need to know its right dosage and usage in order to achieve safe use and avoid any possible complications. In the case of Phenibut, there are suggested dosages that can be followed, especially if it is taken privately. For males, the suggested dosage is between fifteen and seventeen milligrams per pound of body weight (corpina.org). However, in some cases dosages may vary depending on a person's health. For example, a healthy male in his early twenties or late teens may take about 2.5 grams of Phenibut, divided into up to three doses in one day. Female patients, on the other hand, have a different recommended dosage for daily use.
The suggested dosage is between eight and ten milligrams per pound of body weight (corpina.org). As with males, the dosage may vary depending on the female's health. For example, a healthy female of equivalent age may take one gram per day, which can likewise be divided into up to three doses within one day. *Most nootropic users generally take anywhere from 500mg to 1000mg of Phenibut per day.*

Phenibut: Short/Long term effects and known side effects

Every drug has its own negative and positive effects, and Phenibut has both short- and long-term effects. One of the known short-term effects, as mentioned earlier, is its ability to help a person who suffers stress from the various activities of daily life. In general, Phenibut may not only provide psychological benefits, but also add to a person's overall benefits from a nootropic stack by promoting sleep and reducing corticosteroids. It additionally counters the unwanted effects of stress by putting a person in a relaxed state of mind. On the other hand, one of the long-term effects of Phenibut is tolerance. As a person continues to use it, his or her tolerance increases, which may eventually require an increase in dosage. Such increased tolerance can in turn lead to dependence on Phenibut. That is why it is advisable to stop using it temporarily for a few days in order to lower one's tolerance. With continuous intake of Phenibut, a person could feel slight numbness and limb pain after hours of sleep. However, such incidents are rarely reported, and the effects can be reduced by lowering the dosage.
0.8149
FineWeb
["What is Phenibut?", "How Phenibut works", "What is Phenibut used for?"]
Which of the tools for decision making (quantitative methods, qualitative methods, and triangulation methods) should be used in developing operating procedures? Justify and explain your response.

The triangulation method is the best tool to use when it comes to decision making for developing operating procedures. This is because a person can use both quantitative and qualitative methods in order to keep ...

This solution provided ways in which certain tools are used in developing operating procedures, such as quantitative, qualitative, or triangulation methods.
0.8353
FineWeb
["Decision Making Tools", "Operating Procedures", "Triangulation Methods"]
The ten quilts in this guide suggest the range of the many styles, influences, and materials found within African American quiltmaking traditions. The quilts have many stories to tell of artistic innovation, triumph over hardship, and pride in heritage. It is important to note that these quilts are a small sampling of a much larger production, for many quilts have been lost to history. Each quilt is a product of its own particular social, historical, and personal context. For this reason, the text prioritizes the quiltmakers’ own words, biographical information, and descriptions of their working methods. The resources listed below can be used to introduce the material to K–12 students as pre- or post-visit lessons, or instead of a Museum visit. - Information about ten quilts and the artists who made them - Language arts, social studies, math, and art curriculum connections - A selected chronology - A resource list for further study - A vocabulary list, which includes all words that have been bolded in the text Note: The quotes from the artists featured in this guide were taken from personal interviews and therefore reflect the informality of that form of communication. As you read the quotes, listen for the richness of the spoken word and the rhythms that characterize the dialect of the American South. - Most quilts are made of three layers: a top that is decorative, a middle of soft batting that adds thickness and provides warmth, and a back. - These three layers are stitched, or quilted, together. - The quilts included in this guide fall into two categories: pieced and appliqué. Pieced quilts have a top made of bits of fabric that are stitched, or pieced, together. Appliqué quilts have tops that consist of background blocks of fabric with cutout shapes of fabric sewn on top.
0.9976
FineWeb
["African American Quiltmaking Traditions", "Quilt Construction and Materials", "Quilt Types and Designs"]
David Maister just posted something very interesting: a question. "What are the top 10 bad habits partners within professional services firms display?" That's easy. They're not proactive. Stephen Covey pointed out the importance of being proactive in his landmark book The Seven Habits of Highly Effective People. He listed proactivity first. He was right. Marketing and business development are, ultimately, all about creating a vision of the business you want to have, and then reaching out and doing what it takes to make that vision real. In my line of work, this means engaging with your market. It means calling people you don't already know, making presentations with no guarantee of success, and so on. It's about taking the initiative. Professionals are often very risk-averse people. They're used to solving complex problems within a bounded system. There are certain rules and processes, and there isn't any model for trying something new, or even trying something at all. So they don't. Or, to use a metaphor, they tend not to be the cue ball, but instead, the ball the cue ball is aimed at. This is fatal. If I could boil everything I know and teach down into one concept it would be: make something happen. Mr. Covey is right. Do something.
0.7159
FineWeb
["Proactivity in Business", "Marketing and Business Development", "Overcoming Risk Aversion"]
Barking is a very normal behavior for dogs, as it is their method of interacting with their human companions and other animals. Dogs frequently bark to express their feelings to their human companion, so occasionally even a well-behaved dog can start barking continually. There are many things that can trigger a dog to bark, and the barking can be very irritating. Dogs typically bark to alert their human companion to a suspicious activity or a suspicious person. Dogs may also bark because they do not trust strange people or are uncomfortable with the presence of strangers in their home. Some dogs are used by their human companions to warn them of burglars and other kinds of danger. But sometimes perpetual barking can be extremely bothersome and disturbing: if your dog is barking all through the night, it can have a negative effect on the people living in the same home as the dog and on their neighbors. A dog that is barking constantly can cause a considerable increase in the stress levels of the people in the dog's environment, and this can result in loss of sleep.

One of the most popular methods of controlling barking problems in dogs is the use of bark collars. Bark collars are one of the gentler methods of managing incessant barking in dogs, and an effective way of controlling barking problems. A bark collar is also an efficient way of training a dog, although it is not as widely used as positive reinforcement.

KINDS OF BARK COLLARS

There are many types of bark collars on the market today, but the three most common kinds include:

1. ELECTRIC SHOCK BARK COLLARS

Electric shock bark collars are one of the oldest types of bark collar on the market. They work by giving the dog a moderate static shock once the vibrations from its vocal cords and its barking sounds are detected by the collar's sensors. The electric shock bark collar sends an electric current to the dog's muscles whenever he barks. Built-in prongs are normally used to deliver the mild electric shock, which can be adjusted in intensity according to the size of the dog and the training requirements. However, the exact way that each individual electric shock bark collar works depends on the design and the manufacturer. Most types of electric shock bark collars emit a moderate electric current when they sense that the dog is barking. Some people say that electric shock bark collars are harsh because of the mild electric current that is sent to the dog's body, but it is unlikely to damage your dog in any way. The electric shock bark collar is often used to teach a dog to bark only when it is necessary, and not simply to bark nonstop. People who are using the electric shock bark collar for the first time should begin with a low-level setting and increase it slowly as the situation demands. The use of electric shock bark collars is controversial, but it is very effective for controlling incessant dog barking as long as the user has sufficient knowledge about it. Avoid setting the intensity of the electric shocks too high; instead, set it to an intensity that is just unpleasant enough to discourage the dog from barking. Some electric shock bark collars also use a microphone to sense when the dog is barking.
Some collars, however, use both vibration sensors and microphones to sense when the dog is barking.

2. ULTRASONIC BARK COLLARS

Ultrasonic bark collars are among the most popular bark collars on the market today. This kind of bark collar discourages dogs from barking by using an annoying yet completely safe noise. Ultrasonic bark collars work by emitting a high-pitched ultrasonic sound that is said to be irritating, in order to discourage the dog from barking. They are also one of the most effective methods of controlling a dog's relentless barking. The collar emits a sound when its microphone senses the dog's barking; this startles the dog without leading to any kind of physical damage. The microphones in ultrasonic bark collars are designed to sense audible barking, which stops the high-pitched ultrasonic sound from accidentally going off in response to other noises. The high-pitched ultrasonic sound emitted by the collar cannot be heard by people, but it can be heard by dogs, and it lasts only a few seconds. Recent studies have shown that this kind of bark collar does not have any kind of negative effect on the dog, because the duration of the high-pitched ultrasonic sound is controlled. Many brands of ultrasonic bark collars have a positive tone feature that serves as positive reinforcement when the dog stops barking.

3. CITRONELLA SPRAY BARK COLLARS

Citronella spray bark collars work by spraying citronella onto the dog's snout whenever the dog starts barking. The citronella spray usually targets the dog's sense of smell for quick results. The citronella spray bark collar is designed to make the dog associate the smell of citronella with barking, so the dog will stop barking because he does not want to smell the citronella that is released as a consequence of barking. This is often reliable because the scent of citronella is very displeasing to dogs, but it is also absolutely safe. Citronella is typically used as an insect repellent by people, and it does not bother humans. As soon as the dog starts barking, the microphone will sense the barking and the collar will release the harmless citronella scent. It does not need any human intervention to function and has no negative effects on the health of the dog; rather, it has a calming effect. It also takes advantage of dogs' strong dislike of such organic solutions. The amount of citronella released should be just enough to stop the dog from barking. The only disadvantage of using citronella spray bark collars is that the scent of the citronella may stick to the dog's body for many minutes or even hours on some occasions.
0.5695
FineWeb
["Barking Behavior in Dogs", "Types of Bark Collars", "Bark Collar Effectiveness"]
A non-contact and simple interferometric technique has been used to measure the refractive index variation of gamma-irradiated CR-39 polymer samples. Six samples of 750 μm thickness and dimensions of 1.5 × 3 cm² were irradiated with gamma radiation doses in the range from zero to 600 kGy. It is observed that the refractive index increases with increasing dose, reaches its maximum value at a dose of 40 kGy, and then starts to decrease again. This means that the gamma radiation deformation in the CR-39 polymer is a chain scission effect. This interferometric technique could contribute to the study of recombination and degradation mechanisms in the CR-39 polymer. Copyright © 1996 Elsevier Science Ltd.
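The abstract does not state the working formula, but in a simple single-pass transmission interferometer the index change is commonly recovered from the fringe count as Δn = mλ/t. A hypothetical Python illustration (the He-Ne wavelength and fringe counts below are invented for the example; only the 750 μm thickness comes from the abstract, and the paper's actual optical geometry may differ):

```python
# Hypothetical fringe-count calculation for a transmission interferometer.
# Assumes single-pass geometry: delta_n = m * wavelength / thickness.
wavelength = 632.8e-9   # He-Ne laser wavelength in metres (assumed, not from the paper)
thickness = 750e-6      # sample thickness in metres (from the abstract)

def refractive_index_change(fringe_count: float) -> float:
    """Refractive-index change implied by a shift of `fringe_count` fringes."""
    return fringe_count * wavelength / thickness

for m in (1, 5, 10):
    print(f"{m:2d} fringes -> delta n = {refractive_index_change(m):.2e}")
```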
0.8341
FineWeb
["Refractive Index Variation", "Gamma Irradiation", "CR-39 Polymer"]
While outcomes are no longer a new idea or concept, less than 35% of the philanthropic sector is consistently defining, tracking, reporting or improving outcomes for those they serve. Why is that? Learn what you can do to shift your organization's culture from a service provider to an outcome achiever.

Participant Learning Outcomes:
- Engaging leadership to shift their mindset about outcomes
- Getting everyone on the same page about what to measure
- Cultivating and rewarding learning integration
- Getting support to build performance management capacity
- Communicating and negotiating with outcomes
0.9959
FineWeb
```json [ "Defining and Tracking Outcomes", "Shifting Organizational Culture", "Outcome Achiever Strategies" ] ```
Codruţa Alina POPESCU, Second Psychiatric Clinic, Cluj-Napoca, Romania
Ioana Valentina MICLUŢIA, University of Medicine and Pharmacy "Iuliu Haţieganu", Department of Psychiatry, Cluj-Napoca, Romania

Background: Patients with schizophrenia exhibit various cognitive dysfunctions, most of them rendered evident by language.
Objectives: The aims of the current study are to compare the global semantic performance of schizophrenics with that of normal controls, and to explore the schizophrenics' semantic network.
Method: 62 schizophrenic patients admitted to the Second Psychiatric Clinic, diagnosed according to ICD-10 criteria, and 158 healthy controls were evaluated with tasks for semantic fluency (animals, fruits, and body parts).
Statistical analysis: The correlation between clinical symptoms, demographic data and the verbal fluency variables was determined using Pearson's correlations. Data were analysed using ANOVA and, for semantic fluency, this was followed by multidimensional scaling (MDS).
Results: Patients with schizophrenia generated fewer words than healthy controls on semantic fluency tasks. The MDS analysis showed that the semantic structure for schizophrenics with hallucinations was more disorganized than that for schizophrenics without hallucinations. The study emphasized in the hallucinating subgroup a lack of any organisation or logical associations within the semantic network of animals, fruits or body parts.
Conclusions: The comparison between schizophrenia patients and normal controls indicated impaired semantic structure in the patient group, in addition to decreased word production.
Keywords: verbal fluency, schizophrenia, semantic store
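As a rough illustration of the MDS step only (the toy dissimilarity matrix below is invented; in the study, dissimilarities would be derived from patients' actual fluency sequences, for example from how far apart pairs of category exemplars are produced):

```python
# Sketch of multidimensional scaling on a toy word-dissimilarity matrix.
import numpy as np
from sklearn.manifold import MDS

words = ["dog", "cat", "horse", "cow", "sparrow", "eagle"]
dissim = np.array([
    [0.0, 0.2, 0.5, 0.6, 0.9, 0.9],
    [0.2, 0.0, 0.5, 0.6, 0.9, 0.9],
    [0.5, 0.5, 0.0, 0.3, 0.8, 0.8],
    [0.6, 0.6, 0.3, 0.0, 0.8, 0.8],
    [0.9, 0.9, 0.8, 0.8, 0.0, 0.2],
    [0.9, 0.9, 0.8, 0.8, 0.2, 0.0],
])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)  # tightly grouped points = organised semantic structure
for w, (x, y) in zip(words, coords):
    print(f"{w:8s} ({x:+.2f}, {y:+.2f})")
```

In a well-organised semantic network, related exemplars (pets, farm animals, birds) form visible groups in the 2-D map; the disorganisation reported for the hallucinating subgroup would show up as the absence of any such grouping.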
0.6462
FineWeb
``` { "topics": [ "Schizophrenia", "Semantic Structure", "Verbal Fluency" ] } ```
MRI for Children: Best Practices for a Smooth Test Many adults get nervous when they face an MRI, even if they know how the machine works and what to expect. For children, the prospect of climbing or sliding into a large, imposing machine can be even more frightening. But an MRI will provide your child's doctor with important information that will lead to both a more accurate diagnosis and a more effective treatment plan. How MRIs Work The imaging begins after your child is positioned inside a tube-shaped chamber. The session is actually a series of smaller tests, usually ten minutes each, followed by a pause of a few more minutes. Sometimes, your child may need to hold his or her breath for a few seconds. You should let them know that the machine is noisy, and headphones are often used to help drown out the sound. An MRI uses magnetic fields and radio waves to produce highly accurate, three-dimensional images. Sometimes, the MRI requires a special dye to be injected through an IV in order to highlight particular areas. Though a very small number of children may have an allergic reaction to the dye, don't worry. Most facilities have trained staff at the ready if such reactions occur. MRIs detect inflammation, tumors, and infections, and they are especially useful for imaging the heart, eyes, ears, and soft tissue such as tendons and joints. A radiologist reads the results before sending a report to the pediatrician. Pediatric imaging centers specialize in MRI exams that will be as smooth a process as possible for your child. Child-friendly facilities feature MRI technicians with an expertise in pediatric imaging and a wealth of experience in working with kids. As an article in MAGNETOM World points out, no two children are the same, and it's important for the staff at the imaging center to be able to make decisions about the MRI based on your child's behaviors, fears, and needs. Perhaps more importantly, MRI scanners that cater to young children may be made to look like a spaceship or pirate ship, making them a far cry from the more typical white tube that adults are familiar with. Kids may even be able to watch a movie during the test. By making the test seem like a game or providing the proper distractions, the experience can be a lot smoother than you expected. Your child's age is a major factor in knowing what to expect during the test. Infants are tightly swaddled, while, for children who may be claustrophobic, some machines are built with wide openings, and others are not enclosed at all. The goal of these machines is to keep your child from being scared or frightened. Not only will this be a better experience all around, but for some children, it may mean the difference between requiring a sedative or simply lying still. Preparing Your Child You know your child better than anyone else, including what can help to distract or relax him or her during the test. For young children who won't be given anesthetics, consider taking the following actions before and during the MRI: - Take your child on a visit to the MRI center beforehand. Make arrangements to visit the room and even introduce your child to the staff. - Simulate the process at home, including pretending to inject the dye and practicing holding still. - If you know someone who has had an MRI, ask that person to describe the experience for your child. - Find out ahead of time if the physician will prescribe a mild sedative before the test. 
- Since it is sure to be detected by your child, control your own anxiety and be a model of calmness.
- Older children and adolescents respond well if they are given some level of control over the process. Let them become a part of the team.
Like many medical procedures, fear of the unknown can cause anxiety in young children. When you can give your child as clear of an idea as possible of what they will go through during an MRI, you eliminate surprises. Experienced technicians who are used to working with kids and a kid-friendly testing environment will all go a long way toward making this a smooth experience for parents and children alike.
*This information is for educational purposes only and does not constitute health care advice. You should always seek the advice of your doctor or physician before making health care decisions.
0.8336
FineWeb
["MRI for Children", "How MRIs Work", "Preparing Your Child"]
FrameworkElement::BeginStoryboard Method (Storyboard)

Begins the sequence of actions that are contained in the provided storyboard.

Assembly: PresentationFramework (in PresentationFramework.dll)

Most typical animation scenarios do not use this method. Typically, you create the Storyboard or BeginStoryboard element in markup, and then place these as the EventTrigger content on an element. When triggered by the event, the animation then runs. Most of the control aspects of a Storyboard can be addressed by properties that are exposed in markup. For the signatures that do not use the isControllable parameter, or when that parameter is specified as false, the timeline clocks that are associated with the animation are removed as soon as the animation reaches the "Fill" period. Therefore the animation cannot be restarted after running once. Controlling an animation also requires that the storyboard have an x:Name Directive or be accessible by reference in code.
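A minimal C# sketch of calling this overload from code-behind (the element name and animation values are invented for the example; because isControllable is not specified, the storyboard runs once and cannot be paused or restarted):

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

public static class StoryboardDemo
{
    // Widens an (assumed) Button from 120 to 240 device-independent pixels.
    public static void BeginWiden(Button myButton)
    {
        var widen = new DoubleAnimation(120, 240, new Duration(TimeSpan.FromSeconds(2)));
        Storyboard.SetTarget(widen, myButton);
        Storyboard.SetTargetProperty(widen, new PropertyPath(Button.WidthProperty));

        var storyboard = new Storyboard();
        storyboard.Children.Add(widen);

        // The one-argument overload: the clocks are removed once the fill
        // period is reached, so the animation cannot be restarted later.
        myButton.BeginStoryboard(storyboard);
    }
}
```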
0.8654
FineWeb
``` [ "Storyboard Methods", "Animation Control", "EventTrigger" ] ```
Sign up for an Award
Participate with your entry in one of our professional or student awards.

Every process which creates an image leaves its traces of production. Various disciplines of science call these interference or noise. ININVISIBLE visualizes the traces of imaging processes by isolating them from all other information. Various images evolved which communicate nothing but their traces of production. A new imaging process was designed to broach the issue of process from another point of view and to demonstrate the importance of traces of production: a plotting machine converted into a computer-controlled device capable of drawing with various traditional analog techniques. This process produces images that are drawn neither by hand nor by computer.

Hochschule für Gestaltung Karlsruhe
Dr. Wolfgang Ullrich, Ludger Pfanz, Tania Prill, Florian Pfeffer, Sven Voelker
0.5106
FineWeb
``` { "topics": [ "Awards", "Imaging Processes", "Computer-Generated Art" ] } ```
Have you heard the buzz about bees? Disturbing losses of bee colonies have become a serious societal issue. In recent years, we learned about Colony Collapse Disorder (CCD), a mysterious and devastating loss of bee colonies in the U.S., Canada and Europe. The first reports of these unexplained and catastrophic bee deaths began in 2006. In the 2006-2007 season, CCD affected about 23 percent of commercial U.S. beekeepers, and some beekeepers lost 90 percent of their hives. Since then, CCD has shown no signs of slowing; substantial yearly losses of bees, 30 percent or higher, have become the norm.1-2

Answers started to surface in 2007. Scientists began to identify viruses in U.S. bee colonies that had suffered CCD.1 Soon, it was known that healthy and CCD-stricken colonies alike were plagued with numerous viruses and parasitic microbes, and seemed to have an impaired ability to produce proteins that protect against infection.2-3 Scientists then began to ask whether there was an environmental factor that was making the bees vulnerable to viral attack. In early 2012, two studies published in Science implicated a class of pesticides called neonicotinoids. Neonicotinoids are a class of neuro-active insecticides similar to nicotine. In these studies, bees exposed to neonicotinoids exhibited a reduced growth rate, produced fewer queens, or had impaired navigation and food-gathering abilities. The scientists concluded that neonicotinoids, although the commonly encountered doses may not be directly lethal to bees, could contribute to CCD in an indirect way, by harming bees' abilities to grow, return home to their hives or get adequate nutrition.4-6 Now that several additional studies have found similar negative effects on bee behavior and cognition, evidence that neonicotinoids harm bees and are a major contributor to CCD has grown more convincing.7-9

Neonicotinoids began to be used in the 1990s as less-toxic-to-humans alternatives to organochlorine and organophosphate pesticides. An important point about these pesticides is that they are usually used in a "systemic" manner; when crops are treated, the pesticides spread throughout all parts of the plant, including the nectar and pollen. Bees are exposed to these pesticides via many major commercial crops including canola, corn, cotton, sugar beet and sunflower, plus many vegetable and fruit crops.5-6,10

The pesticide industry and some scientists claim that the evidence against neonicotinoids is not yet conclusive, but it has been convincing enough for some agencies to propose bans on these pesticides as a safety measure. The European Food Safety Authority, for example, produced a report in January 2013 concluding that neonicotinoids pose unacceptable risks for bees and should not be applied to flowering crops. As a result, a two-year suspension was proposed in the European Union, and was passed in late April; it went into effect December 1st.11-12 Currently, France and Germany have partial bans on neonicotinoid use.13

In March 2013, a coalition of beekeepers and environmental interest groups filed a lawsuit against the U.S. Environmental Protection Agency, alleging that it had failed to protect bees and the crops they pollinate by rushing neonicotinoids to market with inadequate review. The USDA and EPA released a joint report on U.S. honeybee health, stating that multiple factors contribute to bee colony declines, and that further research is required to determine the risks posed by pesticides.
The report does acknowledge, "Laboratory tests on individual honey bees have shown that field-relevant, sub-lethal doses of some pesticides have effects on bee behavior and susceptibility to disease." The dispute over the threat to bees posed by the class of neonicotinoids took a dramatic new turn on September 10, 2015, when the Ninth Circuit Court of Appeals overturned federal approval for a new formulation called sulfoxaflor. Judges found that the Environmental Protection Agency (EPA) had relied on "flawed and limited" data, and that its green light was unjustified given the "precariousness of bee populations". As a result of the US decisions, the rules on these controversial chemicals in the US stand in bizarre contradiction: the US has approved most neonicotinoids while now banning sulfoxaflor.

This is a serious emergency for our organic farming movement and for the global food supply: losing the natural way flowering plants are pollinated. Bees are crucial for the pollination of many crops such as apples, almonds, and citrus fruits. According to the U.N., about 70 percent of the crops that provide 90 percent of human food are pollinated by bees.14 We are dependent on bees, and they are disappearing rapidly. It is alarming, to say the least.

You can take action at home. Since wild bee populations are also declining, in part due to loss of habitat, you can help by providing bees with new habitats. You can plant a garden of vegetables and plenty of bee-friendly flowers, or even become a backyard beekeeper. Additionally, by purchasing local and/or organic produce and eating primarily unrefined plant foods, you avoid monetarily supporting the largely genetically modified crops (corn, canola, sugar beets, etc.) on which neonicotinoids are primarily used.
0.7678
FineWeb
["Bee Colony Collapse", "Neonicotinoids", "Pollination and Food Supply"]
We all know that the first years of our children's lives are the most critical to development. Tennessee continues to add more pre-kindergarten classes to its public schools. Its future goal is to have a preschool program that is available to all children. Massachusetts is also among several other states that are reforming education to include adequate pre-kindergarten and full-day kindergarten classes for all children. However, educators are not the only ones supporting early learning. Pediatricians will likewise agree that early intervention is important. Dr. Sean Palfrey, former president of the Massachusetts Chapter of the American Academy of Pediatrics, also agrees with early education for all children. A quote from him sums up his complete attitude toward early education: "High-quality early education is one of the most effective prescriptions Massachusetts pediatricians know of to accomplish this goal (a child succeeding in life)."

For the first three years of a child's life, pediatricians focus on growth, weight, health, and proper vaccines. By the age of three, their focus begins to turn to behavior and development. Pediatricians will begin asking questions about the child's knowledge. They ask about the child's play, recognition of colors and numbers, and fine motor skills. Between the ages of two and six, a child's mind takes in every piece of information that it can get its hands on. The mind of a child this age absorbs information, processes language, and learns behavior faster than you and I are capable of doing. The self-confidence, patterns, and behavior that a child develops will remain with him or her throughout life. Much research has shown us that the information a child learns in the first six years will be the foundation for that child's strengths and weaknesses. For this reason, more and more focus is being placed on early education programs. States are spending millions of dollars to upgrade and intensify their preschools and kindergartens. It is important that children enter school with a background that allows them to succeed.
0.8044
FineWeb
null
Help your child deal with feelings about the diagnosis of Chronic Granulomatous Disease (CGD). Try to understand the many emotions that children experience regarding CGD. You can help your child cope with difficult emotions by talking openly about how everyone in the family may be experiencing something similar. Providing routine and predictable times to check in with your child gives them opportunities to talk and to share, and it gives you opportunities to reassure them that their feelings are normal and acceptable. You can ask questions in a way to get your child talking by using open-ended questions. “What kind of questions do you have?” is very different than “Do you have any questions?” You can also ask questions about specific behavior: “Lately, you have been getting angry about things that do not normally bother you. Why do you think that is?” Finally, provide ways to help your child get rid of unhappy feelings. Some examples include using play or art to express feelings. Give your child some choices. Many children living with CGD tend to think they have little control over their lives. Children need opportunities to make choices—to have power over any part of their lives they can control. This can be done by offering the child choices whenever possible, such as what they would like for dinner, what activity they would like to do that day. Prepare your child for the reactions of others. Children with CGD often do not know how or what to tell others about their illness and symptoms, particularly because many children with CGD can appear to be healthy. You can help by teaching your child a simple and short explanation of the diagnosis. Make sure your child is comfortable explaining what is necessary to keep well. It may help for you and your child to role-play examples of how to answer questions that others might ask and to handle any teasing that might occur. Be sure to include siblings in these discussions as well, as they often experience similar situations with their peers. Look for role models. Although they may appear to be as healthy as other kids, children with CGD may feel different. Being around others with the same diagnosis can often help them in this regard. The Immune Deficiency Foundation (IDF) offers many ways for children and families to interact throughout the year, including family retreat weekends, patient education meetings and a national conference held every other year. You can share and ask questions on IDF’s social network, IDF Friends, www.idffriends.org. You can ask IDF to connect you with a trained peer support volunteer that has experience living with a child who has CGD. CGD can affect your family in many ways. After diagnosis, you may experience increased worry, stress and problems with sleep or appetite, sadness, and anger. Parents may have less time for each other and for social activities they once enjoyed. Planning for fun times may be difficult due to the unpredictability of the child’s illness. And, even though children with CGD can go for long periods of time without having an infection, concerns about CGD are always there. Siblings also may experience a wide range of emotions when their brother or sister is living with CGD. These emotions often include anger, guilt, embarrassment, sadness, loneliness, fear and confusion. Siblings may also experience jealousy if they receive less attention. It is important to talk with children about their feelings and not to simply dismiss them thinking they will “get over it” on their own. 
Families can benefit from strategies that help them to relieve stress, share responsibilities, gain support and explore emotional worries. Approaches include: Help your child lead as normal a life as possible. To whatever extent possible, you should try to treat your child with CGD just like any other child. At the same time, you need to take into consideration your child’s health and the special needs that they have. This can be quite a balancing act, but it is important for parents to encourage their child’s participation in activities that involve other children of the same age. Help your other children cope. A child living with CGD demands a lot of parental attention. It is no wonder that brothers and sisters often feel jealous, angry, and lonely; they may also worry about their sibling and sometimes about their parents. They also might worry that they might get CGD. You should explain the condition to your other children. Try to get them to ask questions and to express their concerns. Parents need to keep open lines of communication with all of their children. It often helps children feel like an important member of the family if they can have a part in caring for their sibling in some way. It is important for parents to spend individual quality time with each child, letting each of them know how much they are loved, valued and appreciated. Make having fun together as a family a priority. Living with CGD may cause the whole family to be under increased stress at times. Getting support from each other may be harder during times of stress, but it is also even more important. Spend time together that is not focused on the condition and make it a priority to carve out time for whole family activities. It is equally as important to have special alone time just for parents and even for one-on-one parent-child dates, as mentioned earlier—each parent spending individual time with each child. For more information about programs and resources for parents and children, contact IDF via Ask IDF or 800-296-4433.
0.7636
FineWeb
["Helping Your Child Cope with CGD", "Supporting Siblings of Children with CGD", "Managing Family Stress and Emotions"]
View A Larger Image Of The Leslie Dame Library Style Multimedia Storage Cabinet (Dark Oak) (Cd Storage Cabinet #3)

Hi there, this blog post is about View A Larger Image Of The Leslie Dame Library Style Multimedia Storage Cabinet (Dark Oak) (Cd Storage Cabinet #3). It is an image/jpeg, and the resolution of this photo is 768 x 768. Its file size is just 57 KB. If you want to download this image to your computer, you may click here. You may also download more photos by clicking the following picture, or see more at: Cd Storage Cabinet.

View A Larger Image Of The Leslie Dame Library Style Multimedia Storage Cabinet (Dark Oak) (Cd Storage Cabinet #3) Pictures Album

There have been many types and styles of storage cabinet sold on the marketplace. Nevertheless, if the cabinets on the market do not match the style you want for your kitchen, ordering from manufacturers or merchants is the easiest way; just be certain to pay attention to the budget that you have set. If you find that the cost exceeds your limit, you can select kitchen cupboards built to reduce the budget. For example, handles made from nickel on the doors of the kitchen units can give a classic look, while bronze handles give a modern touch, and chrome handles are the best choice for a bright look; or you can pick a classy style using crystal material to make the kitchen in your home look more attractive and feel more sophisticated. Kitchen units you assemble yourself can give the same result as those from a cabinet assembly shop but at a cheaper cost; just make sure to prepare all of the required tools along with a guidebook showing how to assemble the kitchen units. The final details might appear simple, but they are a very effective ingredient in displaying the cabinet. Select the handle and knob that best suit your kitchen cabinets' design and style. You have a number of materials to pick from.
0.9777
FineWeb
["Cd Storage Cabinet", "Image View", "Cabinet Style Storage"]
Charlotte Smith (4 May 1749 – 28 October 1806 / London)
Sonnet LXX: On Being Cautioned Against Walking On An Headland Overlooking The Sea, Because It Was Frequented By A Lunatic
Is there a solitary wretch who hies / To the tall cliff, with starting pace or slow, / And, measuring, views with wild and hollow eyes / Its distance from the waves that chide below;
- Hits: number of visitors who read the poem on the given date.
- Posts: number of visitors who sent the poem to a friend (via e-mail) on the given date.
- Dates, poems and poets that have "0" hits may not be shown on the list.
Statistics are generated daily.
0.5048
FineWeb
["Poem Statistics", "Poet Information", "Reader Engagement"]
Grand Lake Fire recently conducted Ice Rescue Certification training. We had a total of 15 students (10 new certifications, 5 refreshers) from 5 different fire departments in the county. Training consisted of 6 hours in the classroom and a 6-hour field day. Under the instruction of Grand Lake Fire’s certified Ice Rescue Instructor Lt. Blake Mertz, the students rotated through the training as shore support, “victim”, and rescuer. Feedback from the students was great; they all had a blast and learned a lot. We would like to congratulate the following attendees for achieving or refreshing their Ice Rescue Certification: Grand Lake Fire: East Grand Fire: Hot Sulphur Springs/Parshall Fire: If you have any questions, please contact Cheryl Dale, GLFPD Public Information Officer.
0.9182
FineWeb
["Ice Rescue Certification", "Fire Department Training", "Emergency Response"]
The Obama Administration promises change, and invites suggestions for their agenda. Since they’ve asked, here are the first things we think the new administration should do to restore our right to travel, and to address the issues of ID requirements and identity-based government surveillance and control of travel and movement. Some of these can be accomplished with the stroke of a pen on Inauguration Day in January, through Presidential proclamations and directives to Executive staff and agencies. Others can be ordered by the President, but will require a slightly longer process to comply with administrative notice and comment requirements for changes to (and, in many cases, withdrawal of) Federal regulations. Others will require legislation, which we urge the Presidential transition team and members of Congress to begin drafting so they can take action early in the new Congressional session. If asked, we would be available to advise and participate in this process. Finally, Senators should question nominees for Executive appointments —especially those nominated to be the new Secretary of Homeland Security and the Administrator of the TSA – about how they will address specific, important issues from the day they take office. These questions are detailed below (and also available here in PDF format). - Reaffirm Executive Order 13107 on Implementation of Human Rights Treaties, and instruct heads of agencies to ensure that it is carried out. As part of his agenda, President-Elect Obama has promised to “strengthen civil rights enforcement,” and this should include enforcement of rights guaranteed by international human rights treaties to which the U.S. is a party. In particular, President-Elect Obama should extend Executive Order 13107 to explicitly mandate consideration of international human rights treaties in Federal agency rulemakings that could implicate rights protected under those treaties — such as the freedom of movement guaranteed by Article 12 of the International Covenant on Civil and Political Rights (ICCPR). - Forbid Executive agencies from placing names on “watchlists”, blacklists used as a basis for denial or limitation of rights, except as directed by order of competent judicial authorities (e.g. through arrest warrants, injunctions, or restraining orders). In his agenda on homeland security, President-Elect Obama says that, “a comprehensive terrorist watch list … must be developed and used in a way that safeguards passengers’ privacy while ensuring the safety of air travel.” We urge President-Elect Obama to recognize, as a law-school instructor in Constitutional law and civil liberties, that passengers’ Constitutional and human rights are not limited to privacy, and that the proper protections for those rights are those provided by judicial due process. President-Elect Obama can, and should, thereby put an immediate end to the issuance of “no-fly” orders by secret administrative process, and limit any “watchlists” to those people whose movements and right to travel have been restricted by court order. - Order the Secretary of State to revert to the issuance of U.S. passports that don’t contain RFID chips. 
- Order the Secretary of Homeland Security to discontinue the compilation and maintenance of government records of the travels and movements of people who are not subjects of criminal investigation or court orders authorizing such surveillance, and the destruction of existing government travel dossiers as soon as this can be approved by the National Archives and Records Administration. This would include records about the routine travels of innocent people being compiled and maintained in the Automated Targeting System (ATS), Secure Flight, Border Crossing Information System (BCIS), Treasury Enforcement Communications System (TECS), Arrival-Departure Information System (ADIS), and other records systems. - Publicly commit the new Administration to respect the conditions placed by Congress on appropriations, notwithstanding the “signing statements” by the previous President that the previous Administration would consider those conditions “advisory” and non-binding. To carry out that commitment, direct the DHS and the TSA not to spend or commit funds (including not to award or extend contracts) for Secure Flight or other aviation passenger screening programs, other than expenditures strictly limited to testing, until the Government Accountability Office has reported that the criteria specified in the 2005 through 2008 Homeland Security Appropriations Acts have been met. By acting promptly to withdraw these ill-conceived and hastily promulgated implementing regulations, or to replace those too far advanced in their implementation process with functionally realistic and privacy-conscious alternatives, the President and Congress can head off the waste of billions of dollars more on these schemes. None of these rules would make us safer, and all of them would make us less free. - Withdraw the REAL-ID regulations. - Withdraw the Secure Flight regulations. - Withdraw the Advanced Passenger Information System (APIS) regulations. - Withdraw the Transportation Worker Identification Credential (TWIC) regulations. - Introduce and make the first legislative priority of the Obama Administration the passage of a “Human Rights Act of 2009” to create a Federal cause of action for violation of the International Covenant on Civil and Political Rights (ICCPR), and to grant the Federal courts jurisdiction over cases arising under this Act. - Repeal the REAL-ID Act. - Unconditionally defund and prohibit the deployment or implementation of Secure Flight. Questions for nominees for the DHS and TSA: “As the nominee for Secretary of Homeland Security or Administrator of the Transportation Security Administration, … - Do you believe that individuals should have a right to travel in the USA? Why or why not? - What substantive (e.g. probable cause) and procedural (e.g. due process and judicial review) standards do you believe should apply to actions by or directed by your agency, or other government agencies, that would restrict that right? - Should individuals in the USA be required to have or display government ID in order to travel by common carrier or on public rights-of-way by plane? By train? By bus? By ship or ferry? By private car? On foot? Why or why not? - Should individuals in the USA be required to obtain government permission in order to travel by common carrier or on public rights-of-way by plane? By train? By bus? By ship or ferry? By private car? On foot? Why or why not? - Should US citizens be required to have a passport and/or obtain government permission in order to leave the USA?
Why or why not? - Should US citizens be required to have a passport and/or obtain government permission in order to return to the USA from abroad? Why or why not? - Should the government maintain records of the travel or movement of people who are not suspected of a crime or subject to a court order authorizing surveillance and logging of their movements? Why or why not? - Should the government mandate the collection or maintenance by travel companies of records of the travel or movement of people who are not suspected of a crime or subject to a court order authorizing surveillance and logging of their movements? Why or why not? - Should travel companies or other third parties to whom individuals are required by the government to provide personal information be free to use, sell, or “share” that information, or should it be protected by law? Why or why not? - What do you think should be done with existing government files of travel records about innocent people? That’s our agenda for the Obama Administration. If you agree, we encourage you to let the President-Elect and the transition team know, as well as your Senators and Representative in the new Congress.
0.6085
FineWeb
["Travel Rights", "Identification Requirements", "Government Surveillance"]
Animal Welfare Policy CanCog Technologies conducts non-invasive research that does not jeopardize the health of our animals. As a company we are committed to providing a comfortable and safe environment that meets or exceeds regulatory standards. We ensure that high quality veterinary care is provided for all animals. The goal of our studies is to improve the health and welfare of animals and people. - Only non-invasive research is conducted by CanCog Technologies. This research may involve collection of blood, urine or fecal samples and non-invasive imaging techniques; no other invasive procedures are performed. No research is conducted that requires euthanasia of the subjects. All research studies embrace the three Rs of research: replacement with non-animal alternatives whenever possible, reduction in the number of animals used and refinement of the study to minimise the potential for distress to the animals involved. Our research facilities strive to meet or exceed the requirements of the Ontario Animals for Research Act and the guidelines of the Canadian Council on Animal Care. All animals have environmental enrichment with toys, group housing and socialisation with people. - Our research is conducted on healthy animals or animals with naturally occurring disease. We do not induce diseases in our animals. A colony of older dogs and cats is maintained specifically for the investigation of aging and associated naturally occurring diseases. Appropriate veterinary care is provided for all animals. Each animal's quality of life is reviewed on at least a yearly basis. Humane euthanasia is provided when an animal's quality of life declines and the underlying condition cannot easily be treated. Some of our research involves clinical trials with pets living at home. These pets are always provided with appropriate veterinary care as required.
0.609
FineWeb
["Animal Welfare", "Research Policies", "Veterinary Care"]
Now that we are experts in bishop vs. knight endgames, it is time to move on to the endgames that feature the knight's rivals - the bishops. We shall have several articles exploring different ideas in bishop endgames, such as exchanges, sacrifices, passed pawns, zugzwang, stalemate, and others. Today's article is the first, and rather than concentrating on a specific topic it gives a general idea of the kinds of positions we will look at in future articles. Here, in a single endgame, one can see a bishop sacrifice and deflection, zugzwang, and the advantages and disadvantages of having the pawns on the same color squares as the bishop. No single topic stands out; it is rather a mix of different ideas. I chose this approach to show that even the most boring-looking endgame has enormous hidden potential and many hidden ideas waiting to be discovered. Let us move on to the three examples. White's position is better because he has more space and because his pawns are on dark squares, where they cannot be attacked by the opponent's bishop. However, it looks like black has built up a defensive line: the king shields the queenside while the bishop protects the kingside. White has limited options for improving the position: the pawn breaks b5 and f5, and a possible bishop exchange after white transfers the bishop to d3. I see no other reasonable plans for white. What is the goal of these plans? The goal is to improve the position, which can translate into better piece placement, a material advantage, or some other type of advantage - the last being the creation of a passed pawn. In the following analysis we will look at all three plans and evaluate the resulting positions to determine whether white achieved what he wanted. The next example shows the disadvantage of having the pawns on the same color as the bishop. The pawns can become vulnerable - they need to be defended by the bishop or the king. And if there are also pawns on the other side of the board, the defending side might end up with too thin a line of defense. Here white temporarily sacrifices a passed pawn to get at black's kingside, after which the winning strategy is just a matter of technique. The example also shows that the defending side can sacrifice the bishop (a variation at move three) but collect all the opponent's pawns, since they cannot be defended by the bishop. So there are advantages as well as disadvantages to having the pawns on the same color as the bishop - one just has to keep an open mind to unusual ideas such as the one given in the variation at move 3. The theme of zugzwang is popular in all kinds of endgames, and in the following example it recurred several times. Black used zugzwang to break down white's defense, although he had to be careful, as with one misstep white could build a fortress. Black's king is far away from the action, but it is needed where it is to stop the h-pawn; similarly, the white king defends against the promotion of the a- and b-pawns. Having an extra pawn greatly helps black, and in the end he manages to convert it. We went over three bishop endgame examples. In the first, white found a strong pawn break that created a passed pawn, and the game was decided more or less on the spot. The highlight of the second is the bishop sacrifice, after which black stands no worse. The third showed how important the idea of zugzwang is. After this introductory article you should be familiar with what to expect from bishop endgames.
In the next article we will look more closely at exchanges in bishop endgames.
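For readers who want to explore positions like these on their own, here is a minimal sketch using the python-chess library (an assumption on my part; the article names no tools, and its original diagrams are not reproduced here). The FEN position below is purely illustrative, not one of the article's three examples. It demonstrates a cheap first probe for zugzwang: counting how many legal moves each side would have, since a side whose safe moves are running out is a zugzwang candidate.

```python
# Minimal sketch, assuming python-chess is installed (pip install chess).
# The position is illustrative only: White Ke2 and Be3 vs. Black Ke6.
import chess

board = chess.Board("8/8/4k3/8/8/4B3/4K3/8 w - - 0 1")

def mobility(position: chess.Board, color: chess.Color) -> int:
    """Count legal moves for `color` by temporarily giving it the move."""
    copy = position.copy()
    copy.turn = color
    return copy.legal_moves.count()

print("White mobility:", mobility(board, chess.WHITE))
print("Black mobility:", mobility(board, chess.BLACK))

# List the bishop's available squares. In the endgames discussed above, a
# bishop tied to defending pawns on its own color may have very few left.
bishop_moves = [m for m in board.legal_moves
                if board.piece_type_at(m.from_square) == chess.BISHOP]
print("Bishop moves:", [board.san(m) for m in bishop_moves])
```

Running the script prints each side's mobility and the bishop's free squares; in a real analysis you would repeat the count after each candidate move to see whose options shrink first.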
0.5778
FineWeb
["Bishop Endgames", "Zugzwang", "Pawn Structure"]