Ep#30 How To Apply Full-Fashioned Nylons Blindfolded TMJS








TheMaryJaneStyle How To Walk in The Highest of High Heels Ep#29

Woman's yellow silk shoes, 1760s
High-heeled footwear is footwear that raises the heel of the wearer’s foot significantly higher than the toes. When both the heel and the toes are raised equal amounts,
as in a platform shoe, it is technically not considered to be a high heel; however, there are also high-heeled platform shoes. High heels tend to give the aesthetic illusion
of longer, more slender legs. High heels come in a wide variety of styles, and the heels are found in many different shapes, including stiletto, pump (court shoe), block,
tapered, blade, and wedge.

 

According to high-fashion shoe websites like Jimmy Choo and Gucci, a “low heel” is considered less than 2.5 inches (6.4 centimeters), while heels between 2.5 and 3.5 inches
(6.4 and 8.9 cm) are considered “mid heels”, and anything over that is considered a “high heel”. The apparel industry would appear to take a simpler view; the term
“high heels” covers heels ranging from 2 to 5 inches (5.1 to 12.7 cm) or more. Extremely high-heeled shoes, such as those exceeding 6 inches (15 cm), strictly speaking,
are no longer considered apparel but rather something akin to “jewelry for the feet”. They are worn for display or the enjoyment of the wearer.
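For readers who like to see those thresholds side by side, here is a small illustrative Python sketch of the height classification described above. It only encodes the fashion-site cut-offs quoted in this paragraph; the function name and the "display piece" label for shoes over about 6 inches are our own framing, not an industry standard.

```python
def classify_heel(height_in: float) -> str:
    """Classify a heel by its height in inches, using the thresholds
    quoted above from high-fashion shoe sites (illustrative only)."""
    if height_in < 2.5:
        return "low heel"        # under 2.5 in (6.4 cm)
    elif height_in <= 3.5:
        return "mid heel"        # 2.5 to 3.5 in (6.4 to 8.9 cm)
    elif height_in <= 6.0:
        return "high heel"       # above 3.5 in
    return "display piece"       # over ~6 in, worn for display rather than everyday wear


print(classify_heel(3.0))  # mid heel
print(classify_heel(5.0))  # high heel
```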

Although high heels are now usually worn only by girls and women, there are shoe designs worn by both genders that have elevated heels, including cowboy boots
and Cuban heels. In previous ages, men also wore high heels.

In the ninth century, Persian horseback warriors wore an extended heel designed to keep the foot from sliding out of the stirrup. The heel also steadied riders when they needed to stand up in the stirrups to shoot arrows.

Stiletto heel

A shoe with a stiletto heel
A stiletto heel is a long, thin, high heel found on some boots and shoes, usually for women.

It is named after the stiletto dagger, the phrase being first recorded in the early 1930s. Stiletto heels may vary in height from 2.5 centimeters (1 inch) to 25 cm (10 inches) or more if a platform sole is used, and are sometimes defined as having a diameter at the ground of less than 1 cm (about 0.4 inch). Stiletto-style heels 5 cm (2.0 in) or shorter are called kitten heels.

Not all high slim heels merit the description stiletto. The extremely slender original Italian-style stiletto heels of the late 1950s and very early 1960s were no more
than 5 mm (0.20 in) in diameter for much of their length, although the heel sometimes flared out a little at the top-piece (tip). After their demise in the mid-late 1960s,
such slender heels were difficult to find until recently due to changes in the way heels were mass-produced. A real stiletto heel has a stem of solid steel or alloy.
The more usual method of mass-producing high shoe heels, i.e. molded plastic with an internal metal tube for reinforcement, does not achieve the true stiletto shape.

A pair of shoes with 12 cm stiletto heels
Relatively thin high heels were certainly around in the late 19th century, as numerous fetish drawings attest. Firm photographic evidence exists in the form of photographs
of Parisian singer Mistinguett from the 1940s. These shoes were designed by Andre Perugia, who began designing shoes in 1906. It seems unlikely that he invented the stiletto,
but he is probably the first firmly documented designer of the high, slim heel. The word is derived from the stiletto dagger, a long thin blade similar in profile to the heel of the shoe. Its usage in footwear first appeared in print in the New Statesman magazine in 1959: "She came … forward, her walk made lopsided by the absence of one heel of the stilettos".

High-heeled shoes had earlier been worn by both male and female courtiers. The stiletto heel arrived with the advent of technology using a supporting metal shaft or stem embedded in the heel, instead of wood or other, weaker materials that required a wide heel. This revival of the opulent heel style can be attributed to the designer Roger Vivier, and such designs became very popular in the 1950s.

 

As time went on, stiletto heels became known more for their erotic nature than for their ability to add height. Stiletto heels are a common fetish item. As a fashion
item, their popularity has changed over time. After an initial wave of popularity in the 1950s, they reached their most refined shape in the early 1960s, when the toes of
the shoes which bore them became as slender and elongated as the stiletto heels themselves. As a result of the overall sharpness of outline, it was customary for women to
refer to the whole shoe as a “stiletto”, not just the heel, via synecdoche (pars pro toto). Although they officially faded from the scene after the Beatle era began, their
popularity continued at street level, and women stubbornly refused to give them up even after they could no longer readily find them in the mainstream shops. A version of
the stiletto heel was reintroduced in 1974 by Manolo Blahnik, who dubbed his “new” heel the “Needle”. Similar heels were stocked at the big Biba store in London, by Russell
& Bromley and by smaller boutiques. Old, unsold stocks of pointed-toe stilettos and contemporary efforts to replicate them (lacking the true stiletto heel because of changes
in the way heels were by then being mass-produced) were sold in street fashion markets and became popular with punks and with other fashion “tribes” of the late 1970s until
supplies of the inspirational original styles dwindled in the early 1980s. Subsequently, round-toe shoes with slightly thicker (sometimes cone-shaped) semi-stiletto heels,
often very high in an attempt to convey slenderness, were frequently worn at the office with wide-shouldered power suits. The style survived through much of the 1980s but
almost completely disappeared during the 1990s, when professional and college-age women took to wearing shoes with thick, block heels. The slender stiletto heel staged a
major comeback after 2000 when young women adopted the style for dressing up office wear or adding a feminine touch to casual wear, like jeans.

 

Stiletto heels are particularly associated with the image of the femme fatale. They are often considered to be a seductive item of clothing, and often feature in
popular culture in this context.

History
Medieval Europeans wore wooden-soled patten shoes, which were ancestors to contemporary high heels. Elizabeth Semmelhack, curator at Toronto's Bata Shoe Museum,
traces the high heel to Persian horse riders in the Near East who used high heels for functionality, because they helped hold the rider’s foot in stirrups.
She states that this footwear is depicted on a 9th-century ceramic bowl from Persia.

 

It is sometimes suggested that raised heels were a response to the problem of the rider’s foot slipping forward in stirrups while riding.
The “rider’s heel”, approximately 1 1⁄2 inches (3.8 cm) high, appeared in Europe around 1600. The leading edge was canted forward to help grip the stirrup, and the trailing
edge was canted forward to prevent the elongated heel from catching on underbrush or rock while backing up, such as in on-foot combat. These features are evident today
in riding boots, notably cowboy boots.

Ancient Egypt

Early depictions of high heels can be seen on ancient Egyptian murals dating back to 3500 BC. These murals depict Egyptian nobles wearing heels to set them apart from the lower classes, who normally went barefoot. Heeled shoes were worn by both men and women, most commonly for ceremonial purposes. However, high heels also served a practical purpose for Egyptian butchers, who wore them in order to walk over the bloodied bodies of animal carcasses. In ancient Egypt, heels were leather pieces held together by lacing to form the symbol of the ankh, signifying life.

Ancient Greece and Rome

Platform sandals called "kothorni" or "buskins", with high wooden or cork soles, were worn during the ancient Greek and Roman eras. They were particularly popular among actors, who wore them to signal the social class and importance of each character. In ancient Rome, where the sex trade was legal, high heels were used to identify those in the trade to potential clients, and high heels became associated with prostitution.

Contemporary scene

Since the Second World War, high heels have fallen in and out of fashion several times, most notably in the late 1990s, when lower heels and even flats predominated[citation needed]. Lower heels were preferred during the late 1960s and early 1970s as well, but higher heels returned in the late 1980s and early 1990s.
The shape of the fashionable heel has also changed from block (1970s) to tapered (1990s), and stiletto (1950s, early 1960s, 1980s, and post-2000).

Today, high heels are typically worn, with heights varying from a kitten heel of 1.5 inches (3.8 cm) to a stiletto heel (or spike heel) of 5 inches (13 cm) or more.
Extremely high-heeled shoes, such as those higher than 6 inches (15 cm), are normally worn only for aesthetic reasons and are not considered practical. Court shoes
are conservative styles and often used for work and formal occasions, while more adventurous styles are common for evening wear and dancing. High heels have seen
significant controversy in the medical field lately, with many podiatrists seeing patients whose severe foot problems have been caused almost exclusively by high-heel wear.

The wedge is another heel style, in which the heel takes the form of a wedge that continues all the way to the toe of the shoe.

 

Black-and-white stilettos

Negative effects

The case against wearing high heels is based almost exclusively on health and practicality reasons, including that they:
can cause foot and tendon pain;
increase the likelihood of sprains and fractures;
make calves look more rigid and sinewy;
can create foot deformities, including hammer toes and bunions;
can cause an unsteady gait;
can shorten the wearer's stride;
can render the wearer unable to run;
can exacerbate lower back pain;
alter forces at the knee so as to predispose the wearer to degenerative changes in the knee joint;
can, with frequent wear, result in a higher incidence of degenerative joint disease of the knees, because they decrease the normal rotation of the foot, which puts more rotational stress on the knee;
can cause damage to soft floors if the heels are thin or metal-tipped.
Dress and stilettos on full-fashioned nylons

Nylon feet
Positive effects

 

The case for wearing high heels is based almost exclusively on aesthetic reasons, including that they:
change the angle of the foot with respect to the lower leg, which accentuates the appearance of the calves;
change the wearer's posture, requiring a more upright carriage and altering the gait in what is considered a seductive fashion;
make the wearer appear taller;
make the legs appear longer;
make the foot appear smaller;
make the toes appear shorter;
make the arches of the feet higher and better defined;
according to a single line of research, may improve the muscle tone of some women's pelvic floor, thus possibly reducing female incontinence, although these results have been disputed;
offer practical benefits for people of short stature, such as sitting upright with the feet on the floor rather than suspended, and reaching items on high shelves.
Nylons off
During the 16th century, European royalty, such as Catherine de' Medici and Mary I of England, began wearing high-heeled shoes to make themselves look taller or larger than life. By 1580, men also wore them, and a person with authority or wealth was often referred to as "well-heeled".

In modern society, high-heeled shoes are a part of women's fashion, perhaps more as a sexual prop. High heels force the body to tilt, emphasizing the buttocks and breasts. They also emphasize the role of the feet in sexuality, and the act of putting on stockings or high heels is often seen as an erotic act. This desire to look sexy and erotic continues to drive women to wear high-heeled shoes, despite the significant pain they can cause in the ball of the foot, as well as bunions, corns, and hammer toes. A survey conducted by the American Podiatric Medical Association found that some 42% of women admitted they would wear a shoe they liked even if it caused them discomfort.


Types of high heels

Types of heels found on high-heeled footwear include:
cone: a round heel that is broad where it meets the sole of the shoe and noticeably narrower at the point of contact with the ground
kitten: a short, slim heel with maximum height under 2 inches and diameter of no more than 0.4 inch at the point of contact with the ground
prism: three flat sides that form a triangle at the point of contact with the ground
puppy: thick square block heel approximately 2 inches in diameter and height
spool or louis: broad where it meets the sole and at the point of contact with the ground; noticeably narrower at the midpoint between the two
stiletto: a tall, slim heel with minimum height of 2 inches and diameter of no more than 0.4 inch at the point of contact with the ground
wedge: occupies the entire space under the arch and heel portions of the foot.

 

Men and heels

The Vision of Saint Eustace, Pisanello, 1438–1442. Rider wearing high heels.
Elizabeth Semmelhack, curator for the Bata Shoe Museum, traces the high heel to male horse-riding warriors in the Middle East who used high heels for functionality,
because they help hold the rider’s foot in stirrups. She states that the earliest high heel she has seen is depicted on a 9th-century AD ceramic bowl from Persia.

Since the late 18th century, men's shoes have featured lower heels than most women's shoes. Some attribute this to Napoleon, who disliked high heels; others to the general trend of minimizing non-functional items in men's clothing. Cowboy boots remain a notable exception, and they continue to be made with a taller riding heel.
The two-inch Cuban heel featured in many styles of men’s boot derives its heritage from certain Latino roots, most notably various forms of Spanish and Latin American dance,
including Flamenco, as most recently evidenced by Joaquín Cortés. Cuban heels were first widely popularized, however, by Beatle boots, as worn by the English rock group
The Beatles during their introduction to the United States. Some say this saw the re-introduction of higher-heeled footwear for men in the 1960s and 1970s
(in Saturday Night Fever, John Travolta's character wears a Cuban heel in the opening sequence). The singers Prince and Elton John are known for wearing high heels. Bands such as Mötley Crüe and Sigue Sigue Sputnik predominantly wore high heels during the 1980s. Current well-known male heel wearers include Prince, Justin Tranter, lead singer of Semi Precious Weapons, and Bill Kaulitz, lead singer of Tokio Hotel. The R&B singer Miguel was wearing his trademark Cuban heels during the "legdrop" incident at the 2013 Billboard Music Awards. Winklepicker boots often feature a Cuban heel.

Accessories

The stiletto of certain kinds of high heels can damage some types of floors. Such damage can be prevented by heel protectors, also called covers, guards, or taps,
which fit over the stiletto tips to keep them from direct, marring contact with delicate surfaces, such as linoleum (rotogravure) or urethane-varnished wooden floors.
Heel protectors are widely used in ballroom dancing, as such dances are often held on wooden flooring. The bottom of most heels usually has a plastic or metal heel tip
that wears away with use and can be easily replaced. Dress heels (high-heeled shoes with elaborate decoration) are worn for formal occasions.

 

Other specialized heel protectors make it feasible to walk on grass or soft earth (though not mud, sand, or water) during outdoor events, removing the need for specialized carpeting or flooring on an outdoor or soft surface. Certain heel protectors also improve the balance of the shoe and reduce the strain that high-heeled or stiletto shoes can place on the foot.

Health effects
Foot and tendon problems

High-heeled shoes slant the foot forward and down while bending the toes up. The more the feet are forced into this position, the more it may cause the gastrocnemius muscle
(part of the calf muscle) to shorten. This may cause problems when the wearer chooses lower heels or flat-soled shoes. When the foot slants forward, a much greater weight
is transferred to the ball of the foot and the toes, increasing the likelihood of damage to the underlying soft tissue that supports the foot. In many shoes, style dictates
function, either compressing the toes or forcing them together, possibly resulting in blisters, corns, hammer toes, bunions (hallux valgus), Morton’s neuroma, plantar
fasciitis and many other medical conditions, most of which are permanent and require surgery to alleviate the pain. High heels, because they tip the foot forward,
put pressure on the lower back by making the rump push outwards, crushing the lower back vertebrae and contracting the muscles of the lower back.

 

If the wearer believes it is not possible to avoid high heels altogether, it is suggested that at least a third of the time spent on the feet be spent in contour-supporting "flat" shoes (such as exercise sandals) or well-cushioned sneaker-type shoes, saving high heels for special occasions. If heels are a necessity of the job, as for a lawyer, it is recommended to limit the height of the heel worn or, when in court, to remain seated as much as possible to avoid damage to the feet. It is also recommended to wear a belt with heels where possible, because the elevation of the foot and extension of the leg can cause pants to fit more loosely than intended. In winter, seat warmers can also be used with heels to relax and loosen muscles throughout the body.

One of the most critical problems of high-heeled shoe design is a properly constructed toe box. Improper construction here can cause the most damage to the foot. Toe boxes that are too narrow force the toes to be crammed too close together. Ensuring that room exists for the toes to assume a normal separation, so that wearing high heels remains an option rather than a debilitating practice, is an important issue in improving the wearability of high-heeled fashion shoes.

Wide heels do not necessarily offer more stability, and any raised heel with too much width, such as that found in "blade-heeled" or "block-heeled" shoes, induces unhealthy side-to-side torque on the ankles with every step, stressing them unnecessarily while creating additional impact on the balls of the feet. Thus, the best design for a high heel is one with a narrower width, where the heel sits closer to the front and more solidly under the ankle, where the toe box provides enough room for the toes, and where forward movement of the foot in the shoe is kept in check by material snug across the instep rather than by the toes being rammed forward and crushed into the front of the toe box.

Pelvic floor muscle tone

A 2008 study by Cerruto et al. reported results that suggest that wearing high heels may improve the muscle tone of a woman’s pelvic floor. The authors speculated that this
could have a beneficial effect on female stress urinary incontinence.

 

Feminist attitudes

The high heel has been a central battleground of sexual politics ever since the emergence of the women's liberation movement of the 1970s. Many second-wave feminists rejected what they regarded as constricting standards of female beauty, created for the subordination and objectification of women and self-perpetuated by reproductive competition and women's own aesthetics.

The British-American journalist Hadley Freeman wrote, “For me, high heels are just fancy foot binding with a three-figure price tag”, although she supported the
freedom to choose what to wear and stated that “one person’s embrace of their sexuality is another person’s patriarchal oppression.”

 

 

Tall stilettos

TheMaryJaneStyle Ep.27 Ch.1-3 “History of Cameras”

Chapter 1: History

The history of the camera can be traced much further back than the introduction of photography. Cameras evolved from the camera obscura, and continued to change through many generations of photographic technology, including daguerreotypes, calotypes, dry plates, film, and digital cameras.

The camera obscura
A camera obscura (Latin: “dark chamber”) is an optical device that led to photography and the photographic camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, rotated 180 degrees (thus upside-down), but with color and perspective preserved. The image can be projected onto paper, and can then be traced to produce a highly accurate representation. The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.[1]
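As a rough illustration of the geometry involved (not part of the original camera obscura literature), the size of the projected image follows from similar triangles: image size = object size × (hole-to-wall distance ÷ object-to-hole distance). A minimal Python sketch with made-up numbers:

```python
def projected_size(object_size_m: float,
                   object_distance_m: float,
                   wall_distance_m: float) -> float:
    """Height of the inverted image a camera obscura casts on its back wall,
    from similar triangles: image / object = wall distance / object distance."""
    return object_size_m * wall_distance_m / object_distance_m


# A 10 m tall facade, 50 m from the pinhole, projected inside a room 2 m deep:
print(projected_size(10.0, 50.0, 2.0))  # 0.4 -> a 0.4 m tall, upside-down image
```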

Using mirrors, as in an 18th-century overhead version, it is possible to project a right-side-up image. Another more portable type is a box with an angled mirror projecting onto tracing paper placed on the glass top, the image being upright as viewed from the back.

As the pinhole is made smaller, the image gets sharper, but the projected image becomes dimmer. With too small a pinhole, however, the sharpness worsens, due to diffraction. Most practical camera obscuras use a lens rather than a pinhole (as in a pinhole camera) because it allows a larger aperture, giving a usable brightness while maintaining focus.
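The sweet spot between the geometric blur of a large hole and the diffraction blur of a tiny one is often estimated with the Rayleigh-style rule of thumb d ≈ 1.9·√(f·λ), where f is the pinhole-to-screen distance and λ is the wavelength of light. The snippet below is only a rough sketch under that assumed formula, not a statement about any historical device:

```python
import math


def sharpest_pinhole_mm(screen_distance_mm: float,
                        wavelength_nm: float = 550.0) -> float:
    """Approximate diameter of the sharpest pinhole for a given
    pinhole-to-screen distance, using d = 1.9 * sqrt(f * lambda)."""
    wavelength_mm = wavelength_nm * 1e-6  # nanometres to millimetres
    return 1.9 * math.sqrt(screen_distance_mm * wavelength_mm)


# For a box 100 mm deep and green light (~550 nm):
print(round(sharpest_pinhole_mm(100.0), 3))  # roughly 0.446 mm
```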
An artist using an 18th-century camera obscura to trace an image
Photographic cameras were a development of the camera obscura, a device possibly dating back to the ancient Chinese[1] and ancient Greeks,[2][3] which uses a pinhole or lens to project an image of the scene outside upside-down onto a viewing surface.

An Arab physicist, Ibn al-Haytham, published his Book of Optics in 1021 AD. He created the first pinhole camera after observing how light traveled through a window shutter. Ibn al-Haytham realized that smaller holes would create sharper images. Ibn al-Haytham is also credited with inventing the first camera obscura.[4]

On 24 January 1544, mathematician and instrument maker Reinerus Gemma Frisius of Leuven University used one to watch a solar eclipse, publishing a diagram of his method in De Radio Astronomico et Geometrico the following year.[5] In 1558 Giovanni Battista della Porta was the first to recommend the method as an aid to drawing.[6]
Early fixed images
The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce,[7][8] using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a sliding wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea.[9] The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived.
Before the invention of photographic processes there was no way to preserve the images produced by these cameras apart from manually tracing them. The earliest cameras were room-sized, with space for one or more people inside; these gradually evolved into more and more compact models, such that by Niépce's time portable handheld cameras suitable for photography were readily available. The first camera that was small and portable enough to be practical for photography was envisioned by Johann Zahn in 1685, though it would be almost 150 years before such an application was possible.
The history of photography has roots in remote antiquity with the discovery of the principle of the camera obscura and the observation that some substances are visibly altered by exposure to light. As far as is known, nobody thought of bringing these two phenomena together to capture camera images in permanent form until around 1800, when Thomas Wedgwood made the first reliably documented although unsuccessful attempt. In the mid-1820s, Nicéphore Niépce succeeded, but several days of exposure in the camera were required and the earliest results were very crude. Niépce’s associate Louis Daguerre went on to develop the daguerreotype process, the first publicly announced photographic process, which required only minutes of exposure in the camera and produced clear, finely detailed results. It was commercially introduced in 1839, a date generally accepted as the birth year of practical photography.[1]
Daguerreotypes and calotypes
After Niépce’s death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839.[10] Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many seconds—or minutes—as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard.[11]
Daguerreotype

Daguerreotype of Louis Daguerre in 1844 by Jean-Baptiste Sabatier-Blot
The daguerreotype (/dəˈɡɛrɵtaɪp/; French: daguerréotype) process, or daguerreotypy, was the first publicly announced photographic process, and for nearly twenty years, it was the one most commonly used. It was invented by Louis-Jacques-Mandé Daguerre and introduced worldwide in 1839.[1][2][3] By 1860, new processes which were less expensive and produced more easily viewed images had almost completely replaced it. During the past few decades, there has been a small-scale revival of daguerreotypy among photographers interested in making artistic use of early photographic processes.

To make a daguerreotype, the daguerreotypist polished a sheet of silver-plated copper to a mirror finish; treated it with fumes that made its surface light-sensitive; exposed it in a camera for as long as was judged to be necessary, which could be as little as a few seconds for brightly sunlit subjects or much longer with less intense lighting; made the resulting latent image on it visible by fuming it with mercury vapor; removed its sensitivity to light by liquid chemical treatment; rinsed and dried it; then sealed the easily marred result behind glass in a protective enclosure.

Viewing a daguerreotype is unlike looking at any other type of photograph. The image does not sit on the surface of the metal, but appears to be floating in space, and the illusion of reality, especially with examples that are sharp and well exposed, is unique to the process.

The image is on a mirror-like silver surface, normally kept under glass, and will appear either positive or negative, depending on the angle at which it is viewed, how it is lit and whether a light or dark background is being reflected in the metal. The darkest areas of the image are simply bare silver; lighter areas have a microscopically fine light-scattering texture. The surface is very delicate, and even the lightest wiping can permanently scuff it. Some tarnish around the edges is normal, and any treatment to remove it should be done only by a specialized restorer.

Several types of antique photographs, most often ambrotypes and tintypes, but sometimes even old prints on paper, are very commonly misidentified as daguerreotypes, especially if they are in the small, ornamented cases in which daguerreotypes made in the US and UK were usually housed. The name “daguerreotype” correctly refers only to one very specific image type and medium, the product of a process that was in wide use only from the early 1840s to the late 1850s.

History
Since the Renaissance era, artists and inventors had searched for a mechanical method of capturing visual scenes.[4] Previously, using the camera obscura, artists would manually trace what they saw, or use the optical image in the camera as a basis for solving the problems of perspective and parallax, and deciding color values. The camera obscura’s optical reduction of a real scene in three-dimensional space to a flat rendition in two dimensions influenced western art, so that at one point, it was thought that images based on optical geometry (perspective) belonged to a more advanced civilization. Later, with the advent of Modernism, the absence of perspective in oriental art from China, Japan and in Persian miniatures was revalued.

In the early seventeenth century, the Italian physician and chemist Angelo Sala wrote that powdered silver nitrate was blackened by the sun, but did not find any practical application of the phenomenon.

Previous discoveries of photosensitive methods and substances contributed to the development of the daguerreotype, including silver nitrate by Albertus Magnus in the 13th century,[5] a silver-and-chalk mixture by Johann Heinrich Schulze in 1724,[6][7] and Joseph Niépce's bitumen-based heliography in 1822.[4][8]

The first reliably documented attempt to capture the image formed in a camera obscura was made by Thomas Wedgwood as early as the 1790s, but according to an 1802 account of his work by Sir Humphry Davy:

“The images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver. To copy these images was the first object of Mr. Wedgwood in his researches on the subject, and for this purpose he first used the nitrate of silver, which was mentioned to him by a friend, as a substance very sensible to the influence of light; but all his numerous experiments as to their primary end proved unsuccessful.”[9]

Development in France
In 1829 French artist and chemist Louis Jacques-Mandé Daguerre, contributing a cutting edge camera design, partnered with Niépce, a leader in photochemistry, to further develop their technologies.[4] The two men came into contact through their optician, Chevalier, who supplied lenses for their camera obscuras.

Niépce’s aim originally had been to find a method to reproduce prints and drawings for lithography. He had started out experimenting with light sensitive materials and had made a contact print from a drawing and then went on to successfully make the first photomechanical record of an image in a camera obscura—the world’s first photograph. Niépce’s method was to coat a pewter plate with bitumen of Judea (asphalt) and the action of the light differentially hardened the bitumen. The plate was washed with a mixture of oil of lavender and turpentine leaving a relief image. Niépce called his process heliography and the exposure for the first successful photograph was eight hours.

Early experiments required hours of exposure in the camera to produce visible results. Modern photo-historians consider the stories of Daguerre discovering mercury development by accident because of a bowl of mercury left in a cupboard, or, alternatively, a broken thermometer, to be spurious.[10] However, there is another story of a fortunate accident, related by Louis Figuier, of a silver spoon lying on an iodized silver plate that left a perfect image of itself on the plate through the action of light.[11] Noticing this, Daguerre wrote to Niépce on 21 May 1831 suggesting the use of iodized silver plates as a means of obtaining light images in the camera. Letters from Niépce to Daguerre dated 24 June and 8 November 1831 show that Niépce was unsuccessful in obtaining satisfactory results following Daguerre's suggestion, although he had produced a negative on an iodized silver plate in the camera. Niépce's letters to Daguerre dated 29 January and 3 March 1832 show that the use of iodized silver plates was due to Daguerre and not Niépce.[12]

Jean-Baptiste Dumas, who was president of the National Society for the Encouragement of Science[13] and a chemist, put his laboratory at Daguerre’s disposal. According to Austrian chemist Josef Maria Eder, Daguerre was not versed in chemistry and it was Dumas who suggested Daguerre use sodium hyposulfite, discovered by Herschel in 1819, as a fixer to dissolve the unexposed silver salts.[7][12]

First mention in print (1835) and public announcement (1839)
At the end of a review of one of Daguerre's Diorama spectacles (a Diorama painting of a landslide that occurred in the Vallée de Goldau) in the Journal des artistes on 27 September 1835,[14] a paragraph tacked on to the end of the review made passing mention of a rumour going around the Paris studios about Daguerre's attempts to make a visual record on metal plates of the fleeting image produced by the camera obscura:

“It is said that Daguerre has found the means to collect, on a plate prepared by him, the image produced by the camera obscura, in such a way that a portrait, a landscape, or any view, projected upon this plate by the ordinary camera obscura, leaves an imprint in light and shade there, and thus presents the most perfect of all drawings … a preparation put over this image preserves it for an indefinite time … the physical sciences have perhaps never presented a marvel comparable to this one.”[15]

A further clue to fixing the date of invention of the process is that when the Paris correspondent of the London periodical The Athenaeum reported the public announcement of the daguerreotype in 1839, he mentioned that the daguerreotypes now being produced were considerably better than the ones he had seen “four years earlier”.

François Arago announced the daguerreotype process at a joint meeting of the French Academy of Sciences and the Académie des Beaux-Arts on 9 January 1839. Daguerre was present, but complained of a sore throat. Later that year William Fox Talbot announced his silver chloride "sensitive paper" process.[16] Together, these announcements led commentators to choose 1839 as the year photography was born, or made public, although Daguerre had of course been producing daguerreotypes since 1835 and had kept the process secret.[17]

Daguerre and Niépce had together signed an agreement in which remuneration for the invention would be paid for by subscription. However, the campaign they launched to finance the invention failed. François Arago, whose views on the system of patenting inventions can be gathered from speeches he made later in the House of Deputies, apparently thought the English patent system had advantages over the French one.

Daguerre did not patent and profit from his invention in the usual way. Instead, it was arranged that the French government would acquire the rights in exchange for a lifetime pension. The government would then present the daguerreotype process "free to the world" as a gift, which it did on 19 August 1839. However, five days before this, Miles Berry, a patent agent acting on Daguerre's behalf, filed for patent No. 8194 of 1839: "A New or Improved Method of Obtaining the Spontaneous Reproduction of all the Images Received in the Focus of the Camera Obscura." The patent applied to "England, Wales, and the town of Berwick-upon-Tweed, and in all her Majesty's Colonies and Plantations abroad."[18][19] This was the usual wording of English patent specifications before 1852. It was only after the 1852 Act, which unified the patent systems of England, Ireland and Scotland, that a single patent protection was automatically extended to the whole of the British Isles, including the Channel Isles and the Isle of Man. Richard Beard bought the patent rights from Miles Berry, and also obtained a Scottish patent, which he apparently did not enforce. The United Kingdom and the "Colonies and Plantations abroad" therefore became the only places where a license was legally required to make and sell daguerreotypes.[19][20]

Much of Daguerre’s early work was destroyed when his home and studio caught fire on 8 March 1839, while the painter Samuel Morse was visiting from the US.[21][page needed] Malcolm Daniel points out that “fewer than twenty-five securely attributed photographs by Daguerre survive—a mere handful of still lifes, Parisian views, and portraits from the dawn of photography.”[22]

Calotype or talbotype is an early photographic process introduced in 1841 by William Henry Fox Talbot,[1] using paper[2] coated with silver iodide. The term calotype comes from the Greek καλός (kalos), “beautiful”, and τύπος (tupos), “impression”.

Late 19th century studio camera
Dry plates
Collodion dry plates had been available since 1855, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called “instantaneous” snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal “candid” portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even “detective cameras” disguised as pocket watches, hats, or other objects.

The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century.[11]

Kodak and the birth of film

Kodak No. 2 Brownie box camera, circa 1910
The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1889. His first camera, which he called the “Kodak,” was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras.

In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s.

Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool.

Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates.

Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century, when electronic photography replaced them.

The metal-based daguerreotype process soon had some competition from the paper-based calotype negative and salt print processes invented by Henry Fox Talbot. Subsequent innovations reduced the required camera exposure time from minutes to seconds and eventually to a small fraction of a second; introduced new photographic media which were more economical, sensitive or convenient, including roll films for casual use by amateurs; and made it possible to take pictures in natural color as well as in black-and-white.

The commercial introduction of computer-based electronic digital cameras in the 1990s soon revolutionized photography. During the first decade of the 21st century, traditional film-based photochemical methods were increasingly marginalized as the practical advantages of the new technology became widely appreciated and the image quality of moderately priced digital cameras was continually improved.

Etymology
The coining of the word “photography” is usually attributed to Sir John Herschel in 1839. It is based on the Greek φῶς (phos), (genitive: phōtós) meaning “light”, and γραφή (graphê), meaning “drawing, writing”, together meaning “drawing with light”.[2]

Technological background

A camera obscura used for drawing images
Photography is the result of combining several different technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Ti and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[3][4] In the 6th century CE, the Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments.[5]

Ibn al-Haytham (Alhazen) (965 in Basra – c. 1040 in Cairo) studied the camera obscura and pinhole camera,[4][6] Albertus Magnus (1193/1206–80) discovered silver nitrate, and Georges Fabricius (1516–71) discovered silver chloride. Daniel Barbaro described a diaphragm in 1568. Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694. The novel Giphantie (by the French Tiphaigne de la Roche, 1729–74) described what could be interpreted as photography.

Development of chemical photography
Monochrome process

Earliest known surviving heliographic engraving, 1825, printed from a metal plate made by Joseph Nicéphore Niépce with his "heliographic process".[7] The plate was exposed under an ordinary engraving, copying it by photographic means. This was a step towards the first permanent photograph from nature taken with a camera obscura.
Around the year 1800, Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow-copies of paintings on glass, it was reported in 1802 that “[t]he images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver.” The shadow images eventually darkened all over because “[n]o attempts that have been made to prevent the uncoloured part of the copy or profile from being acted upon by light have as yet been successful.”[8] Wedgwood may have prematurely abandoned his experiments due to frail and failing health; he died aged 34 in 1805.

“Boulevard du Temple”, a daguerreotype made by Louis Daguerre in 1838, is generally accepted as the earliest photograph to include people. It is a view of a busy street, but because the exposure time was at least ten minutes the moving traffic left no trace. Only the two men near the bottom left corner, one apparently having his boots polished by the other, stayed in one place long enough to be visible.
In 1816 Nicéphore Niépce, using paper coated with silver chloride, succeeded in photographing the images formed in a small camera, but the photographs were negatives, darkest where the camera image was lightest and vice versa, and they were not permanent in the sense of being reasonably light-fast; like earlier experimenters, Niépce could find no way to prevent the coating from darkening all over when it was exposed to light for viewing. Disenchanted with silver salts, he turned his attention to light-sensitive organic substances.[9]

Robert Cornelius, self-portrait, Oct. or Nov. 1839, approximate quarter plate daguerreotype. The back reads, “The first light picture ever taken.”

One of the oldest photographic portraits known, made by John William Draper of New York in 1839[10] or 1840, of his sister, Dorothy Catherine Draper.
The oldest surviving permanent photograph of the image formed in a camera was created by Niépce in 1826 or 1827.[1] It was made on a polished sheet of pewter and the light-sensitive substance was a thin coating of bitumen, a naturally occurring petroleum tar, which was dissolved in lavender oil, applied to the surface of the pewter and allowed to dry before use.[11] After a very long exposure in the camera (traditionally said to be eight hours, but in fact probably several days),[12] the bitumen was sufficiently hardened in proportion to its exposure to light that the unhardened part could be removed with a solvent, leaving a positive image with the light regions represented by hardened bitumen and the dark regions by bare pewter.[11] To see the image plainly, the plate had to be lit and viewed in such a way that the bare metal appeared dark and the bitumen relatively light.[9]

In partnership, Niépce (in Chalon-sur-Saône) and Louis Daguerre (in Paris) refined the bitumen process,[13] substituting a more sensitive resin and a very different post-exposure treatment that yielded higher-quality and more easily viewed images. Exposure times in the camera, although somewhat reduced, were still measured in hours.[9]

In 1833 Niépce died suddenly, leaving his notes to Daguerre. More interested in silver-based processes than Niépce had been, Daguerre experimented with photographing camera images directly onto a mirror-like silver-surfaced plate that had been fumed with iodine vapor, which reacted with the silver to form a coating of silver iodide. As with the bitumen process, the result appeared as a positive when it was suitably lit and viewed. Exposure times were still impractically long until Daguerre made the pivotal discovery that an invisibly slight or “latent” image produced on such a plate by a much shorter exposure could be “developed” to full visibility by mercury fumes. This brought the required exposure time down to a few minutes under optimum conditions. A strong hot solution of common salt served to stabilize or fix the image by removing the remaining silver iodide. On 7 January 1839, this first complete practical photographic process was announced at a meeting of the French Academy of Sciences,[14] and the news quickly spread. At first, all details of the process were withheld and specimens were shown only at Daguerre’s studio, under his close supervision, to Academy members and other distinguished guests.[15] Arrangements were made for the French government to buy the rights in exchange for pensions for Niépce’s son and Daguerre and present the invention to the world (with the de facto exception of Great Britain) as a free gift.[16] Complete instructions were published on 19 August 1839.[17]

After reading early reports of Daguerre’s invention, William Henry Fox Talbot, who had succeeded in creating stabilized photographic negatives on paper in 1835, worked on perfecting his own process. In early 1839 he acquired a key improvement, an effective fixer, from John Herschel, the astronomer, who had previously shown that hyposulfite of soda (commonly called “hypo” and now known formally as sodium thiosulfate) would dissolve silver salts.[18] News of this solvent also reached Daguerre, who quietly substituted it for his less effective hot salt water treatment.[19]

A calotype print showing the American photographer Frederick Langenheim (circa 1849). Note, the caption on the photo calls the process Talbotype
Talbot’s early silver chloride “sensitive paper” experiments required camera exposures of an hour or more. In 1840, Talbot invented the calotype process, which, like Daguerre’s process, used the principle of chemical development of a faint or invisible “latent” image to reduce the exposure time to a few minutes. Paper with a coating of silver iodide was exposed in the camera and developed into a translucent negative image. Unlike a daguerreotype, which could only be copied by rephotographing it with a camera, a calotype negative could be used to make a large number of positive prints by simple contact printing. The calotype had yet another distinction compared to other early photographic processes, in that the finished product lacked fine clarity due to its translucent paper negative. This was seen as a positive attribute for portraits because it softened the appearance of the human face. Talbot patented this process,[20] which greatly limited its adoption, and spent many years pressing lawsuits against alleged infringers. He attempted to enforce a very broad interpretation of his patent, earning himself the ill will of photographers who were using the related glass-based processes later introduced by other inventors, but he was eventually defeated. Nonetheless, Talbot’s developed-out silver halide negative process is the basic technology used by chemical film cameras today. Hippolyte Bayard had also developed a method of photography but delayed announcing it, and so was not recognized as its inventor.

In 1839, John Herschel made the first glass negative, but his process was difficult to reproduce. Slovene Janez Puhar invented a process for making photographs on glass in 1841; it was recognized on June 17, 1852 in Paris by the Académie Nationale Agricole, Manufacturière et Commerciale.[21] In 1847, Nicephore Niépce’s cousin, the chemist Niépce St. Victor, published his invention of a process for making glass plates with an albumen emulsion; the Langenheim brothers of Philadelphia and John Whipple and William Breed Jones of Boston also invented workable negative-on-glass processes in the mid-1840s.[22]

In 1851 Frederick Scott Archer invented the collodion process.[citation needed] Photographer and children’s author Lewis Carroll used this process. (Carroll refers to the process as “Tablotype” [sic] in the story “A Photographer’s Day Out”)[23]

Roger Fenton’s assistant seated on Fenton’s photographic van, Crimea, 1855.
Herbert Bowyer Berkeley experimented with his own version of collodion emulsions after Samman introduced the idea of adding dithionite to the pyrogallol developer.[citation needed] Berkeley discovered that with his own addition of sulfite, to absorb the sulfur dioxide given off by the dithionite in the developer, dithionite was not required in the developing process. In 1881 he published his discovery. Berkeley's formula contained pyrogallol, sulfite and citric acid. Ammonia was added just before use to make the formula alkaline. The new formula was sold by the Platinotype Company in London as Sulpho-Pyrogallol Developer.[24]

Nineteenth-century experimentation with photographic processes frequently became proprietary. The German-born, New Orleans photographer Theodore Lilienthal successfully sought legal redress in an 1881 infringement case involving his “Lambert Process” in the Eastern District of Louisiana.

Popularization

General view of The Crystal Palace at Sydenham by Philip Henry Delamotte, 1854

Mid 19th century “Brady stand” photo model’s armrest table, meant to keep portrait models more still during long exposure times (studio equipment nicknamed after the famed US photographer, Mathew Brady)

1855 cartoon satirizing problems with posing for Daguerreotypes: slight movement during exposure resulted in blurred features, red-blindness made rosy complexions dark.

A photographer appears to be photographing himself in a 19th-century photographic studio. Note clamp to hold the poser’s head still. An 1893 satire on photographic procedures already becoming obsolete at the time.

A comparison of common print sizes used in photographic studios during the 19th century
The daguerreotype proved popular in response to the demand for portraiture that emerged from the middle classes during the Industrial Revolution.[citation needed] This demand, which could not be met in volume and in cost by oil painting, added to the push for the development of photography.

In 1847, Count Sergei Lvovich Levitsky designed a bellows camera that significantly improved the process of focusing. This adaptation influenced the design of cameras for decades and is still found in some professional cameras today. While in Paris, Levitsky became the first to introduce interchangeable decorative backgrounds in his photos, as well as the retouching of negatives to reduce or eliminate technical deficiencies.[citation needed] Levitsky was also the first photographer to portray the same person in different poses, and even in different clothes, within a single photograph (for example, the subject plays the piano and listens to himself).[citation needed]

Roger Fenton and Philip Henry Delamotte helped popularize the new way of recording events, the first by his Crimean war pictures, the second by his record of the disassembly and reconstruction of The Crystal Palace in London. Other mid-nineteenth-century photographers established the medium as a more precise means than engraving or lithography of making a record of landscapes and architecture: for example, Robert Macpherson’s broad range of photographs of Rome, the interior of the Vatican, and the surrounding countryside became a sophisticated tourist’s visual record of his own travels.

By 1849, images captured by Levitsky on a mission to the Caucasus were exhibited by the famous Parisian optician Chevalier at the Paris Exposition of the Second Republic as an advertisement for the firm's lenses. These photos received the Exposition's gold medal, the first time a prize of its kind had been awarded to a photograph.[citation needed]

That same year in 1849 in his St. Petersburg, Russia studio Levitsky would first propose the idea to artificially light subjects in a studio setting using electric lighting along with daylight. He would say of its use, “as far as I know this application of electric light has never been tried; it is something new, which will be accepted by photographers because of its simplicity and practicality”.[citation needed]

In 1851, at an exhibition in Paris, Levitsky would win the first ever gold medal awarded for a portrait photograph.[citation needed]

In America, by 1851 a broadside by daguerreotypist Augustus Washington was advertising prices ranging from 50 cents to $10.[25] However, daguerreotypes were fragile and difficult to copy. Photographers encouraged chemists to refine the process of making many copies cheaply, which eventually led them back to Talbot’s process.

Ultimately, the photographic process came about from a series of refinements and improvements in its first 20 years. In 1884 George Eastman, of Rochester, New York, developed dry gel on paper, or film, to replace the photographic plate so that a photographer no longer needed to carry boxes of plates and toxic chemicals around. In July 1888 Eastman's Kodak camera went on the market with the slogan "You press the button, we do the rest". Now anyone could take a photograph and leave the complex parts of the process to others, and photography became available to the mass market in 1901 with the introduction of the Kodak Brownie.

Color photography

The first durable color photograph, taken by Thomas Sutton in 1861
A practical means of color photography was sought from the very beginning. Results were demonstrated by Edmond Becquerel as early as 1848, but exposures lasting for hours or days were required and the captured colors were so light-sensitive they would only bear very brief inspection in dim light.

The first durable color photograph was a set of three black-and-white photographs taken through red, green and blue color filters and shown superimposed by using three projectors with similar filters. It was taken by Thomas Sutton in 1861 for use in a lecture by the Scottish physicist James Clerk Maxwell, who had proposed the method in 1855.[26] The photographic emulsions then in use were insensitive to most of the spectrum, so the result was very imperfect and the demonstration was soon forgotten. Maxwell’s method is now most widely known through the early 20th century work of Sergei Prokudin-Gorskii. It was made practical by Hermann Wilhelm Vogel’s 1873 discovery of a way to make emulsions sensitive to the rest of the spectrum, gradually introduced into commercial use beginning in the mid-1880s.
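
To make Maxwell’s additive idea concrete, here is a small Python sketch (my own illustration, not anything from the original demonstration) that treats three black-and-white exposures made through red, green and blue filters as the three channels of one color image. The combine_additive helper and the file paths are hypothetical placeholders, and the three records are assumed to be the same size and already registered.

# A minimal sketch of additive three-color synthesis, assuming three
# grayscale exposures made through red, green and blue filters.
# File names and the helper are hypothetical placeholders.
import numpy as np
from PIL import Image

def combine_additive(red_path, green_path, blue_path, out_path="combined_rgb.png"):
    # Load each filtered black-and-white record as a 2-D grayscale array.
    # All three records are assumed to be the same size and already lined up.
    r = np.asarray(Image.open(red_path).convert("L"), dtype=np.uint8)
    g = np.asarray(Image.open(green_path).convert("L"), dtype=np.uint8)
    b = np.asarray(Image.open(blue_path).convert("L"), dtype=np.uint8)
    # Stacking the three records as R, G and B channels is the digital
    # equivalent of projecting the three slides through matching filters.
    rgb = np.dstack([r, g, b])
    Image.fromarray(rgb, mode="RGB").save(out_path)
    return rgb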

Two French inventors, Louis Ducos du Hauron and Charles Cros, working unknown to each other during the 1860s, famously unveiled their nearly identical ideas on the same day in 1869. Included were methods for viewing a set of three color-filtered black-and-white photographs in color without having to project them, and for using them to make full-color prints on paper.[27]

The first widely used method of color photography was the Autochrome plate, commercially introduced in 1907. It was based on one of Louis Ducos du Hauron’s ideas: instead of taking three separate photographs through color filters, take one through a mosaic of tiny color filters overlaid on the emulsion and view the results through an identical mosaic. If the individual filter elements were small enough, the three primary colors would blend together in the eye and produce the same additive color synthesis as the filtered projection of three separate photographs. Autochrome plates had an integral mosaic filter layer composed of millions of dyed potato starch grains. Reversal processing was used to develop each plate into a transparent positive that could be viewed directly or projected with an ordinary projector. The mosaic filter layer absorbed about 90 percent of the light passing through, so a long exposure was required and a bright projection or viewing light was desirable. Competing screen plate products soon appeared and film-based versions were eventually made. All were expensive and until the 1930s none was “fast” enough for hand-held snapshot-taking, so they mostly served a niche market of affluent advanced amateurs.
35 mm

Leica I, 1925

Argus C3, 1939
See also: History of 135 film
A number of manufacturers started to use 35 mm film for still photography between 1905 and 1913. The first 35 mm cameras available to the public, and the first to reach significant sales, were the Tourist Multiple in 1913 and the Simplex in 1914.[citation needed]

Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I, and it was only after the war that the first 35 mm Leica cameras were commercialized. Leitz test-marketed the design between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica’s immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras.

Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966.

The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere.
A new era in color photography began with the introduction of Kodachrome film, available for 16 mm home movies in 1935 and 35 mm slides in 1936. It captured the red, green and blue color components in three layers of emulsion. A complex processing operation produced complementary cyan, magenta and yellow dye images in those layers, resulting in a subtractive color image. Maxwell’s method of taking three separate filtered black-and-white photographs continued to serve special purposes into the 1950s and beyond, and Polachrome, an “instant” slide film that used the Autochrome’s additive principle, was available until 2003, but the few color print and slide films still being made in 2015 all use the multilayer emulsion approach pioneered by Kodachrome.
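
By contrast, the subtractive principle behind Kodachrome-style multilayer films can be shown in its textbook form: each dye layer removes the light of its complementary primary, so in idealized terms cyan = 1 − red, magenta = 1 − green, and yellow = 1 − blue. The tiny Python sketch below is only that idealized relation, not Kodak’s actual processing chemistry.

# Idealized subtractive color: complementary cyan/magenta/yellow dye
# amounts derived from normalized red/green/blue exposures.
# This is the textbook simplification, not Kodachrome's real chemistry.
def rgb_to_cmy(r, g, b):
    """r, g, b in [0.0, 1.0]; returns idealized (cyan, magenta, yellow) dye amounts."""
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    """Viewing the stacked dye layers in white light reverses the relation."""
    return 1.0 - c, 1.0 - m, 1.0 - y

if __name__ == "__main__":
    # A warm tone: strong red, medium green, weak blue.
    print(rgb_to_cmy(0.75, 0.5, 0.25))   # -> (0.25, 0.5, 0.75)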

tmjs ch2
ch2 retro cam and film

LIGHTING


In the early days of photography the only source of light was, of course, the sun, so most photography depended upon long days and good weather. It is said that Rejlander used a cat as a primitive exposure meter: placing the cat where the sitter should be, he judged by looking at its eyes whether it was worth taking any photographs or whether his sitter should go home and wait for better times! The nearer to the birth of photography, the more light was needed, as the first chemical emulsions were very insensitive.

The first artificial light photography dates back as far as 1839, when L. Ibbetson used oxy-hydrogen light (also known as limelight) when photographing microscopic objects; he made a daguerreotype in five minutes which, he claimed, would have taken twenty-five minutes in normal daylight.

Other possibilities were explored. Nadar, for example, photographed the sewers in Paris, using battery-operated lighting. Later arc-lamps were introduced, but it was not until 1877 that the first studio lit by electric light was opened by Van der Weyde, who had a studio in Regent Street. Powered by a gas-driven dynamo, the light was sufficient to permit exposures of some 2 to 3 seconds for a carte-de-visite.

Soon a number of studios started using arc lighting. One advert (by Arthur Langton, working in Belgravia, London), boldly proclaims:

“My electric light installation is perhaps the more powerful in London. Photographs superior to daylight, Pictures can now be taken in any weather and at any time.”

More from Arthur Langton’s advertisement:

“CAUTION Many photographers advertise ‘portrits taken by electric light’ but 9 out of 10 do not possess an electric light, owing to its costlinss they use an inferior and nasty substitute… a pyrotechnic powder which gives off poisonos fumes.”

(His spelling, by the way!)

In June 1850 Fox Talbot conducted an experiment at the Royal Society, probably using static electricity stored in Leyden jars: a page of The Times was fastened on to a wheel, which was then revolved rapidly. Writing about this the following year, Fox Talbot stated:

“From this experiment the conclusion…is that it is within our power to obtain pictures of all moving objects….providing we have the means of sufficiently illuminating them with a sudden electric flash.”

The object then had been to arrest fast action. A few years later William Crookes, editor of the Photographic News (October 1859), was responding to a query put to him on how to light some caves:

“A…brilliant light…can be obtained by burning….magnesium in oxygen. A piece of magnesium wire held by one end in the hand, may be lighted at the other extremity by holding it to a candle… It then burns away of its own accord evolving a light insupportably brilliant to the unprotected eye….”

That same year Professor Robert Bunsen (of Bunsen burner fame) was also advocating the use of magnesium. The first portrait using magnesium was taken by Alfred Brothers of Manchester (22 February 1864); some of the results of his experiments may be found in the Manchester Museum of Science and Technology. Magnesium was, however, very expensive at that time and did not come into general use until there was a dramatic fall in its cost a decade later. This, coupled with the introduction of dry plates in the 1880s, soon led to the introduction of magnesium flashlamps. They all used the same principle: a small amount of the powder would be blown, using a small rubber pump, through a spirit flame, producing a bright flash lasting about 1/15 s. It also produced much smoke and ash!

Then in the late 1880s it was discovered that magnesium powder, if mixed with an oxidising agent such as potassium chlorate, would ignite with very little persuasion. This led to the introduction of flash powder. It would be spread on a metal dish and set off by percussion (sparks from a flint wheel), an electrical fuse, or simply by applying a taper. However, the explosive flash powder could be quite dangerous if misused. It was not really superseded until the invention of the flashbulb in the late 1920s.

Early flash photography was not synchronised. This meant that one had to put a camera on a tripod, open the shutter, trigger the flash, and close the shutter again – a technique known as open flash.

Certainly early flash photography could be a hazardous business. It is said, for example, that Riis, working during this period, twice managed to set the places he was photographing on fire!

In fact, the “open flash” technique, with flash powder, was still being used by some photographers until the 1950s. This was particularly so when, for example, a large building was being photographed; with someone operating the shutter for multiple exposures, it was possible to use the flash at different places, to provide more even illumination.

By varying the number of grammes of flash powder, the distance covered could also be varied. To give some idea, using a panchromatic film of about 25 ASA and the open flash technique at f8, a measure of 0.1 grammes of flash powder would permit a flash-to-subject distance of about 8 feet, whilst 2.0 grammes would permit an exposure 30 feet away.

The earliest known flash bulb was described in 1883. It consisted of a two-pint stoppered bottle which had white paper stuck on it to act as a reflector. To set the flash off, a spiral of ten or so inches of magnesium on a wire skewer was pre-lighted and plunged into the oxygen.
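
To make the grammes-to-distance figures above a little more concrete, the Python sketch below fits a simple power law through the two quoted data points (0.1 g covering about 8 feet and 2.0 g covering about 30 feet, for roughly 25 ASA film at f8 with open flash). The exponent that falls out, about 0.44, is close to the 0.5 an inverse-square assumption would predict if light output grew in proportion to the powder; treat it as a curve fit to this article’s own numbers, not a period exposure table.

# Rough power-law fit through the two flash-powder figures quoted above:
# 0.1 g covers ~8 ft and 2.0 g covers ~30 ft (about 25 ASA film, f8, open flash).
# distance = D_REF * (grams / G_REF) ** K, with K fitted from those two points.
import math

G_REF, D_REF = 0.1, 8.0      # reference point: 0.1 g -> about 8 ft
G_HI,  D_HI  = 2.0, 30.0     # second point:    2.0 g -> about 30 ft
K = math.log(D_HI / D_REF) / math.log(G_HI / G_REF)   # ~0.44

def flash_distance_ft(grams):
    """Approximate flash-to-subject distance (feet) for a given powder charge."""
    return D_REF * (grams / G_REF) ** K

if __name__ == "__main__":
    for g in (0.1, 0.5, 1.0, 2.0):
        print(f"{g:.1f} g -> about {flash_distance_ft(g):.0f} ft")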

It was not until 1927 that the simple flash-bulb appeared, and 1931 when Harold Edgerton produced the first electronic flash tube.

Makeup

HISTORY

Makeup has a long theatrical history. The early film industry naturally looked to traditional stage techniques, but these proved inadequate almost immediately. One of makeup’s first problems was with celluloid. Early filmmakers used orthochromatic film stock, which had a limited color-range sensitivity. It reacted to red pigmentation, darkening white skin and nullifying solid reds. To counter the effect, Caucasian actors wore heavy pink greasepaint (Stein’s #2) as well as black eyeliner and dark red lipstick (which, if applied too lightly, appeared white on screen), but these masklike cosmetics smeared as actors sweated under the intense lights. Furthermore, until the mid-teens, actors applied their own makeup and their image was rarely uniform from scene to scene. As the close-up became more common, makeup focused on the face, which had to be understood from a hugely magnified perspective, making refinements essential. In the pursuit of these radical changes, two names stand out as Hollywood’s progenitor artists: Max Factor (1877–1938) and George Westmore (1879–1931). Both started as wigmakers and both recognized that the crucial difference between stage and screen was a lightness of touch. Both invented enduring cosmetics and makeup tricks for cinema and each, at times, took credit for the same invention (such as false eyelashes).

Factor (originally Firestein), a Russian émigré with a background in barbering, arrived in the United States in 1904 and moved to Los Angeles in 1908, where he set up a perfume, hair care, and cosmetics business catering to theatrical needs. He also distributed well-known greasepaints, which were too thick for screen use and photographed badly. By 1910, Factor had begun to divide the theatrical from the cinematic as he experimented to find appropriate cosmetics for film. His Greasepaint was the first makeup used in a screen test, for Cleopatra (1912), and by 1914 Factor had invented a twelve-toned cream version, which applied thinly, allowed for individual skin subtleties, and conformed more comfortably with celluloid. In the early 1920s panchromatic film began to replace orthochromatic, causing fewer color flaws, and in 1928 Factor completed work on Panchromatic MakeUp, which had a variety of hues. In 1937, the year before he died, he dealt with the new Technicolor problems by adapting theatrical “pancake” into a water-soluble powder, applicable with a sponge, excellent for film’s and, eventually, television’s needs. It photographed very well, eliminating the shine induced by Technicolor lighting, and its basic translucence imparted a delicate look. Known as Pancake makeup, it was first used in Vogues of 1938 (1937) and Goldwyn’s Follies (1938), quickly becoming not only the film industry norm but a public sensation. Once movie stars, delighting in its lightness, began to wear it offscreen, Pancake became de rigueur for fashion-conscious women. After Factor’s death, his empire continued to set standards and still covers cinema’s cosmetic needs, from fingernails to toupees.

The English wigmaker George Westmore, for whom the Makeup Artist and Hair Stylist Guild’s George Westmore Lifetime Achievement Award is named, founded the first (and tiny) film makeup department, at Selig Studio in 1917. He also worked at Triangle but soon was freelancing across the major studios. Like Factor, he understood that cosmetic and hair needs were personal and would make up stars such as Mary Pickford (whom he relieved of having to curl her famous hair daily by making false ringlets) or the Talmadge sisters in their homes before they left for work in the morning.

He fathered three legendary and scandalous generations of movie makeup artists, beginning with his six sons—Monte (1902–1940), Perc (1904–1970), Ern (1904–1967), Wally (1906–1973), Bud (1918–1973), and Frank (1923–1985)—who soon eclipsed him in Hollywood. By 1926, Monte, Perc, Ern, and Bud had penetrated the industry to become the chief makeup artists at four major studios, and all continued to break ground in new beauty and horror illusions until the end of their careers. In 1921, after dishwashing at Famous Players-Lasky, Monte became Rudolph Valentino’s sole makeup artist. (The actor had been doing his own.) When Valentino died in 1926, Monte went to Selznick International where, thirteen years later, he worked himself to death with the enormous makeup demands for Gone With the Wind (1939). In 1923 Perc established a blazing career at First National-Warner Bros. and, over twenty-seven years, initiated beauty trends and disguises including, in 1939, the faces of Charles Laughton’s grotesque Hunchback of Notre Dame (for RKO) and Bette Davis’s eyebrowless, almost bald, whitefaced Queen Elizabeth. In the early 1920s he blended Stein Pink greasepaint with eye shadow, preceding Factor’s Panchromatic. Ern, at RKO from 1929 to 1931 and then at Fox from 1935, was adept at finding the right look for stars of the 1930s. Wally headed Paramount makeup from 1926, where he created, among others, Fredric March’s gruesome transformation in Dr. Jekyll and Mr. Hyde (1931). Frank followed him there. Bud led Universal’s makeup department for twenty-three years, specializing in rubber prosthetics and body suits such as the one used in Creature from the Black Lagoon (1954). Together they built the House of Westmore salon, which served stars and public alike.
Later generations have continued the name, including Bud’s sons, Michael and Marvin Westmore, who began in television and have excelled in unusual makeup, such as in Blade Runner (1982).

MGM was the only studio that the Westmores did not rule. Cecil Holland (1887–1973) became its first makeup head in 1925 and remained there until the 1950s. Originally an English actor known as “The Man of a Thousand Faces” before Lon Chaney (1883–1930) inherited the title, his makeup abilities were pioneering on films such as Grand Hotel (1932) and The Good Earth (1937). Jack Dawn (1892–1961), who created makeup for The Wizard of Oz (1939), ran the department from the 1940s, by which time it was so huge that over a thousand actors could be made up in one hour. William Tuttle succeeded him and ran the department for twenty years.

Lon Chaney did his own makeup for Phantom of the Opera (Rupert Julian, 1925).

Like Holland, Chaney was another actor with supernal makeup skills whose horror and crime films became classics, notably for Chaney’s menacing but realistically based disguises. He always created his own makeup, working with the materials of his day—greasepaint, putty, plasto (mortician’s wax), fish skin, gutta percha (natural resin), collodion (liquid elastic), and crepe hair—and conjured characters unrivalled in their horrifying effect, including his gaunt, pig-nosed, black-eyed Phantom for Phantom of the Opera (1925) and his Hunchback in The Hunchback of Notre Dame (1923), for which he constructed agonizingly heavy makeup and body harnesses.

tmjs ch 3
ch3 digital
Digital cameras
See also: Dslr § History
Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print or share photos, and are commonly found on mobile phones.

Early development
The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the push to develop an electronic image capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11 launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of 800 x 800 pixels (0.64 megapixels).[13] At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on “All Solid State Radiation Imagers” on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a capacitor to form an array of two terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970.[14] Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and applied for a patent in 1972, but it is not known whether it was ever built.[15] The first recorded attempt at building a digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak.[16][17] It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973.[18] The camera weighed 8 pounds (3.6 kg), recorded black and white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production.

Development of digital photography
Main article: Digital photography
In 1957, a team led by Russell A. Kirsch at the National Institute of Standards and Technology developed a binary digital version of an existing technology, the wirephoto drum scanner, so that alphanumeric characters, diagrams, photographs and other graphics could be transferred into digital computer memory. One of the first photographs scanned was a picture of Kirsch’s infant son Walden. The resolution was 176×176 pixels with only one bit per pixel, i.e., stark black and white with no intermediate gray tones, but by combining multiple scans of the photograph done with different black-white threshold settings, grayscale information could also be acquired.[28]
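
The multi-scan trick described above is easy to illustrate: each simulated scan reduces the picture to pure black and white at one threshold, and averaging several scans taken at different thresholds recovers an approximate grayscale. The Python snippet below is only a toy demonstration of that idea, not a model of Kirsch’s actual drum scanner.

# Toy illustration of recovering grayscale from multiple 1-bit scans,
# each taken with a different black/white threshold (the idea described
# above, not a model of the actual 1957 drum scanner).
import numpy as np

def simulate_binary_scan(gray, threshold):
    """One 'scan': every pixel becomes 0 or 1 depending on the threshold."""
    return (gray >= threshold).astype(np.float32)

def recover_grayscale(gray, levels=16):
    """Average several binary scans made at evenly spaced thresholds."""
    thresholds = np.linspace(0, 255, levels, endpoint=False)
    scans = [simulate_binary_scan(gray, t) for t in thresholds]
    return np.mean(scans, axis=0)   # values in [0, 1], about `levels` distinct tones

if __name__ == "__main__":
    gray = np.tile(np.arange(256, dtype=np.float32), (16, 1))  # test ramp image
    approx = recover_grayscale(gray, levels=16)
    print(np.unique(approx))  # roughly 16 evenly spaced gray levels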

The charge-coupled device (CCD) is the image-capturing optoelectronic component in first-generation digital cameras. It was invented in 1969 by Willard Boyle and George E. Smith at AT&T Bell Labs as a memory device. The lab was working on the Picturephone and on the development of semiconductor bubble memory. Merging these two initiatives, Boyle and Smith conceived of the design of what they termed “Charge ‘Bubble’ Devices”. The essence of the design was the ability to transfer charge along the surface of a semiconductor. It was Dr. Michael Tompsett from Bell Labs, however, who discovered that the CCD could be used as an imaging sensor. The CCD has increasingly been replaced by the active pixel sensor (APS), commonly used in cell phone cameras.
Analog electronic cameras

Sony Mavica, 1981
Main article: Still video camera
Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch “video floppy”.[19] In essence it was a video movie camera that recorded single frames, 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

Canon RC-701, 1986
Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shinbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The “video floppy” disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive.

The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of 1989 and the first Gulf War in 1991.

US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real time air-to-sea surveillance system.

The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.

Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work like a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder of the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film’s parent company filed for bankruptcy in 2001.[20]

Arrival of true digital cameras

The first portable digital SLR camera, introduced by Minolta in 1995.

Nikon D1, 1999
By the late 1980s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM memory card that used a battery to keep the data in memory. This camera was never marketed to the public.

The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987,[21] though there is not extensive documentation of its sale known. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan: the DS-X by Fuji.[22] The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990.[23] It was originally a commercial failure because it was black and white, low in resolution, and cost nearly $1,000 (about $2000 in 2014).[24] It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download.[25][26][27]

In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor, had a bulky external digital storage system and was priced at $13,000. At the arrival of the Kodak DCS-200, the Kodak DCS was dubbed Kodak DCS-100.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996.[citation needed] The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995.

In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three independent CCDs. This combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 at introduction was affordable by professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.

Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: Compact Digital Still Cameras, Bridge Cameras, Mirrorless Compacts and Digital SLRs. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones.

1973 – Fairchild Semiconductor releases the first large image-capturing CCD chip: 100 rows and 100 columns.[29]
1975 – Bryce Bayer of Kodak develops the Bayer filter mosaic pattern for CCD color image sensors (a rough sketch of the mosaic idea follows this list).
1986 – Kodak scientists develop the world’s first megapixel sensor.
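
As promised in the timeline above, here is a rough Python sketch of what a Bayer mosaic does: it samples a full-color image through a repeating pattern so that each photosite keeps only one of the three color values, and the camera later interpolates (demosaics) the missing two. The RGGB layout used here is an assumption for illustration; it is a common arrangement, but the snippet is not tied to any particular sensor.

# Minimal sketch of sampling a full-color image through a Bayer-style
# RGGB mosaic: each photosite records only one color channel.
# (Illustrative only; real cameras then interpolate the missing values.)
import numpy as np

def bayer_mosaic(rgb):
    """rgb: H x W x 3 array. Returns an H x W single-channel mosaic."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]   # R at even rows, even columns
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]   # G at even rows, odd columns
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]   # G at odd rows, even columns
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]   # B at odd rows, odd columns
    return mosaic

if __name__ == "__main__":
    test = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
    print(bayer_mosaic(test))
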
The web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today popular sites such as Flickr, Picasa, Instagram and PhotoBucket are used by millions of people to share their pictures.

“How To A How To Video” Ep#26

Hello… Welcome to the new TheMaryJaneStyle “How To A How To Video” Ep#26


“How To A How To Video”
some say …
1.Do Your Research. “If you find something that lots of people want to know, and no one has already made a film of how to do it, then that’s a great title for a ‘how to’ film,”…
2.Get an Expert View. If you’re the expert, then great. …
3.Write a Great Script. …
4.Shoot It and Test It! …
5.Promote It.


others say
How To Videos go by many alternative names, all of which are linguistically more restrictive than “educational technology” in that they refer to the use of modern tools such as computers, digital technology, electronic media, networked digital devices, and associated software or “courseware” with learning scenarios, worksheets, and interactive exercises that facilitate learning. However, these alternative names individually emphasize a particular digitization approach, component or delivery method. Accordingly, each conflates to the broad domain of educational technology. For example, internet learning emphasizes online delivery, but is otherwise indistinguishable in principle from educational technology.


people like
Entertainment is a form of activity that holds the attention and interest of an audience, or gives pleasure and delight. It can be an idea or a task, but is more likely to be one of the activities or events that have developed over thousands of years specifically for the purpose of keeping an audience’s attention.[1] Although people’s attention is held by different things, because individuals have different preferences in entertainment, most forms are recognisable and familiar. Storytelling, music, drama, dance, and different kinds of performance exist in all cultures, were supported in royal courts, developed into sophisticated forms and over time became available to all citizens. The process has been accelerated in modern times by an entertainment industry which records and sells entertainment products. Entertainment evolves and can be adapted to suit any scale, ranging from an individual who chooses a private entertainment from a now enormous array of pre-recorded products; to a banquet adapted for two; to any size or type of party, with appropriate music and dance; to performances intended for thousands; and even for a global audience.

The experience of being entertained has come to be strongly associated with amusement, so that one common understanding of the idea is fun and laughter, although many entertainments have a serious purpose. This may be the case in the various forms of ceremony, celebration, religious festival, or satire for example. Hence, there is the possibility that what appears as entertainment may also be a means of achieving insight or intellectual growth.

An important aspect of entertainment is the audience, which turns a private recreation or leisure activity into entertainment. The audience may have a passive role, as in the case of persons watching a play, opera, television show, or film; or the audience role may be active, as in the case of games, where the participant/audience roles may be routinely reversed. Entertainment can be public or private, involving formal, scripted performance, as in the case of theatre or concerts; or unscripted and spontaneous, as in the case of children’s games. Most forms of entertainment have persisted over many centuries, evolving due to changes in culture, technology, and fashion. Films and video games, for example, although they use newer media, continue to tell stories, present drama, and play music. Festivals devoted to music, film, or dance allow audiences to be entertained over a number of consecutive days.

Some activities that once were considered entertaining, particularly public punishments, have been removed from the public arena. Others, such as fencing or archery, once necessary skills for some, have become serious sports and even professions for the participants, at the same time developing into entertainment with wider appeal for bigger audiences. In the same way, other necessary skills, such as cooking, have developed into performances among professionals, staged as global competitions and then broadcast for entertainment. What is entertainment for one group or individual may be regarded as work by another.

The familiar forms of entertainment have the capacity to cross over different media and have demonstrated a seemingly unlimited potential for creative remix. This has ensured the continuity and longevity of many themes, images, and structures.

or A Viral Video, if you’re lucky =
Fame and public attention in the media, usually applied to a person, or group of people (celebrity couple, family etc.), or occasionally, to animals or fictional entities. Celebrity status is often associated with wealth (commonly referred to as fame and fortune) and fame can often provide opportunities to make money.

Successful careers in Viral Videos and entertainment are commonly associated with celebrity status, and political leaders often become celebrities. People may also become celebrities due to media attention for their lifestyle, wealth, or actions, or for their connection to a famous person.

also a form of
Art is a diverse range of human activities and the products of those activities, usually involving imaginative or technical skill. In their most general form these activities include the production of works of art, the criticism of art, the study of the history of art, and the aesthetic dissemination of art. This article focuses primarily on the visual arts, which include the creation of images or objects in fields including painting, sculpture, printmaking, photography, and other visual media. Architecture is often included as one of the visual arts; however, like the decorative arts, it involves the creation of objects where the practical considerations of use are essential—in a way that they usually are not in a painting, for example. Music, theatre, film, dance, and other performing arts, as well as literature and other media such as interactive media, are included in a broader definition of art or the arts. Until the 17th century, art referred to any skill or mastery and was not differentiated from crafts or sciences. In modern usage after the 17th century, where aesthetic considerations are paramount, the fine arts are separated and distinguished from acquired skills in general, such as the decorative or applied arts.

Art may be characterized in terms of mimesis (its representation of reality), expression, communication of emotion, or other qualities. During the Romantic period, art came to be seen as “a special faculty of the human mind to be classified with religion and science”. Though the definition of what constitutes art is disputed and has changed over time, general descriptions mention an idea of imaginative or technical skill stemming from human agency and creation. The nature of art, and related concepts such as creativity and interpretation, are explored in a branch of philosophy known as aesthetics.

ENJOY!!! & Subscribe to TheMaryJaneStyle on Youtube, Tumblr & follow on Twitter

(Plus, if you like to shop, click the Amazon links on this page; that provides me monies …)

“HowTo” A Story! TMJS ep25

HowTo-A Story…


Act 1 establishes the characters and the world they live in, as well as the challenges they will face (the more bizarre the world, the more explanation is needed). Act 1 ends with the protagonist accepting the challenges presented in Act 1. + the 5 W’s:

1.who = A character is a person in a narrative work of art (such as a novel, play, television series or film), or an entertaining or captivating individual.

2.what = A Story is communication. The history of communication dates back to prehistory, with significant changes in technologies evolving in tandem with shifts in political and economic systems and, by extension, systems of power. Communication can range from very subtle processes of exchange to full conversations and mass communication. Human communication was revolutionized with speech approximately 100,000 years ago. Symbols were developed about 30,000 years ago, and writing about 5,000 years ago.

3.when & 4.where = The time and geographic location in which a story takes place; these help establish the main backdrop and mood for a story. Setting has been referred to as story world or milieu to include a context (especially society) beyond the immediate surroundings of the story. Elements of setting may include culture, historical period, geography, and hour.

5.why = A Story is communication. The history of communication dates back to prehistory, with significant changes in technologies evolving in tandem with shifts in political and economic systems and, by extension, systems of power. Communication can range from very subtle processes of exchange to full conversations and mass communication. Human communication was revolutionized with speech approximately 100,000 years ago. Symbols were developed about 30,000 years ago, and writing about 5,000 years ago.

Act 2, referred to as the “rising action”, typically depicts the protagonist’s attempt to resolve the problem initiated by the first turning point, only to find him- or herself in ever-worsening situations. Part of the reason protagonists seem unable to resolve their problems is that they do not yet have the skills to deal with the forces of antagonism that confront them. They must not only learn new skills but arrive at a higher sense of awareness of who they are and what they are capable of, in order to deal with their predicament, which in turn changes who they are. This is referred to as character development or a character arc. This cannot be achieved alone, and they are usually aided and abetted by mentors and co-protagonists.

Act 3 features the resolution of the story and its subplots. The climax is the scene or sequence in which the main tensions of the story are brought to their most intense point and the dramatic question answered, leaving the protagonist and other characters with a new sense of who they really are.

History of Stories
In spoken language analysis an utterance is the smallest unit of speech. It is a continuous piece of speech beginning and ending with a clear pause. In the case of oral languages, it is generally but not always bounded by silence. Utterances do not exist in written language; only their representations do. They can be represented and delineated in written language in many ways. Cuneiform script is one of the earliest known systems of writing, distinguished by its wedge-shaped marks on clay tablets, made by means of a blunt reed used as a stylus. The word probably came into English usage from Old French cunéiforme.
Writing is a large part of communication. The early writing systems that emerged in Eurasia in the early 3rd millennium BC were not a sudden invention. Rather, they were a development based on earlier traditions of symbol systems. These systems may be described as proto-writing. They used ideographic or early mnemonic symbols to convey information yet were probably devoid of direct linguistic content. These systems emerged in the early Neolithic period, as early as the 7th millennium BC.

Proto-writing was the first form of communication after speaking. Tortoise shells were found in 24 Neolithic graves excavated at Jiahu, Henan province, northern China, with radiocarbon dates from the 7th millennium BC. According to some archaeologists, the symbols carved on the shells had similarities to the late 2nd millennium BC oracle bone script. The Vinča signs, found during excavations in Vinča, a suburb of Belgrade (Serbia), show an evolution of simple symbols beginning in the 7th millennium BC, gradually increasing in complexity throughout the 6th millennium and culminating in the Tărtăria tablets of ca. 5300 BC, with their rows of symbols carefully aligned, evoking the impression of a “text”. The Dispilio Tablet of the late 6th millennium is similar. The hieroglyphic scripts of the Ancient Near East emerged seamlessly from such symbol systems, so that it is difficult to say at what point precisely writing emerges from proto-writing. Adding to this difficulty is the fact that very little is known about the symbols’ meanings.

Piktograf1


The transition from proto-writing to the earliest fully developed writing systems took place in the late 4th to early 3rd millennium BC in the Fertile Crescent. The Kish tablet, dated to 3500 BC, reflects the stage of “proto-cuneiform”, when what would become the cuneiform script of Sumer was still in the proto-writing stage. By the end of the 4th millennium BC, this symbol system had evolved into a method of keeping accounts, using a round-shaped stylus impressed into soft clay at different angles for recording numbers. This was gradually augmented with pictographic writing using a sharp stylus to indicate what was being counted. The transitional stage to a writing system proper took place in the Jemdet Nasr period (31st to 30th centuries BC). A similar development took place in the genesis of Egyptian hieroglyphs. Various scholars believe that Egyptian hieroglyphs came into existence a little after the Sumerian script and were invented under its influence, although it has also been argued that writing developed independently in Egypt. By the Bronze Age, the cultures of the Ancient Near East had fully developed writing systems, while the marginal territories affected by the Bronze Age, viz. Europe, India and China, remained in the stage of proto-writing.

Sumerian_26th_c_Adab
The Chinese script emerges from proto-writing in the Chinese Bronze Age, during about the 14th to 11th centuries BC (Oracle bone script), while symbol systems native to Europe and India are extinct and replaced by descendants of the Semitic abjad during the Iron Age.
Typical “Indus script” seal impression showing an “inscription” of five characters.
The so-called Indus script is a symbol system used during the 3rd millennium BC in the Indus Valley Civilization.
With the exception of the Aegean, the early writing systems of the Near East did not reach Bronze Age Europe. The earliest writing systems of Europe arise in the Iron Age, derived from the Phoenician alphabet.

779px-Caslon-schriftmusterblatt
The “Slavic runes” (7th/8th century) mentioned by a few medieval authors may have been such a system. The Quipu of the Incas (15th century), sometimes called “talking knots”, may have been of a similar nature. Another example is the system of pictographs invented by Uyaquk before the development of the Yugtun syllabary (ca. 1900).
Nsibidi is a system of symbols indigenous to what is now southeastern Nigeria. While there remains no commonly accepted exact date of origin, most researchers agree that use of the symbols dates back well before 500 CE. There are thousands of Nsibidi symbols, which were used on anything from calabashes to tattoos to wall designs. Nsibidi is used for the Ekoid and Igboid languages, and the Aro people are known to write Nsibidi messages on the bodies of their messengers.

Storytelling is the conveying of events in words, and images, often by improvisation or embellishment. Stories or narratives have been shared in every culture as a means of entertainment, education, cultural preservation, and instilling moral values. Crucial elements of stories and storytelling include plot, characters, and narrative point of view. Storytelling predates writing, with the earliest forms of storytelling usually oral combined with gestures and expressions. In addition to being part of religious ritual, rock art may have served as a form of storytelling for many ancient cultures. The Australian aboriginal people painted symbols from stories on cave walls as a means of helping the storyteller remember the story. The story was then told using a combination of oral narrative, music, rock art, and dance, which bring understanding and meaning of human existence through remembrance and enactment of stories. People have used the carved trunks of living trees and ephemeral media to record stories in pictures or with writing. Complex forms of tattooing may also represent stories, with information about genealogy, affiliation, and social status.
With the advent of writing and the use of portable media, stories were recorded, transcribed, and shared over wide regions of the world. Stories have been carved, scratched, painted, printed or inked onto wood or bamboo, ivory and other bones, pottery, clay tablets, stone, palm-leaf books, skins, bark cloth, paper, silk, canvas, and other textiles, recorded on film, and stored electronically in digital form. Oral stories continue to be committed to memory and passed from generation to generation, despite the increasing popularity of written and televised media in much of the world.

Elements of a story
1 Plot is a literary term defined as the events that make up a story, as they relate to one another in a pattern, in a sequence, through cause and effect. One is generally interested in how well this pattern of events accomplishes some artistic or emotional effect. A complicated plot is called an imbroglio, but even the simplest statements of plot may include multiple inferences, as in traditional ballads. In other words, a plot is the gist of a story, composed of causal events, which means a series of sentences linked by “and so.” A plot highlights all the important points and the through-line of a story.
2 A character is understood through an analysis of its relations with all of the other characters in the work. The individual status of a character is defined through the network of oppositions (proairetic, pragmatic, linguistic, proxemic) that it forms with the other characters. The relation between characters and the action of the story shifts historically, often miming shifts in society and its ideas about human individuality, self-determination, and the social order.
3 A narrator is either a personal character or a non-personal voice or image created by the author to deliver information to the audience about the plot, or something that merely relates the story to the audience without being involved in the actual events. Some stories have multiple narrators to illustrate the story-lines of various characters at the same, similar, or different times, thus allowing a more complex, non-singular point of view.
4 Medium or media are the collective communication outlets or tools that are used to store and deliver information or data. The term is associated either with communication media or with specialized communication businesses such as print media and the press, photography, advertising, cinema, broadcasting (radio, television or the internet), and publishing.

Types of stories
Fiction is the form of any narrative that deals, in part or in whole, with information or events that are not real, but rather imaginary—that is, invented by the author. Although the term fiction refers in particular to written stories such as novels and short stories, it may also refer to theatre, film, television, poetry and song. Fiction contrasts with non-fiction, which deals exclusively with factual or, at least, assumed factual events, descriptions and observations.
Non-fiction is a narrative that strictly presents presumably real-life events, established facts, and true information. The authors of such accounts believe them to be truthful at the time of their composition or, at least, pose them to a convinced audience as historically or empirically true. Reporting the beliefs of others in a non-fiction format is not necessarily an endorsement of the ultimate veracity of those beliefs; it is simply saying that it is true that people believe them. Non-fiction can also be written about fiction, giving information about these other works. Non-fiction need not necessarily be written text, since pictures and film can also purport to present a factual account of a subject.
Traditional stories, or stories about traditions, differ from both fiction and nonfiction in that the importance of transmitting the story’s worldview is generally understood to transcend an immediate need to establish its categorization as imaginary or factual. In the academic circles of literature, religion, history, and anthropology, categories of traditional story are important terminology to identify and interpret stories more precisely. Some stories belong in multiple categories and some stories do not fit into any category.
A fairy tale typically features European folkloric fantasy characters, such as dwarves, elves, fairies, giants, gnomes, goblins, mermaids, trolls, or witches, and usually magic or enchantments. Fairy tales may be distinguished from other folk narratives such as legends and explicitly moral tales, including beast fables.
The term is also used to describe something blessed with unusual happiness, as in “fairy tale ending” or “fairy tale romance”. Colloquially, a “fairy tale” or “fairy story” can also mean any farfetched story or tall tale; it is used especially of any story that not only is not true, but could not possibly be true. Legends are perceived as real; fairy tales may merge into legends, where the narrative is perceived both by teller and hearers as being grounded in historical truth. However, unlike legends and epics, they usually do not contain more than superficial references to religion and actual places, people, and events; they take place once upon a time rather than in actual times.
Folklore consists of legends, music, oral history, proverbs, jokes, popular beliefs, fairy tales, stories, tall tales, and customs included in the traditions of a culture, subculture, or group. It also includes the set of practices through which those expressive genres are shared.
Mythology can refer either to the collected myths of a group of people—their body of stories which they tell to explain nature, history, and customs—or to the study of such myths. As a collection of such stories, mythology is an important feature of every culture. Various origins for myths have been proposed, ranging from personification of natural phenomena to truthful or hyperbolic accounts of historical events, to explanations of existing ritual. Although the term is complicated by its implicit condescension, mythologizing is not just an ancient or primitive practice, as shown by contemporary mythopoeia such as urban legends and the expansive fictional mythoi created by fantasy novels and Japanese manga. A culture’s collective mythology helps convey belonging, shared and religious experience, behavioral models, and moral and practical lessons.
A legend (“things to be read”) is a narrative of human actions that are perceived both by teller and listeners to take place within human history and to possess certain qualities that give the tale verisimilitude. Legend, for its active and passive participants, includes no happenings that are outside the realm of “possibility”, as that is defined by a highly flexible set of parameters, which may include miracles that are perceived as actually having happened within the specific tradition of indoctrination where the legend arises, and within which tradition it may be transformed over time, in order to keep it fresh and vital, and realistic. Many legends operate within the realm of uncertainty, never being entirely believed by the participants, but also never being resolutely doubted.
Fable is a literary genre. A fable is a succinct fictional story, in prose or verse, that features animals, mythical creatures, plants, inanimate objects or forces of nature which are anthropomorphized (given human qualities such as verbal communication), and that illustrates or leads to an interpretation of a moral lesson (a “moral”), which may at the end be added explicitly in a pithy maxim.

Write or Draw to start your creation …
A storyboard is a graphic organizer in the form of illustrations or images displayed in sequence for the purpose of pre-visualizing a motion picture, animation, motion graphic or interactive media sequence. The storyboarding process, in the form it is known today, was developed at Walt Disney Productions during the early 1930s, after several years of similar processes being in use at Walt Disney and other animation studios.

Plot outline points
•The teaser. This is a scene that pulls the reader in, preferably an action scene.
•Exposition/Background. Where is the setting? Who are the characters? This tells necessary information in order to follow along with the story.
•The conflict. Character(s) presented with a problem.
•Rising Action. The suspense grows, and the problems take the Ripple Effect into new problems, which, in turn, cause conflict for your character.
•Suspense. Right before the climactic scene. These are the events that lead up to the climax, which are crucial to make the story flow.
•Climax. Here is the scene where all of the problems blow up in one event, where your character is in the worst trouble. This is usually only a single event.
•Winding Down. Your character recovers from the incident in the climax, and things smooth out slightly. There are still problems but your character has recovered.
•Falling Action. All of the problems are untied, things settle in, and your character feels back to normal but usually impacted from the events that occurred.
•Resolution. A scene like an epilogue, that tells what your character is going through or will be going through in the future, and how they feel.
•End teaser (for series writers). Just like the teaser, but makes the reader want to read the next novel.
Fill in each plot point, and from there you are good.
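To make the outline above easier to reuse, here is a minimal sketch in Python (my own illustration, nothing from the episode): the ten plot points are stored as an ordered template so a writer can fill in one scene per point and print what still needs work.

# Hypothetical helper, not part of the original post: a fill-in-the-blanks
# template built from the plot points listed above.
PLOT_POINTS = [
    "Teaser",
    "Exposition/Background",
    "Conflict",
    "Rising Action",
    "Suspense",
    "Climax",
    "Winding Down",
    "Falling Action",
    "Resolution",
    "End Teaser (optional, for series writers)",
]

def new_outline():
    """Return an empty outline with one slot per plot point."""
    return {point: "" for point in PLOT_POINTS}

outline = new_outline()
outline["Teaser"] = "An action scene that pulls the reader in."
outline["Climax"] = "All of the problems blow up in one event."

for point, scene in outline.items():
    print(f"{point}: {scene or '(to be filled in)'}")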


New fun of “The Digital Age” is that you can just shoot and form the story in editing, or use documentary-style documentation to present numerous mediums useful in art or education.

Perfect, BackSeams…”HowTo”


Seamed: Stockings manufactured in the old Full-Fashioned manner with a seam running up the back of the leg. In the past they were manufactured by cutting the fabric and then sewing it together. Today stockings are generally fully knitted and a fake or mock seam is added up the back for a particular fashion look. Some brands also produce seamed hold-ups.

Hosiery, also referred to as legwear, describes garments worn directly on the feet and legs. The term originated as the collective term for products of which a maker or seller is termed a hosier; those products are also known generically as hose. The term is also used for all types of knitted fabric, and its thickness and weight are defined in terms of denier or opacity. Lower denier measurements of 5 to 15 describe hose that may be sheer in appearance, whereas styles of 40 and above are dense, with little to no light able to come through on 100 denier items.
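To make the denier ranges just described a little more concrete, here is a minimal sketch in Python. The thresholds are taken from this post's own numbers (5 to 15 sheer, 40 and above dense, around 100 nearly opaque); the in-between "semi-sheer" label and the function name are my own assumptions, not an industry standard.

# Rough classifier based only on the denier ranges mentioned above;
# boundaries are approximate and the middle band is an assumption.
def describe_denier(denier: int) -> str:
    """Map a denier value to the appearance described in the paragraph above."""
    if denier <= 15:
        return "sheer (roughly 5-15 denier)"
    elif denier < 40:
        return "semi-sheer (assumed middle band)"
    elif denier < 100:
        return "dense/opaque (40 denier and above)"
    return "fully opaque (around 100 denier, little to no light comes through)"

for d in (10, 15, 30, 60, 100):
    print(d, "->", describe_denier(d))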

The first references to hosiery can be found in the works of Hesiod, where Romans are said to have used leather or cloth in the form of strips to cover their lower body parts. Even the Egyptians are speculated to have used hosiery, as socks have been found in certain tombs.

Before the 1920s, women’s stockings, if worn, were worn for warmth. In the 1920s, as hemlines of women’s dresses rose, women began to wear stockings to cover the exposed legs. These stockings were sheer, first made of silk or rayon (then known as “artificial silk”), and after 1940 of nylon.

Paint-on Hosiery During the War Years

A back “seam” drawn with an eyebrow pencil topped off the resourceful fashion effect
So it’s Saturday night in 1941, and you want to wear stockings with your cocktail dress, but the new wonder material nylon has been rationed for the war effort and has disappeared from department store shelves. What do you do in such times of patriotic privation? You get resourceful, and cover your legs with a layer of nude-colored makeup, and line the back of each leg with a trompe l’oeil seam.

Last week, in the first post from the Stocking Series, we heard about the huge reception of nylon hosiery. On May 15, 1940, officially called “Nylon Day,” four million pairs of nylons landed in stores and sold out within two days! But only a year later, the revolutionary product became scarce when the World War II economy directed all nylon into manufacturing parachutes, rope and netting.
Having trouble with your seam? No problem! This contraption, made from a screwdriver handle, bicycle leg clip and an ordinary eyebrow pencil would do the trick!





 

The Mary Jane Style HowTo-Ep#18. It might seem a silly topic: June 13, 1934.


YouTube Channel please subscribe !!!

The Best preparation for The Future is knowledge of The Past

Topic is June 13, 1934: the Production Code, followed by any filmmaker who wanted theatrical distribution = censorship. Is there a separation of church and state… what was artistic freedom from 1934-1968? At the end of the video I added a little funny impression of not all, but a sample of, a possible working environment very different from today’s.
For the last decade or more there has been minimal censorship, at least here on the internet. I chose this topic due to questions or concerns about how the internet is going to be censored or monitored… this video is simply the reading of The …
The Motion Picture Production Code was the set of industry moral censorship guidelines that governed the production of most United States motion pictures released by major studios from 1930 to 1968.
It is also popularly known as the Hays Code, after Hollywood’s chief censor of the time, Will H. Hays. The Motion Picture Producers and Distributors of America (MPPDA), which later became the Motion Picture Association of America (MPAA), adopted the code in 1930, began enforcing it in 1934, and abandoned it in 1968, in favor of the subsequent MPAA film rating system.
The Production Code spelled out what was acceptable and what was unacceptable content for motion pictures produced for a public audience in the United States. The office enforcing it was popularly called the Hays Office in reference to Hays, inaccurately so after 1934 when Joseph Breen took over from Hays, creating the Breen Office, which was far more rigid in censoring films than Hays had been.
Where Did this idea come from?  In 1922, after several risqué films and a series of off-screen scandals involving Hollywood stars, the studios enlisted Presbyterian elder Will H. Hays to rehabilitate Hollywood’s image. Hollywood in the 1920s was expected to be somewhat corrupt, and many felt the movie industry had always been morally questionable. Political pressure was increasing, with legislators in 37 states introducing almost one hundred movie censorship bills in 1921. Hays was paid the then-lavish sum of $100,000 a year. Hays, Postmaster General under Warren G. Harding and former head of the Republican National Committee, served for 25 years as president of the Motion Picture Producers and Distributors of America (MPPDA), where he “defended the industry from attacks, recited soothing nostrums, and negotiated treaties to cease hostilities.”
The move mimicked the decision Major League Baseball had made in hiring judge Kennesaw Mountain Landis as League Commissioner the previous year to quell questions about the integrity of baseball in the wake of the 1919 World Series gambling scandal; The New York Times even called Hays the “screen Landis”.
In 1924, Hays introduced a set of recommendations dubbed “The Formula” which the studios were advised to heed, and asked filmmakers to describe to his office the plots of pictures they were planning on making. The Supreme Court had already decided unanimously in 1915 in Mutual Film Corporation v. Industrial Commission of Ohio that free speech did not extend to motion pictures, and while there had been token attempts to clean up the movies before—such as when the studios formed the National Association of the Motion Picture Industry (NAMPI) in 1916—little had come of the efforts.

New York became the first state to take advantage of the Supreme Court’s decision by instituting a censorship board in 1921. Virginia followed suit the following year, with eight individual states having a board by the advent of sound film, but many of these were ineffectual. By the 1920s, the New York stage—a frequent source of subsequent screen material—had topless shows, performances filled with curse words, mature subject matters, and sexually suggestive dialogue. Early in the sound system conversion process, it became apparent that what might be acceptable in New York would not be so in Kansas.

In 1927, Hays suggested to studio executives that they form a committee to discuss film censorship. Irving G. Thalberg of Metro-Goldwyn-Mayer (MGM), Sol Wurtzel of Fox, and E. H. Allen of Paramount responded by collaborating on a list they called the “Don’ts and Be Carefuls”, which was based on items that were challenged by local censor boards. This list consisted of eleven subjects best avoided and twenty-six to be handled very carefully. The list was approved by the Federal Trade Commission (FTC), and Hays created the Studio Relations Committee (SRC) to oversee its implementation. However, there was still no way to enforce these tenets. The controversy surrounding film standards came to a head in 1929.

In 1929, the lay Catholic Martin Quigley (editor of the prominent trade paper Motion Picture Herald) and the Jesuit priest Father Daniel A. Lord created a code of standards and submitted it to the studios. Lord was particularly concerned with the effects of sound film on children, whom he considered especially susceptible to its allure. In February 1930, several studio heads—including Irving Thalberg of Metro-Goldwyn-Mayer (MGM)—met with Lord and Quigley. After some revisions, they agreed to the stipulations of the Code. One of the main motivating factors in adopting the Code was to avoid direct government intervention. It was the responsibility of the SRC (headed by Colonel Jason S. Joy, a former American Red Cross Executive Secretary) to supervise film production and advise the studios when changes or cuts were required. On March 31, the MPPDA agreed that it would abide by the Code.

The code was divided into two parts. The first was a set of “general principles” which mostly concerned morality.
The second was a set of “particular applications” which was an exacting list of items which could not be depicted. Some restrictions, such as the ban on homosexuality or on the use of specific curse words, were never directly mentioned, but were assumed to be understood without clear demarcation. Depiction of miscegenation (i.e. marital or sexual relations between different races) was forbidden. It also stated that the notion of an “adults-only policy” would be a dubious, ineffective strategy which would be difficult to enforce. However, it did allow that “maturer minds may easily understand and accept without harm subject matter in plots which does younger people positive harm.” If children were supervised and the events implied elliptically, the code allowed “the possibility of a cinematically inspired thought crime.”

The production code sought not only to determine what could be portrayed on screen but also to promote traditional values. Sexual relations outside of marriage—which were forbidden from being portrayed as attractive or beautiful—were to be presented in a way that would not arouse passion or make them seem permissible.
All criminal action had to be punished, and neither the crime nor the criminal could elicit sympathy from the audience, or the audience must at least be aware that such behavior is wrong, usually through “compensating moral value”.
Authority figures had to be treated with respect, and the clergy could not be portrayed as comic characters or villains. Under some circumstances, politicians, police officers, and judges could be villains, as long as it was clear that those individuals portrayed as villains were the exceptions to the rule.

The entire document was written with Catholic undertones and stated that art must be handled carefully because it could be “morally evil in its effects” and because its “deep moral significance” was unquestionable. It was initially decided to keep the Catholic influence on the Code secret. A recurring theme was “that throughout, the audience feels sure that evil is wrong and good is right”. The Code also contained an addendum commonly referred to as the Advertising Code which regulated advertising copy and imagery.
The first film the office reviewed, The Blue Angel, which was passed by Joy with no revisions, was considered indecent by a California censor. Although there were several instances where Joy negotiated cuts from films, and there were indeed definite—albeit loose—constraints, a significant amount of lurid material made it to the screen. Joy had to review 500 films a year with a small staff and little power. He was more willing to work with the studios, and his creative writing skills led to his hiring at Fox. On the other hand, his successor, James Wingate, struggled to keep up with the flood of scripts coming in, to the point where Warner Bros.’ head of production Darryl Zanuck wrote him a letter imploring him to pick up the pace.
In 1930, the Hays office did not have the authority to order studios to remove material from a film, and instead worked by reasoning and sometimes pleading with them. Complicating matters, the appeals process ultimately put the responsibility for making the final decision in the hands of the studios.

One factor in ignoring the code was the fact that some found such censorship prudish, due to the libertine social attitudes of the 1920s and early 1930s. This was a period in which the Victorian era was sometimes ridiculed as being naïve and backward. When the Code was announced, liberal periodical The Nation attacked it.

The publication stated that if crime were never to be presented in a sympathetic light, then taken literally that would mean that “law” and “justice” would become one and the same. Therefore, events such as the Boston Tea Party could not be portrayed. If clergy must always be presented in a positive way, then hypocrisy could not be dealt with either. The Outlook agreed, and, unlike Variety, The Outlook predicted from the beginning that the Code would be difficult to enforce. The Great Depression of the 1930s led many studios to seek income by any way possible. Since films containing racy and violent content resulted in high ticket sales, it seemed reasonable to continue producing such films. Soon, the flouting of the code became an open secret. In 1931, the Hollywood Reporter mocked the code and Variety followed suit in 1933. In the same year as the Variety article, a noted screenwriter stated that “the Hays moral code is not even a joke any more; it’s just a memory.”

On June 13, 1934, an amendment to the Code was adopted which established the Production Code Administration (PCA) and required all films released on or after July 1, 1934, to obtain a certificate of approval before being released. The PCA had two offices—one in Hollywood and the other in New York City. The first film to receive an MPPDA seal of approval was The World Moves On. For more than thirty years, virtually all motion pictures produced in the United States adhered to the code. The Production Code was not created or enforced by federal, state, or city government; the Hollywood studios adopted the code in large part in the hopes of avoiding government censorship, preferring self-regulation to government regulation. The enforcement of the Production Code led to the dissolution of many local censorship boards.

 


Hollywood worked within the confines of the Production Code until the late 1950s, when the movies were faced with very serious competitive threats. The first threat came from a new technology, television, which did not require Americans to leave their house to watch moving pictures. Hollywood needed to offer the public something it could not get on television, which itself was under an even more restrictive censorship code.

In addition to the threat of television, there was also increasing competition from foreign films, such as
Vittorio De Sica’s Bicycle Thieves (1948),
the Swedish film One Summer of Happiness (1951),
and Ingmar Bergman’s Summer with Monika (1953).
Vertical integration in the movie industry had been found to violate anti-trust laws, and studios had been forced to give up ownership of theatres by the Supreme Court in United States v. Paramount Pictures, Inc. (1948). The studios had no way to keep foreign films out, and foreign films were not bound by the Production Code. (For De Sica’s film, there was a censorship controversy when the MPAA demanded a scene where the lead characters talk to the prostitutes of a brothel be removed, regardless of the fact that there was no sexual or provocative activity.) Some British films—such as
Victim (1961),
A Taste of Honey (1961),
and The Leather Boys (1963)
—challenged traditional gender roles and openly confronted the prejudices against homosexuals, all in clear violation of the Hollywood Production Code. In keeping with the changes in society, sexual content that would have previously been banned by the Code was being retained. The anti-trust rulings also helped pave the way for independent art houses that would show films created by people such as Andy Warhol who worked outside the studio system.

In 1952, in the case of Joseph Burstyn, Inc. v. Wilson, the U.S. Supreme Court unanimously overruled its 1915 decision
(Mutual Film Corporation v. Industrial Commission of Ohio) and held that motion pictures were entitled to First Amendment protection, so that the New York State Board of Regents could not ban
“The Miracle”, a short film that was one half of L’Amore (1948),
an anthology film directed by Roberto Rossellini. Film distributor Joseph Burstyn released the film in the U.S. in 1950, and the case became known as the “Miracle Decision” due to its connection to Rossellini’s film. That reduced the threat of government regulation, which had formerly been cited as justification for the Production Code, and the PCA’s powers over the Hollywood industry were greatly reduced.

By the 1950s, American culture also began to change. A boycott by the National Legion of Decency no longer guaranteed a film’s commercial failure, and several aspects of the code had slowly lost their taboo. In 1956, areas of the code were rewritten to accept subjects such as miscegenation, adultery, and prostitution. For example, the remake of a pre-Code film dealing with prostitution, Anna Christie, was cancelled by MGM twice, in 1940 and in 1946, as the character of Anna was not allowed to be portrayed as a prostitute. By 1962, such subject matter was acceptable and the original film was given a seal of approval.

By the late 1950s, increasingly explicit films began to appear, such as
Anatomy of a Murder (1959),
Suddenly Last Summer (1959),
and The Dark at the Top of the Stairs (1961).
The MPAA reluctantly granted the seal of approval for these films, although not until certain cuts were made. Due to its themes, Billy Wilder’s
Some Like It Hot (1959)
was not granted a certificate of approval, but it still became a box office smash, and, as a result, it further weakened the authority of the Code. At the forefront of contesting the Code was director Otto Preminger, whose films violated the Code repeatedly in the 1950s.
His 1953 film The Moon Is Blue — about a young woman who tries to play two suitors off against each other by claiming that she plans to keep her virginity until marriage — was released without a certificate of approval. He later made
The Man with the Golden Arm (1955),
which portrayed the prohibited subject of drug abuse, and
Anatomy of a Murder (1959), which dealt with murder and rape.
Like Some Like It Hot, Preminger’s films were direct assaults on the authority of the Production Code, and their success hastened its abandonment. In the early 1960s, films began to deal with adult subjects and sexual matters that had not been seen in Hollywood films since the early 1930s. The MPAA reluctantly granted the seal of approval for these films, although again not until certain cuts were made.

In 1964, the Holocaust film The Pawnbroker, directed by Sidney Lumet and starring Rod Steiger, was initially rejected because of two scenes in which the actresses Linda Geiser and Thelma Oliver fully expose their breasts, as well as due to a sex scene between Oliver and Jaime Sánchez described as “unacceptably sex suggestive and lustful”. Despite the rejection, the film’s producers arranged for Allied Artists to release the film without the Production Code seal, with the New York censors licensing the film without the cuts demanded by Code administrators. The producers appealed the rejection to the Motion Picture Association of America.
On a 6–3 vote, the MPAA granted the film an exception conditional on “reduction in the length of the scenes which the Production Code Administration found unapprovable.” The requested reductions of nudity were minimal; the outcome was viewed in the media as a victory for the film’s producers.
The Pawnbroker
was the first film featuring bare breasts to receive Production Code approval. The exception to the code was granted as a “special and unique case” and was described by The New York Times at the time as “an unprecedented move that will not, however, set a precedent”. However, in Pictures at a Revolution, Mark Harris’ 2008 study of films during that era, Harris wrote that the MPAA approval was “the first of a series of injuries to the Production Code that would prove fatal within three years.”

In 1966, Warner Bros. released Who’s Afraid of Virginia Woolf?, the first film to feature the “Suggested for Mature Audiences” (SMA) label. When Jack Valenti became President of the MPAA in 1966, he was faced with censoring the film’s explicit language. Valenti negotiated a compromise: the word “screw” was removed, but other language remained, including the phrase “hump the hostess”. The film received Production Code approval despite the previously prohibited language.

That same year, the British-produced, American-financed film Blowup was denied Production Code approval. MGM released it anyway, the first instance of an MPAA member company distributing a film that did not have an approval certificate. That same year, the original and lengthy code was replaced by a list of eleven points. The points outlined that the boundaries of the new code would be current community standards and good taste. In addition, any film containing content deemed to be suitable for older audiences would feature the label SMA in its advertising. With the creation of this new label, the MPAA unofficially began classifying films.
By the late 1960s, enforcement had become impossible and the Production Code was abandoned entirely. The MPAA began working on a rating system, under which film restrictions would lessen. The MPAA film rating system went into effect on November 1, 1968, with four ratings: G for general audiences, M for mature content, R for restricted (under 17 not admitted without an adult), and X for sexually explicit content. By the end of 1968, Geoffrey Shurlock stepped down from his post.[50][51] In 1969, the Swedish film I Am Curious (Yellow), directed by Vilgot Sjöman, was initially banned in the U.S. for its frank depiction of sexuality; however, this was overturned by the Supreme Court.

In 1970, because of confusion over the meaning of “mature audiences”, the M rating was changed to GP, and then in 1972 to the current PG, for “parental guidance suggested”. In 1984, in response to public complaints regarding the severity of horror elements in PG-rated titles such as Gremlins and Indiana Jones and the Temple of Doom, the PG-13 rating was created as a middle tier between PG and R. In 1990, the X rating was replaced by NC-17 (under 17 not admitted), partly because of the stigma associated with the X rating, and partly because the X rating was not trademarked by the MPAA; pornographic bookstores and theaters were using their own X and XXX symbols to market products.








HowTo Apply Vintage Nylons, Stockings or Fine Hosiery






The Mary Jane Style today: dressing in classic vintage pin-up attire. “HowTo” apply authentic stockings worn from Picturesque*: designer vintage nylons, sheer with a nude back seam, keyhole top, and dainty black ankle design with jewels. There is quite a history in hosiery. Dress your legs elegantly so you are always dressed to kill.
First a must is…
A garter belt, or suspender belt or suspenders, is the most common way of holding up stockings. It is a piece of lingerie worn around the waist like a belt which has “suspenders” or “stays” that clip to the tops of the stockings to hold them in place.
The History of the very Nylons you see in this tutorial…
The name of the new discovery, nylon, came from DuPont entering it in the New York World’s Fair in 1939. Ny(-lon) is the abbreviation for New York. The publicity was a hit and the basic products were advertised.
Stockings were typically silk and pricey. When nylon stockings became available (May 15, 1940), DuPont sold nearly 800,000 the first day. By the end of the first year, 64 million stockings had been sold. They were still being produced the same as the silk stockings–”fully-fashioned” with hand-sewn backs.
A stocking frame was a mechanical knitting machine used in the textiles industry. It was invented by William Lee of Calverton near Nottingham in 1589. Its use, known traditionally as framework knitting, was the first major stage in the mechanisation of the textile industry, and played an important part in the early history of the Industrial Revolution.
By 1598 he was able to knit stockings from silk. A thriving business was built up with the exiled Huguenot silk-spinners who had settled in the village of Spitalfields just outside the city. In 1663, the London Company of Framework Knitters was granted a charter. By about 1785, however, demand was rising for cheaper stockings made of cotton. The frame was adapted, but became too expensive for individuals to buy, so wealthy men bought the machines and hired them out to the knitters, providing the materials and buying the finished product. With increasing competition, they ignored the standards set by the Chartered Company. Frames were introduced to Leicester by Nicholas Alsop in around 1680; he encountered resistance and at first worked secretly in a cellar in Northgate Street, taking his own sons and the children of near relatives as apprentices.
In 1728, the Nottingham magistrates refused to accept the authority of the London Company, and the centre of the trade moved northwards to Nottingham, which also had a lace making industry.
The breakthrough with cotton stockings, however, came in 1758 when Jedediah Strutt introduced an attachment for the frame which produced what became known as the “Derby Rib”. The Nottingham frameworkers found themselves increasingly short of raw materials. Initially they used thread spun in India, but this was expensive and required doubling. Lancashire yarn was spun for fustian and varied in texture. They tried spinning cotton themselves but, being used to the long fibres of wool, experienced great difficulty. Meanwhile, the Gloucester spinners, who had been used to a much shorter wool, were able to handle cotton and their frameworkers were competing with the Nottingham producers.
Before the 1920s, women’s stockings, if worn, were worn for warmth. In the 1920s, as hemlines of women’s dresses rose, women began to wear stockings to cover the exposed legs. These stockings were sheer, first made of silk or rayon (then known as “artificial silk”), and after 1940 of nylon.
In modern usage, stocking specifically refers to the form of women’s hosiery configured as two pieces, one for each leg (except in American and Australian English, where the term can also be a synonym for pantyhose). The terms hold-ups and thigh-highs refer to stockings that stay up by the use of built-in elastic, while the word stockings is the general term, or refers to the kind of stockings that need a suspender belt (garter belt, in American English), and are quite distinct from tights or pantyhose (American English).

Hosiery Style Definitions
Cuban heel: A stocking with a heel made with folded over and sewn reinforcement.

Demi-toe: Stockings which have a reinforced toe with half the coverage on top as on the bottom. This results in a reinforcement that covers only the tip of the toes as opposed to the whole toe. These can be with or without a reinforced heel.

Denier: The lower the denier number, the sheerer the garment. Stockings knitted with a higher denier tend to be less sheer but more durable.

Fishnet: Knitted stockings with a very wide open knit resembling a fish net.

Fencenet: Similar to fishnet, but with a much wider pattern. These are sometimes worn over another pair of stockings or pantyhose, such as matte or opaque, with a contrasting colour. Sometimes referred to as whalenets.

Full Fashioned: Fully fashioned stockings are knitted flat, the material is then cut and the two sides are then united by a seam up the back. Fully fashioned stockings were the most popular style until the 1960s.

Hold-ups (British English) or Stay-ups: Stockings that are held up by sewn-in elasticated bands (quite often a wide lace top band). In the US they are referred to as thigh-highs.

Knee-Highs: Stockings that terminate at or just barely below the knee. Also known as half-stockings, trouser socks, or socks.

Matte: Stockings which have a dull or non-lustre finish.

Mock seam: A false seam sewn into the back of a seamless stocking.

Nude heel: Stockings without reinforcement in the heel area.

Opaque: Stockings made of yarn which give them a heavier appearance (usually 40 denier or greater).

Point heel: In a Fully Fashioned stocking, a heel in which the reinforced part ends in a triangle shape.

RHT: Abbreviation of reinforced heel and toe.

Open-toed: Stockings that stop at the base of the toe with a piece that goes between the first and second toes to hold them down. They can be worn with some open-toed shoes, especially to show off pedicured toes.

Sandalfoot: Stockings with a nude toe, meaning no heavier yarn in the toe than is in the leg. They are conceived to be worn with sandal or open-toe shoes.

Seamed: Stockings manufactured in the old Full-Fashioned manner with a seam running up the back of the leg. In the past they were manufactured by cutting the fabric and then sewing it together. Today stockings are generally fully knitted and a fake or mock seam is added up the back for a particular fashion look. Some brands also produce seamed hold-ups.

Seamless: Stockings knit in one operation on circular machines (one continuous operation) so that no seaming is required up the back.

Sheers: Stockings generally of a 15 to 20 denier.

Thigh-Highs: Stockings that terminate somewhere in the mid-thigh.

Garters
Suspender belt (British English) or garter belt (American English): a belt with straps to keep stockings (not hold-ups) in place; usually they have 4 straps, but may also have 6 or 8.
Ultra Sheer: A fine denier fiber which gives the ultimate in sheerness. Usually 10 denier.
Welt: A fabric knitted separately and machine-sewn to the top of a stocking. Knit in a heavier denier yarn and folded double to give strength for supporter fastening.

These Jewels vintage stockings in nude from Picturesque
are a great take on the classic stocking, featuring an array of heat-fixed white luxury crystals on 15 denier nylon.
These nylons are silky 100% nylon and 60 gauge.
Non-stretching, original form-fitting, first quality. Ultra sexy!
They are new/old stock, wrapped in original tissue paper, and come with their original Picturesque box.
A great item to wear or collect! Made in USA by Sanson Hosiery Mills Inc.
These fabulously sexy nylon stockings are new/old stock and unworn.
The nylon stockings are in mint condition.
The box has age marks and is not 100% perfect.
These nylon stockings are circa 40-50 years old and extremely rare.