The Feel of Noir …

Film noir (/fɪlm nwɑːr/; French pronunciation: [film nwaʁ]) is a cinematic term used primarily to describe stylish Hollywood crime dramas, particularly those that emphasize cynical attitudes and sexual motivations. Hollywood’s classical film noir period is generally regarded as extending from the early 1940s to the late 1950s. Film noir of this era is associated with a low-key black-and-white visual style that has roots in German Expressionist cinematography. Many of the prototypical stories and much of the attitude of classic noir derive from the hardboiled school of crime fiction that emerged in the United States during the Great Depression.

The term film noir, French for “black film” (literal) or “dark film” (closer meaning), first applied to Hollywood films by French critic Nino Frank in 1946, was unrecognized by most American film industry professionals of that era. Cinema historians and critics defined the category retrospectively. Before the notion was widely adopted in the 1970s, many of the classic films noirs were referred to as melodramas. Whether film noir qualifies as a distinct genre is a matter of ongoing debate among scholars.

Film noir encompasses a range of plots: the central figure may be a private eye (The Big Sleep), a plainclothes policeman (The Big Heat), an aging boxer (The Set-Up), a hapless grifter (Night and the City), a law-abiding citizen lured into a life of crime (Gun Crazy), or simply a victim of circumstance (D.O.A.). Although film noir was originally associated with American productions, films now so described have been made around the world. Many pictures released from the 1960s onward share attributes with film noir of the classical period, and often treat its conventions self-referentially. Some refer to such latter-day works as neo-noir.

The questions of what defines film noir, and what sort of category it is, provoke continuing debate. “We’d be oversimplifying things in calling film noir oneiric, strange, erotic, ambivalent, and cruel”: this set of attributes constitutes the first of many attempts to define film noir made by French critics Raymond Borde and Étienne Chaumeton in their 1955 book Panorama du film noir américain 1941–1953 (A Panorama of American Film Noir), the original and seminal extended treatment of the subject. They emphasize that not every film noir embodies all five attributes in equal measure—one might be more dreamlike; another, particularly brutal. The authors’ caveats and repeated efforts at alternative definition have been echoed in subsequent scholarship: in the more than five decades since, there have been innumerable further attempts at definition, yet in the words of cinema historian Mark Bould, film noir remains an “elusive phenomenon … always just out of reach”.

Though film noir is often identified with a visual style, unconventional within a Hollywood context, that emphasizes low-key lighting and unbalanced compositions, films commonly identified as noir evidence a variety of visual approaches, including ones that fit comfortably within the Hollywood mainstream. Film noir similarly embraces a variety of genres, from the gangster film to the police procedural to the gothic romance to the social problem picture—any example of which from the 1940s and 1950s, now seen as noir’s classical era, was likely to be described as a “melodrama” at the time.

While many critics refer to film noir as a genre itself, others argue that it can be no such thing. While noir is often associated with an urban setting, many classic noirs take place in small towns, suburbia, rural areas, or on the open road; so setting cannot be its genre determinant, as with the Western. Similarly, while the private eye and the femme fatale are character types conventionally identified with noir, the majority of film noirs feature neither; so there is no character basis for genre designation as with the gangster film. Nor does film noir rely on anything as evident as the monstrous or supernatural elements of the horror film, the speculative leaps of the science fiction film, or the song-and-dance routines of the musical.

A more analogous case is that of the screwball comedy, widely accepted by film historians as constituting a “genre”: the screwball is defined not by a fundamental attribute, but by a general disposition and a group of elements, some—but rarely and perhaps never all—of which are found in each of the genre’s films. However, because of the diversity of noir (much greater than that of the screwball comedy), certain scholars in the field, such as film historian Thomas Schatz, treat it as not a genre but a “style”. Alain Silver, the most widely published American critic specializing in film noir studies, refers to film noir as a “cycle” and a “phenomenon”, even as he argues that it has—like certain genres—a consistent set of visual and thematic codes. Other critics treat film noir as a “mood”, characterize it as a “series”, or simply address a chosen set of films they regard as belonging to the noir “canon”. There is no consensus on the matter.

Film noir’s aesthetics are deeply influenced by German Expressionism, an artistic movement of the 1910s and 1920s that involved theater, photography, painting, sculpture, and architecture, as well as cinema. The opportunities offered by the booming Hollywood film industry, and, later, the threat of growing Nazi power, led to the emigration of many important film artists working in Germany who had either been directly involved in the Expressionist movement or studied with its practitioners. Fritz Lang’s M (1931), shot only a few years before his departure from Germany, is among the first major crime films of the sound era to join a characteristically noirish visual style with a noir-type plot, one in which the protagonist is a criminal (as are his most successful pursuers). Directors such as Lang, Robert Siodmak, and Michael Curtiz brought a dramatically shadowed lighting style and a psychologically expressive approach to visual composition, or mise-en-scène, with them to Hollywood, where they would make some of the most famous of classic noirs.

By 1931, Curtiz had already been in Hollywood for half a decade, making as many as six films a year. Movies of his such as 20,000 Years in Sing Sing (1932) and Private Detective 62 (1933) are among the early Hollywood sound films arguably classifiable as noir—scholar Marc Vernet offers the latter as evidence that dating the initiation of film noir to 1940 or any other year is “arbitrary”. Giving Expressionist-affiliated filmmakers particularly free stylistic rein were Universal horror pictures such as Dracula (1931), The Mummy (1932)—the former photographed and the latter directed by the Berlin-trained Karl Freund—and The Black Cat (1934), directed by Austrian émigré Edgar G. Ulmer. The Universal horror that comes closest to noir, both in story and sensibility, however, is The Invisible Man (1933), directed by Englishman James Whale and photographed by American Arthur Edeson. Edeson would subsequently photograph The Maltese Falcon (1941), widely regarded as the first major film noir of the classic era.

The Vienna-born but largely American-raised Josef von Sternberg was directing in Hollywood at the same time. Films of his such as Shanghai Express (1932) and The Devil Is a Woman (1935), with their hothouse eroticism and baroque visual style, specifically anticipate central elements of classic noir. The commercial and critical success of Sternberg’s silent Underworld (1927) was largely responsible for spurring a trend of Hollywood gangster films. Successful films in that genre such as Little Caesar (1931), The Public Enemy (1931), and Scarface (1932) demonstrated that there was an audience for crime dramas with morally reprehensible protagonists. An important, and possibly influential, cinematic antecedent to classic noir was 1930s French poetic realism, with its romantic, fatalistic attitude and celebration of doomed heroes. The movement’s sensibility is mirrored in the Warner Bros. drama I Am a Fugitive from a Chain Gang (1932), a key forerunner of noir. Among those films not themselves considered films noir, perhaps none had a greater effect on the development of the genre than Citizen Kane (1941), directed by Orson Welles. Its visual intricacy and complex, voiceover-driven narrative structure are echoed in dozens of classic films noir.

Italian neorealism of the 1940s, with its emphasis on quasi-documentary authenticity, was an acknowledged influence on trends that emerged in American noir. The Lost Weekend (1945), directed by Billy Wilder, another Vienna-born, Berlin-trained American auteur, tells the story of an alcoholic in a manner evocative of neorealism. It also exemplifies the problem of classification: one of the first American films to be described as a film noir, it has largely disappeared from considerations of the field. Director Jules Dassin of The Naked City (1948) pointed to the neorealists as inspiring his use of on-location photography with nonprofessional extras. This semidocumentary approach characterized a substantial number of noirs in the late 1940s and early 1950s. Along with neorealism, the style had a homegrown precedent, specifically cited by Dassin, in director Henry Hathaway’s The House on 92nd Street (1945), which demonstrated the parallel influence of the cinematic newsreel.

Literary sources
The October 1934 issue of Black Mask featured the first appearance of the detective character whom Raymond Chandler would develop into the famous Philip Marlowe.
The primary literary influence on film noir was the hardboiled school of American detective and crime fiction, led in its early years by such writers as Dashiell Hammett (whose first novel, Red Harvest, was published in 1929) and James M. Cain (whose The Postman Always Rings Twice appeared five years later), and popularized in pulp magazines such as Black Mask. The classic film noirs The Maltese Falcon (1941) and The Glass Key (1942) were based on novels by Hammett; Cain’s novels provided the basis for Double Indemnity (1944), Mildred Pierce (1945), The Postman Always Rings Twice (1946), and Slightly Scarlet (1956; adapted from Love’s Lovely Counterfeit). A decade before the classic era, a story by Hammett was the source for the gangster melodrama City Streets (1931), directed by Rouben Mamoulian and photographed by Lee Garmes, who worked regularly with Sternberg. Released the month before Lang’s M, and wedding a noirish style to a noirish story, City Streets has a claim to being the first major film noir.

Raymond Chandler, who debuted as a novelist with The Big Sleep in 1939, soon became the most famous author of the hardboiled school. Not only were Chandler’s novels turned into major noirs—Murder, My Sweet (1944; adapted from Farewell, My Lovely), The Big Sleep (1946), and Lady in the Lake (1947)—he was an important screenwriter in the genre as well, producing the scripts for Double Indemnity, The Blue Dahlia (1946), and Strangers on a Train (1951). Where Chandler, like Hammett, centered most of his novels and stories on the character of the private eye, Cain featured less heroic protagonists and focused more on psychological exposition than on crime solving; the Cain approach has come to be identified with a subset of the hardboiled genre dubbed “noir fiction”.

For much of the 1940s, one of the most prolific and successful authors of this often downbeat brand of suspense tale was Cornell Woolrich (sometimes under the pseudonym George Hopley or William Irish). No writer’s published work provided the basis for more films noir of the classic period than Woolrich’s: thirteen in all, including Black Angel (1946), Deadline at Dawn (1946), and Fear in the Night (1947).

Another crucial literary source for film noir was W. R. Burnett, whose first novel to be published was Little Caesar, in 1929. It would be turned into a hit for Warner Bros. in 1931; the following year, Burnett was hired to write dialogue for Scarface, while The Beast of the City (1932) was adapted from one of his stories. At least one important reference work identifies the latter as a film noir despite its early date. Burnett’s characteristic narrative approach fell somewhere between that of the quintessential hardboiled writers and their noir fiction compatriots—his protagonists were often heroic in their way, a way just happening to be that of the gangster. During the classic era, his work, either as author or screenwriter, was the basis for seven films now widely regarded as films noir, including three of the most famous: High Sierra (1941),

Ep#30 How To Apply Full Fashion Nylons Blindfolded TMJS



TheMaryJaneStyle How To Walk in The Highest of High Heels Ep#29

Woman’s yellow silk shoes, 1760s
High-heeled footwear is footwear that raises the heel of the wearer’s foot significantly higher than the toes. When both the heel and the toes are raised equal amounts,
as in a platform shoe, it is technically not considered to be a high heel; however, there are also high-heeled platform shoes. High heels tend to give the aesthetic illusion
of longer, more slender legs. High heels come in a wide variety of styles, and the heels are found in many different shapes, including stiletto, pump (court shoe), block,
tapered, blade, and wedge.

 

According to high-fashion shoe websites like Jimmy Choo and Gucci, a “low heel” is considered less than 2.5 inches (6.4 centimeters), while heels between 2.5 and 3.5 inches
(6.4 and 8.9 cm) are considered “mid heels”, and anything over that is considered a “high heel”. The apparel industry would appear to take a simpler view; the term
“high heels” covers heels ranging from 2 to 5 inches (5.1 to 12.7 cm) or more. Extremely high-heeled shoes, such as those exceeding 6 inches (15 cm), strictly speaking,
are no longer considered apparel but rather something akin to “jewelry for the feet”. They are worn for display or the enjoyment of the wearer.
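
For readers who like the numbers spelled out, the thresholds quoted above can be written as a few lines of Python. This is only an illustrative sketch: the inch cut-offs are the ones mentioned in this section, while the function name and category labels are simply my own.

    def heel_category(height_inches):
        # Label a heel height using the thresholds quoted above.
        if height_inches < 2.5:
            return "low heel"                # under 2.5 in (6.4 cm)
        elif height_inches <= 3.5:
            return "mid heel"                # 2.5 to 3.5 in (6.4 to 8.9 cm)
        elif height_inches <= 6:
            return "high heel"               # anything over 3.5 in
        else:
            return "jewelry for the feet"    # extreme heels, worn mainly for display

    print(heel_category(4))  # -> high heel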

Although high heels are now usually worn only by girls and women, there are shoe designs worn by both genders that have elevated heels, including cowboy boots
and Cuban heels. In previous ages, men also wore high heels.

In the ninth century, Persian horseback warriors wore an extended heel designed to keep their feet from sliding out of stirrups. The heel also helped riders stay steady
when they stood up in the stirrups to shoot arrows.

Stiletto heel

A shoe with a stiletto heel
A stiletto heel is a long, thin, high heel found on some boots and shoes, usually for women.

It is named after the stiletto dagger, the phrase being first recorded in the early 1930s. Stiletto heels may vary in length from 2.5 centimeters (1 inch) to 25 cm
(10 inches) or more if a platform sole is used, and are sometimes defined as having a diameter at the ground of less than 1 cm (slightly less than half an inch).
Stiletto-style heels 5 cm (2.0 in) or shorter are called kitten heels.

Not all high slim heels merit the description stiletto. The extremely slender original Italian-style stiletto heels of the late 1950s and very early 1960s were no more
than 5 mm (0.20 in) in diameter for much of their length, although the heel sometimes flared out a little at the top-piece (tip). After their demise in the mid-late 1960s,
such slender heels were difficult to find until recently due to changes in the way heels were mass-produced. A real stiletto heel has a stem of solid steel or alloy.
The more usual method of mass-producing high shoe heels, i.e. molded plastic with an internal metal tube for reinforcement, does not achieve the true stiletto shape.

A pair of shoes with 12 cm stiletto heels
Relatively thin high heels were certainly around in the late 19th century, as numerous fetish drawings attest. Firm photographic evidence exists in the form of photographs
of Parisian singer Mistinguett from the 1940s. These shoes were designed by Andre Perugia, who began designing shoes in 1906. It seems unlikely that he invented the stiletto,
but he is probably the first firmly documented designer of the high, slim heel. The word comes from the stiletto, a long thin dagger similar in profile to the heel of
the shoe. Its usage in footwear first appeared in print in the New Statesman magazine in 1959: “She came … forward, her walk made lopsided by the absence of
one heel of the stilettos”.

High-heeled shoes were once worn by both male and female courtiers. The stiletto heel arrived with the technology of a supporting metal shaft or stem embedded in the heel,
instead of the wood or other, weaker materials that had required a wide heel. This revival of the opulent heel style can be attributed to the designer Roger Vivier, and
such designs became very popular in the 1950s.

 

As time went on, stiletto heels would become known more for their erotic nature than for their ability to add height. Stiletto heels are a common fetish item. As a fashion
item, their popularity has changed over time. After an initial wave of popularity in the 1950s, they reached their most refined shape in the early 1960s, when the toes of
the shoes which bore them became as slender and elongated as the stiletto heels themselves. As a result of the overall sharpness of outline, it was customary for women to
refer to the whole shoe as a “stiletto”, not just the heel, via synecdoche (pars pro toto). Although they officially faded from the scene after the Beatle era began, their
popularity continued at street level, and women stubbornly refused to give them up even after they could no longer readily find them in the mainstream shops. A version of
the stiletto heel was reintroduced in 1974 by Manolo Blahnik, who dubbed his “new” heel the “Needle”. Similar heels were stocked at the big Biba store in London, by Russell
& Bromley and by smaller boutiques. Old, unsold stocks of pointed-toe stilettos and contemporary efforts to replicate them (lacking the true stiletto heel because of changes
in the way heels were by then being mass-produced) were sold in street fashion markets and became popular with punks and with other fashion “tribes” of the late 1970s until
supplies of the inspirational original styles dwindled in the early 1980s. Subsequently, round-toe shoes with slightly thicker (sometimes cone-shaped) semi-stiletto heels,
often very high in an attempt to convey slenderness, were frequently worn at the office with wide-shouldered power suits. The style survived through much of the 1980s but
almost completely disappeared during the 1990s, when professional and college-age women took to wearing shoes with thick, block heels. The slender stiletto heel staged a
major comeback after 2000 when young women adopted the style for dressing up office wear or adding a feminine touch to casual wear, like jeans.

 

Stiletto heels are particularly associated with the image of the femme fatale. They are often considered to be a seductive item of clothing, and often feature in
popular culture in this context.

History
Medieval Europeans wore wooden-soled patten shoes, which were ancestors to contemporary high heels. Elizabeth Semmelhack, curator at Toronto’s Bata Shoe Museum,
traces the high heel to Persian horse riders in the Near East who used high heels for functionality, because they helped hold the rider’s foot in stirrups.
She states that this footwear is depicted on a 9th-century ceramic bowl from Persia.

 

It is sometimes suggested that raised heels were a response to the problem of the rider’s foot slipping forward in stirrups while riding.
The “rider’s heel”, approximately 1 1⁄2 inches (3.8 cm) high, appeared in Europe around 1600. The leading edge was canted forward to help grip the stirrup, and the trailing
edge was canted forward to prevent the elongated heel from catching on underbrush or rock while backing up, such as in on-foot combat. These features are evident today
in riding boots, notably cowboy boots.

Ancient Egypt

Early depictions of high heels can be seen on ancient Egyptian murals dating back to 3500 BC. These murals depict Egyptian nobles wearing heels to set them
apart from the lower class, who would normally go barefoot. Heeled shoes were worn by both men and women, and most commonly for ceremonial purposes. However, high heels also
served a practical purpose for Egyptian butchers, who wore them in order to walk over the bloodied bodies of animal carcasses. Egyptian heels were leather
pieces held together by lacing in the shape of the ankh symbol, signifying life.

Ancient Greece and Rome

Platform sandals called “kothorni” or “buskins” were shoes with high wood or cork soles worn during the ancient Greek and Roman eras. They were particularly popular among
actors, who wore them to signal the social class and importance of each character. In ancient Rome, where the sex trade was legal, high heels were used to identify
those within the trade to potential clients, and high heels became associated with prostitution.

Contemporary scene

Since the Second World War, high heels have fallen in and out of fashion several times, most notably in the late 1990s, when lower heels and even flats
predominated. Lower heels were preferred during the late 1960s and early 1970s as well, but higher heels returned in the late 1980s and early 1990s.
The shape of the fashionable heel has also changed from block (1970s) to tapered (1990s) and stiletto (1950s, early 1960s, 1980s, and post-2000).

Today, high heels are typically worn at heights varying from a kitten heel of 1.5 inches (3.8 cm) to a stiletto heel (or spike heel) of 5 inches (13 cm) or more.
Extremely high-heeled shoes, such as those higher than 6 inches (15 cm), are normally worn only for aesthetic reasons and are not considered practical. Court shoes
are conservative styles and often used for work and formal occasions, while more adventurous styles are common for evening wear and dancing. High heels have seen
significant controversy in the medical field lately, with many podiatrists seeing patients whose severe foot problems have been caused almost exclusively by high-heel wear.

The wedge is another informal heel style, in which the heel takes a wedge form that continues all the way to the toe of the shoe.

 


Negative effects

The case against wearing high heels is based almost exclusively on health and practicality reasons, including that they:
can cause foot and tendon pain;
increase the likelihood of sprains and fractures;
make calves look more rigid and sinewy;
can create foot deformities, including hammer toes and bunions;
can cause an unsteady gait;
can shorten the wearer’s stride;
can render the wearer unable to run;
can exacerbate lower back pain;
alter forces at the knee so as to predispose the wearer to degenerative changes in the knee joint;
can result, after frequent wearing, in a higher incidence of degenerative joint disease of the knees, because they decrease the normal rotation of the foot and thus put more rotational stress on the knee;
can cause damage to soft floors if they are thin or metal-tipped.

Positive effects

 

The case for wearing high heels is based almost exclusively on aesthetic reasons, including that they:
change the angle of the foot with respect to the lower leg, which accentuates the appearance of calves;
change the wearer’s posture, requiring a more upright carriage and altering the gait in what is considered a seductive fashion;
make the wearer appear taller;
make the legs appear longer;
make the foot appear smaller;
make the toes appear shorter;
make the arches of the feet higher and better defined;
according to a single line of research, they may improve the muscle tone of some women’s pelvic floor, thus possibly reducing female incontinence,
although these results have been disputed;
offer practical benefits for people of short stature in terms of improving access and using items, e.g. sitting upright with feet on floor instead of suspended,
reaching items on shelves, etc.
During the 16th century, European royalty, such as Catherine de Medici and Mary I of England, started wearing high-heeled shoes to make them look taller or larger than life.
By 1580, men also wore them, and a person with authority or wealth was often referred to as “well-heeled”.

In modern society, high-heeled shoes are a part of women’s fashion, perhaps more as a sexual prop. High heels force the body to tilt, emphasizing the buttocks and breasts.
They also emphasize the role of feet in sexuality, and the act of putting on stockings or high heels is often seen as an erotic act. This desire to look sexy and erotic
continues to drive women to wear high-heeled shoes, despite causing significant pain in the ball of the foot, or bunions or corns, or hammer toe. A survey conducted by the
American Podiatric Medical Association showed that some 42% of women admitted they would wear a shoe they liked even if it gave them discomfort.

Types of high heels

Types of heels found on high-heeled footwear include:
cone: a round heel that is broad where it meets the sole of the shoe and noticeably narrower at the point of contact with the ground
kitten: a short, slim heel with maximum height under 2 inches and diameter of no more than 0.4 inch at the point of contact with the ground
prism: three flat sides that form a triangle at the point of contact with the ground
puppy: thick square block heel approximately 2 inches in diameter and height
spool or louis: broad where it meets the sole and at the point of contact with the ground; noticeably narrower at the midpoint between the two
stiletto: a tall, slim heel with minimum height of 2 inches and diameter of no more than 0.4 inch at the point of contact with the ground
wedge: occupies the entire space under the arch and heel portions of the foot.

 

Men and heels

The Vision of Saint Eustace, Pisanello, 1438–1442. Rider wearing high heels.
Elizabeth Semmelhack, curator for the Bata Shoe Museum, traces the high heel to male horse-riding warriors in the Middle East who used high heels for functionality,
because they help hold the rider’s foot in stirrups. She states that the earliest high heel she has seen is depicted on a 9th-century AD ceramic bowl from Persia.

Since the late 18th century, men’s shoes have featured lower heels than most women’s shoes. Some attribute it to Napoleon, who disliked high heels; others to the
general trend of minimizing non-functional items in men’s clothing. Cowboy boots remain a notable exception, and they continue to be made with a taller riding heel.
The two-inch Cuban heel featured in many styles of men’s boot derives its heritage from certain Latino roots, most notably various forms of Spanish and Latin American dance,
including Flamenco, as most recently evidenced by Joaquín Cortés. Cuban heels were first widely popularized, however, by Beatle boots, as worn by the English rock group
The Beatles during their introduction to the United States. Some say this saw the re-introduction of higher-heeled footwear for men in the 1960s and 1970s
(in Saturday Night Fever, John Travolta’s character wears a Cuban heel in the opening sequence). The singers Prince and Elton John are known to wear high heels.
Bands such as Mötley Crüe and Sigue Sigue Sputnik predominantly wore high heels during the 1980s. Current well-known male heel wearers include Prince, Justin Tranter,
lead singer of Semi Precious Weapons, and Bill Kaulitz, the lead singer of Tokio Hotel. Popular R&B singer Miguel was wearing his trademark Cuban heels during the “legdrop”
incident at the 2013 Billboard Music Awards. Winklepicker boots often feature a Cuban heel.

Accessories

The stiletto of certain kinds of high heels can damage some types of floors. Such damage can be prevented by heel protectors, also called covers, guards, or taps,
which fit over the stiletto tips to keep them from direct, marring contact with delicate surfaces, such as linoleum (rotogravure) or urethane-varnished wooden floors.
Heel protectors are widely used in ballroom dancing, as such dances are often held on wooden flooring. The bottom of most heels usually has a plastic or metal heel tip
that wears away with use and can be easily replaced. Dress heels (high-heeled shoes with elaborate decoration) are worn for formal occasions.

 

Other specialized heel protectors make it feasible to walk on grass or soft earth (though not mud, sand, or water) during outdoor events, removing the need to
have specialized carpeting or flooring on an outdoor or soft surface. Certain heel protectors also improve the balance of the shoe and reduce the strain that certain
high heeled or stiletto shoes can place on the foot.

Health effects
Foot and tendon problems

High-heeled shoes slant the foot forward and down while bending the toes up. The more the feet are forced into this position, the more it may cause the gastrocnemius muscle
(part of the calf muscle) to shorten. This may cause problems when the wearer chooses lower heels or flat-soled shoes. When the foot slants forward, a much greater weight
is transferred to the ball of the foot and the toes, increasing the likelihood of damage to the underlying soft tissue that supports the foot. In many shoes, style dictates
function, either compressing the toes or forcing them together, possibly resulting in blisters, corns, hammer toes, bunions (hallux valgus), Morton’s neuroma, plantar
fasciitis and many other medical conditions, most of which are permanent and require surgery to alleviate the pain. High heels, because they tip the foot forward,
put pressure on the lower back by making the rump push outwards, crushing the lower back vertebrae and contracting the muscles of the lower back.

 

If the wearer believes it is not possible to avoid high heels altogether, it is suggested that they spend at least a third of the time they spend on their feet
in contour-supporting “flat” shoes (such as exercise sandals) or well-cushioned sneaker-type shoes, saving high heels for special occasions. If heels are a necessity
of the job, as for some lawyers, it is recommended that they limit the height of the heel they wear and, if they are in court, remain seated as much as possible to avoid
damage to the feet. It is also recommended to wear a belt with heels if possible, because the elevation of the foot and extension of the leg can cause pants to fit more
loosely than intended. In the winter, seat warmers can also be used with heels to relax and loosen muscles all over the body.

One of the most critical problems of high-heeled shoe design involves a properly constructed toe-box. Improper construction here can cause the most damage to one’s foot.
Toe-boxes that are too narrow force the toes to be crammed too close together. Ensuring that room exists for the toes to assume a normal separation so that high-heel wear
remains an option rather than a debilitating practice is an important issue in improving the wearability of high-heeled fashion shoes.

Wide heels do not necessarily offer more stability, and any raised heel with too much width, such as found in “blade-heeled” or “block-heeled” shoes, induces unhealthy
side-to-side torque to the ankles with every step, stressing them unnecessarily, while creating additional impact on the balls of the feet. Thus, the best design for a
high heel is one with a narrower width, where the heel is closer to the front, more solidly under the ankle, where the toe box provides room enough for the toes, and where
forward movement of the foot in the shoe is kept in check by material snug across the instep, rather than by the toes being rammed forward and jamming together in the
toe box or crushed into the front of the toe box.

Pelvic floor muscle tone

A 2008 study by Cerruto et al. reported results that suggest that wearing high heels may improve the muscle tone of a woman’s pelvic floor. The authors speculated that this
could have a beneficial effect on female stress urinary incontinence.

 

Feminist attitudes

The high heel has been a central battleground of sexual politics ever since the emergence of the women’s liberation movement of the 1970s. Many second-wave feminists
rejected what they regarded as constricting standards of female beauty, created for the subordination and objectification of women and self-perpetuated by reproductive
competition and women’s own aesthetics.

The British-American journalist Hadley Freeman wrote, “For me, high heels are just fancy foot binding with a three-figure price tag”, although she supported the
freedom to choose what to wear and stated that “one person’s embrace of their sexuality is another person’s patriarchal oppression.”


Vintage Stockings Ep.28





TheMaryJaneStyle Ep.27 Ch.1-3 “History of Cameras”

Ch. 1: History

The history of the camera can be traced much further back than the introduction of photography. Cameras evolved from the camera obscura, and continued to change through many generations of photographic technology, including daguerreotypes, calotypes, dry plates, film, and digital cameras.

The camera obscura
A camera obscura (Latin: “dark chamber”) is an optical device that led to photography and the photographic camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, rotated 180 degrees (thus upside-down), but with color and perspective preserved. The image can be projected onto paper, and can then be traced to produce a highly accurate representation. The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.[1]

Using mirrors, as in an 18th-century overhead version, it is possible to project a right-side-up image. Another more portable type is a box with an angled mirror projecting onto tracing paper placed on the glass top, the image being upright as viewed from the back.

As the pinhole is made smaller, the image gets sharper, but the projected image becomes dimmer. With too small a pinhole, however, the sharpness worsens, due to diffraction. Most practical camera obscuras use a lens rather than a pinhole (as in a pinhole camera) because it allows a larger aperture, giving a usable brightness while maintaining focus.
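
There is a sweet spot between those two effects. A commonly quoted rule of thumb (often attributed to Lord Rayleigh, and not something stated in this post) puts the best pinhole diameter at roughly 1.9 times the square root of the focal length multiplied by the wavelength of light. The short Python sketch below works that estimate out for an assumed shoebox-sized camera obscura and green light; the specific numbers are illustrative only.

    import math

    def optimal_pinhole_diameter_mm(focal_length_mm, wavelength_nm=550):
        # Rayleigh-style estimate balancing geometric blur (hole too big)
        # against diffraction blur (hole too small).
        wavelength_mm = wavelength_nm * 1e-6  # nanometres -> millimetres
        return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

    # Example: a box about 300 mm deep, roughly shoebox-sized
    print(f"{optimal_pinhole_diameter_mm(300):.2f} mm")  # about 0.77 mm

Even at that optimum the image is dim, which is one reason, as noted above, most practical camera obscuras use a lens instead of a pinhole.
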
An artist using an 18th-century camera obscura to trace an image
Photographic cameras were a development of the camera obscura, a device possibly dating back to the ancient Chinese[1] and ancient Greeks,[2][3] which uses a pinhole or lens to project an image of the scene outside upside-down onto a viewing surface.

An Arab physicist, Ibn al-Haytham, published his Book of Optics in 1021 AD. He created the first pinhole camera after observing how light traveled through a window shutter, and he realized that smaller holes would create sharper images. Ibn al-Haytham is also credited with inventing the first camera obscura.[4]

On 24 January 1544 the mathematician and instrument maker Reinerus Gemma Frisius of Leuven University used one to watch a solar eclipse, publishing a diagram of his method in De Radio Astronomica et Geometrica the following year.[5] In 1558 Giovanni Battista della Porta was the first to recommend the method as an aid to drawing.[6]
Early fixed images
The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce,[7][8] using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a sliding wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea.[9] The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived.
Before the invention of photographic processes there was no way to preserve the images produced by these cameras apart from manually tracing them. The earliest cameras were room-sized, with space for one or more people inside; these gradually evolved into more and more compact models, such that by Niépce’s time portable handheld cameras suitable for photography were readily available. The first camera that was small and portable enough to be practical for photography was envisioned by Johann Zahn in 1685, though it would be almost 150 years before such an application was possible.
The history of photography has roots in remote antiquity with the discovery of the principle of the camera obscura and the observation that some substances are visibly altered by exposure to light. As far as is known, nobody thought of bringing these two phenomena together to capture camera images in permanent form until around 1800, when Thomas Wedgwood made the first reliably documented although unsuccessful attempt. In the mid-1820s, Nicéphore Niépce succeeded, but several days of exposure in the camera were required and the earliest results were very crude. Niépce’s associate Louis Daguerre went on to develop the daguerreotype process, the first publicly announced photographic process, which required only minutes of exposure in the camera and produced clear, finely detailed results. It was commercially introduced in 1839, a date generally accepted as the birth year of practical photography.[1]
Daguerreotypes and calotypes
After Niépce’s death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839.[10] Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many seconds—or minutes—as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard.[11]
Daguerreotype

Daguerreotype of Louis Daguerre in 1844 by Jean-Baptiste Sabatier-Blot
The daguerreotype (/dəˈɡɛrɵtaɪp/; French: daguerréotype) process, or daguerreotypy, was the first publicly announced photographic process, and for nearly twenty years, it was the one most commonly used. It was invented by Louis-Jacques-Mandé Daguerre and introduced worldwide in 1839.[1][2][3] By 1860, new processes which were less expensive and produced more easily viewed images had almost completely replaced it. During the past few decades, there has been a small-scale revival of daguerreotypy among photographers interested in making artistic use of early photographic processes.

To make a daguerreotype, the daguerreotypist polished a sheet of silver-plated copper to a mirror finish; treated it with fumes that made its surface light-sensitive; exposed it in a camera for as long as was judged to be necessary, which could be as little as a few seconds for brightly sunlit subjects or much longer with less intense lighting; made the resulting latent image on it visible by fuming it with mercury vapor; removed its sensitivity to light by liquid chemical treatment; rinsed and dried it; then sealed the easily marred result behind glass in a protective enclosure.

Viewing a daguerreotype is unlike looking at any other type of photograph. The image does not sit on the surface of the metal, but appears to be floating in space, and the illusion of reality, especially with examples that are sharp and well exposed, is unique to the process.

The image is on a mirror-like silver surface, normally kept under glass, and will appear either positive or negative, depending on the angle at which it is viewed, how it is lit and whether a light or dark background is being reflected in the metal. The darkest areas of the image are simply bare silver; lighter areas have a microscopically fine light-scattering texture. The surface is very delicate, and even the lightest wiping can permanently scuff it. Some tarnish around the edges is normal, and any treatment to remove it should be done only by a specialized restorer.

Several types of antique photographs, most often ambrotypes and tintypes, but sometimes even old prints on paper, are very commonly misidentified as daguerreotypes, especially if they are in the small, ornamented cases in which daguerreotypes made in the US and UK were usually housed. The name “daguerreotype” correctly refers only to one very specific image type and medium, the product of a process that was in wide use only from the early 1840s to the late 1850s.

History
Since the Renaissance era, artists and inventors had searched for a mechanical method of capturing visual scenes.[4] Previously, using the camera obscura, artists would manually trace what they saw, or use the optical image in the camera as a basis for solving the problems of perspective and parallax, and deciding color values. The camera obscura’s optical reduction of a real scene in three-dimensional space to a flat rendition in two dimensions influenced western art, so that at one point, it was thought that images based on optical geometry (perspective) belonged to a more advanced civilization. Later, with the advent of Modernism, the absence of perspective in oriental art from China, Japan and in Persian miniatures was revalued.

In the early seventeenth century, the Italian physician and chemist Angelo Sala wrote that powdered silver nitrate was blackened by the sun, but did not find any practical application of the phenomenon.

Previous discoveries of photosensitive methods and substances contributed to the development of the daguerreotype: silver nitrate by Albertus Magnus in the 13th century,[5] a silver and chalk mixture by Johann Heinrich Schulze in 1724,[6][7] and Joseph Niépce’s bitumen-based heliography in 1822.[4][8]

The first reliably documented attempt to capture the image formed in a camera obscura was made by Thomas Wedgwood as early as the 1790s, but according to an 1802 account of his work by Sir Humphry Davy:

“The images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver. To copy these images was the first object of Mr. Wedgwood in his researches on the subject, and for this purpose he first used the nitrate of silver, which was mentioned to him by a friend, as a substance very sensible to the influence of light; but all his numerous experiments as to their primary end proved unsuccessful.”[9]

Development in France
In 1829 the French artist and chemist Louis Jacques-Mandé Daguerre, contributing a cutting-edge camera design, partnered with Niépce, a leader in photochemistry, to further develop their technologies.[4] The two men came into contact through their optician, Chevalier, who supplied lenses for their camera obscuras.

Niépce’s aim originally had been to find a method to reproduce prints and drawings for lithography. He had started out experimenting with light sensitive materials and had made a contact print from a drawing and then went on to successfully make the first photomechanical record of an image in a camera obscura—the world’s first photograph. Niépce’s method was to coat a pewter plate with bitumen of Judea (asphalt) and the action of the light differentially hardened the bitumen. The plate was washed with a mixture of oil of lavender and turpentine leaving a relief image. Niépce called his process heliography and the exposure for the first successful photograph was eight hours.

Early experiments required hours of exposure in the camera to produce visible results. Modern photo-historians consider the stories of Daguerre discovering mercury development by accident (because of a bowl of mercury left in a cupboard or, alternatively, a broken thermometer) to be spurious.[10] However, there is another story of a fortunate accident, related by Louis Figuier, of a silver spoon lying on an iodized silver plate which left a perfect image of itself on the plate.[11] Noticing this, Daguerre wrote to Niépce on 21 May 1831 suggesting the use of iodized silver plates as a means of obtaining light images in the camera. Letters from Niépce to Daguerre dated 24 June and 8 November 1831 show that Niépce was unsuccessful in obtaining satisfactory results following Daguerre’s suggestion, although he had produced a negative on an iodized silver plate in the camera. Niépce’s letters to Daguerre dated 29 January and 3 March 1832 show that the use of iodized silver plates was due to Daguerre and not Niépce.[12]

Jean-Baptiste Dumas, who was president of the National Society for the Encouragement of Science[13] and a chemist, put his laboratory at Daguerre’s disposal. According to Austrian chemist Josef Maria Eder, Daguerre was not versed in chemistry and it was Dumas who suggested Daguerre use sodium hyposulfite, discovered by Herschel in 1819, as a fixer to dissolve the unexposed silver salts.[7][12]

First mention in print (1835) and public announcement (1839)
At the end of a review of one of Daguerre’s Diorama spectacles (a Diorama painting of a landslide that occurred in “La Vallée de Goldau”) in the Journal des artistes on 27 September 1835,[14] a paragraph tacked on to the end of the review made passing mention of a rumour going around the Paris studios about Daguerre’s attempts to make a visual record on metal plates of the fleeting image produced by the camera obscura:

“It is said that Daguerre has found the means to collect, on a plate prepared by him, the image produced by the camera obscura, in such a way that a portrait, a landscape, or any view, projected upon this plate by the ordinary camera obscura, leaves an imprint in light and shade there, and thus presents the most perfect of all drawings … a preparation put over this image preserves it for an indefinite time … the physical sciences have perhaps never presented a marvel comparable to this one.”[15]

A further clue to fixing the date of invention of the process is that when the Paris correspondent of the London periodical The Athenaeum reported the public announcement of the daguerreotype in 1839, he mentioned that the daguerreotypes now being produced were considerably better than the ones he had seen “four years earlier”.

François Arago announced the daguerreotype process at a joint meeting of the French Academy of Sciences and the Académie des Beaux-Arts on 9 January 1839. Daguerre was present, but complained of a sore throat. Later that year William Fox Talbot announced his silver chloride “sensitive paper” process.[16] Together, these announcements caused commentators to choose 1839 as the year photography was born, or made public, although Daguerre had of course been producing daguerreotypes since 1835 and kept the process secret.[17]

Daguerre and Niépce had together signed an agreement in which remuneration for the invention would be paid for by subscription. However, the campaign they launched to finance the invention failed. François Arago’s views on the system of patenting inventions can be gathered from speeches he made later in the Chamber of Deputies; he apparently thought the English patent system had advantages over the French one.

Daguerre did not patent and profit from his invention in the usual way. Instead, it was arranged that the French government would acquire the rights in exchange for a lifetime pension. The government would then present the daguerreotype process “free to the world” as a gift, which it did on 19 August 1839. However, five days before this, Miles Berry, a patent agent acting on Daguerre’s behalf, had filed for patent No. 8194 of 1839: “A New or Improved Method of Obtaining the Spontaneous Reproduction of all the Images Received in the Focus of the Camera Obscura.” The patent applied to “England, Wales, and the town of Berwick-upon-Tweed, and in all her Majesty’s Colonies and Plantations abroad.”[18][19] This was the usual wording of English patent specifications before 1852. It was only after the 1852 Act, which unified the patent systems of England, Ireland and Scotland, that a single patent protection was automatically extended to the whole of the British Isles, including the Channel Isles and the Isle of Man. Richard Beard bought the patent rights from Miles Berry, and also obtained a Scottish patent, which he apparently did not enforce. The United Kingdom and the “Colonies and Plantations abroad” therefore became the only places where a license was legally required to make and sell daguerreotypes.[19][20]

Much of Daguerre’s early work was destroyed when his home and studio caught fire on 8 March 1839, while the painter Samuel Morse was visiting from the US.[21][page needed] Malcolm Daniel points out that “fewer than twenty-five securely attributed photographs by Daguerre survive—a mere handful of still lifes, Parisian views, and portraits from the dawn of photography.”[22]

Calotype or talbotype is an early photographic process introduced in 1841 by William Henry Fox Talbot,[1] using paper[2] coated with silver iodide. The term calotype comes from the Greek καλός (kalos), “beautiful”, and τύπος (tupos), “impression”.

Late 19th century studio camera
Dry plates
Collodion dry plates had been available since 1855, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called “instantaneous” snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal “candid” portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even “detective cameras” disguised as pocket watches, hats, or other objects.

The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century.[11]

Kodak and the birth of film

Kodak No. 2 Brownie box camera, circa 1910
The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1889. His first camera, which he called the “Kodak,” was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras.

In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s.

Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool.

Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates.

Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century when electronic photography replaced them.
The metal-based daguerreotype process soon had some competition from the paper-based calotype negative and salt print processes invented by Henry Fox Talbot. Subsequent innovations reduced the required camera exposure time from minutes to seconds and eventually to a small fraction of a second; introduced new photographic media which were more economical, sensitive or convenient, including roll films for casual use by amateurs; and made it possible to take pictures in natural color as well as in black-and-white.

The commercial introduction of computer-based electronic digital cameras in the 1990s soon revolutionized photography. During the first decade of the 21st century, traditional film-based photochemical methods were increasingly marginalized as the practical advantages of the new technology became widely appreciated and the image quality of moderately priced digital cameras was continually improved.

Etymology
The coining of the word “photography” is usually attributed to Sir John Herschel in 1839. It is based on the Greek φῶς (phōs, genitive phōtós), meaning “light”, and γραφή (graphê), meaning “drawing, writing”, together meaning “drawing with light”.[2]

Technological background

A camera obscura used for drawing images
Photography is the result of combining several different technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Ti and the Greek mathematicians Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[3][4] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments.[5]

Ibn al-Haytham (Alhazen) (965 in Basra – c. 1040 in Cairo) studied the camera obscura and pinhole camera,[4][6] Albertus Magnus (1193/1206–80) discovered silver nitrate, and Georges Fabricius (1516–71) discovered silver chloride. Daniel Barbaro described a diaphragm in 1568. Wilhelm Homberg described how light darkened some chemicals (the photochemical effect) in 1694. The novel Giphantie (by the French writer Tiphaigne de la Roche, 1729–74) described what could be interpreted as photography.

Development of chemical photography
Monochrome process

Earliest known surviving heliographic engraving, 1825, printed from a metal plate made by Joseph Nicéphore Niépce with his “heliographic process”.[7] The plate was exposed under an ordinary engraving, copying it by photographic means. This was a step towards the first permanent photograph from nature taken with a camera obscura.
Around the year 1800, Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow-copies of paintings on glass, it was reported in 1802 that “[t]he images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver.” The shadow images eventually darkened all over because “[n]o attempts that have been made to prevent the uncoloured part of the copy or profile from being acted upon by light have as yet been successful.”[8] Wedgwood may have prematurely abandoned his experiments due to frail and failing health; he died aged 34 in 1805.

“Boulevard du Temple”, a daguerreotype made by Louis Daguerre in 1838, is generally accepted as the earliest photograph to include people. It is a view of a busy street, but because the exposure time was at least ten minutes the moving traffic left no trace. Only the two men near the bottom left corner, one apparently having his boots polished by the other, stayed in one place long enough to be visible.
In 1816 Nicéphore Niépce, using paper coated with silver chloride, succeeded in photographing the images formed in a small camera, but the photographs were negatives, darkest where the camera image was lightest and vice versa, and they were not permanent in the sense of being reasonably light-fast; like earlier experimenters, Niépce could find no way to prevent the coating from darkening all over when it was exposed to light for viewing. Disenchanted with silver salts, he turned his attention to light-sensitive organic substances.[9]

Robert Cornelius, self-portrait, Oct. or Nov. 1839, approximate quarter plate daguerreotype. The back reads, “The first light picture ever taken.”

One of the oldest photographic portraits known, made by John William Draper of New York, in 1839[10] or 1840, of his sister, Dorothy Catherine Draper.
The oldest surviving permanent photograph of the image formed in a camera was created by Niépce in 1826 or 1827.[1] It was made on a polished sheet of pewter and the light-sensitive substance was a thin coating of bitumen, a naturally occurring petroleum tar, which was dissolved in lavender oil, applied to the surface of the pewter and allowed to dry before use.[11] After a very long exposure in the camera (traditionally said to be eight hours, but in fact probably several days),[12] the bitumen was sufficiently hardened in proportion to its exposure to light that the unhardened part could be removed with a solvent, leaving a positive image with the light regions represented by hardened bitumen and the dark regions by bare pewter.[11] To see the image plainly, the plate had to be lit and viewed in such a way that the bare metal appeared dark and the bitumen relatively light.[9]

In partnership, Niépce (in Chalon-sur-Saône) and Louis Daguerre (in Paris) refined the bitumen process,[13] substituting a more sensitive resin and a very different post-exposure treatment that yielded higher-quality and more easily viewed images. Exposure times in the camera, although somewhat reduced, were still measured in hours.[9]

In 1833 Niépce died suddenly, leaving his notes to Daguerre. More interested in silver-based processes than Niépce had been, Daguerre experimented with photographing camera images directly onto a mirror-like silver-surfaced plate that had been fumed with iodine vapor, which reacted with the silver to form a coating of silver iodide. As with the bitumen process, the result appeared as a positive when it was suitably lit and viewed. Exposure times were still impractically long until Daguerre made the pivotal discovery that an invisibly slight or “latent” image produced on such a plate by a much shorter exposure could be “developed” to full visibility by mercury fumes. This brought the required exposure time down to a few minutes under optimum conditions. A strong hot solution of common salt served to stabilize or fix the image by removing the remaining silver iodide. On 7 January 1839, this first complete practical photographic process was announced at a meeting of the French Academy of Sciences,[14] and the news quickly spread. At first, all details of the process were withheld and specimens were shown only at Daguerre’s studio, under his close supervision, to Academy members and other distinguished guests.[15] Arrangements were made for the French government to buy the rights in exchange for pensions for Niépce’s son and Daguerre and present the invention to the world (with the de facto exception of Great Britain) as a free gift.[16] Complete instructions were published on 19 August 1839.[17]

After reading early reports of Daguerre’s invention, William Henry Fox Talbot, who had succeeded in creating stabilized photographic negatives on paper in 1835, worked on perfecting his own process. In early 1839 he acquired a key improvement, an effective fixer, from John Herschel, the astronomer, who had previously shown that hyposulfite of soda (commonly called “hypo” and now known formally as sodium thiosulfate) would dissolve silver salts.[18] News of this solvent also reached Daguerre, who quietly substituted it for his less effective hot salt water treatment.[19]

A calotype print showing the American photographer Frederick Langenheim (circa 1849). Note that the caption on the photo calls the process a Talbotype.
Talbot’s early silver chloride “sensitive paper” experiments required camera exposures of an hour or more. In 1840, Talbot invented the calotype process, which, like Daguerre’s process, used the principle of chemical development of a faint or invisible “latent” image to reduce the exposure time to a few minutes. Paper with a coating of silver iodide was exposed in the camera and developed into a translucent negative image. Unlike a daguerreotype, which could only be copied by rephotographing it with a camera, a calotype negative could be used to make a large number of positive prints by simple contact printing. The calotype had yet another distinction compared to other early photographic processes, in that the finished product lacked fine clarity due to its translucent paper negative. This was seen as a positive attribute for portraits because it softened the appearance of the human face. Talbot patented this process,[20] which greatly limited its adoption, and spent many years pressing lawsuits against alleged infringers. He attempted to enforce a very broad interpretation of his patent, earning himself the ill will of photographers who were using the related glass-based processes later introduced by other inventors, but he was eventually defeated. Nonetheless, Talbot’s developed-out silver halide negative process is the basic technology used by chemical film cameras today. Hippolyte Bayard had also developed a method of photography but delayed announcing it, and so was not recognized as its inventor.

In 1839, John Herschel made the first glass negative, but his process was difficult to reproduce. Slovene Janez Puhar invented a process for making photographs on glass in 1841; it was recognized on June 17, 1852 in Paris by the Académie Nationale Agricole, Manufacturière et Commerciale.[21] In 1847, Nicephore Niépce’s cousin, the chemist Niépce St. Victor, published his invention of a process for making glass plates with an albumen emulsion; the Langenheim brothers of Philadelphia and John Whipple and William Breed Jones of Boston also invented workable negative-on-glass processes in the mid-1840s.[22]

In 1851 Frederick Scott Archer invented the collodion process.[citation needed] Photographer and children’s author Lewis Carroll used this process. (Carroll refers to the process as “Tablotype” [sic] in the story “A Photographer’s Day Out”)[23]

Roger Fenton’s assistant seated on Fenton’s photographic van, Crimea, 1855.
Herbert Bowyer Berkeley experimented with his own version of collodion emulsions after Samman introduced the idea of adding dithionite to the pyrogallol developer.[citation needed] Berkeley discovered that with his own addition of sulfite, to absorb the sulfur dioxide given off by the dithionite in the developer, dithionite was not required in the developing process. In 1881 he published his discovery. Berkeley’s formula contained pyrogallol, sulfite and citric acid. Ammonia was added just before use to make the formula alkaline. The new formula was sold by the Platinotype Company in London as Sulpho-Pyrogallol Developer.[24]

Nineteenth-century experimentation with photographic processes frequently became proprietary. The German-born, New Orleans photographer Theodore Lilienthal successfully sought legal redress in an 1881 infringement case involving his “Lambert Process” in the Eastern District of Louisiana.

Popularization

General view of The Crystal Palace at Sydenham by Philip Henry Delamotte, 1854

Mid 19th century “Brady stand” photo model’s armrest table, meant to keep portrait models more still during long exposure times (studio equipment nicknamed after the famed US photographer, Mathew Brady)

1855 cartoon satirizing problems with posing for Daguerreotypes: slight movement during exposure resulted in blurred features, red-blindness made rosy complexions dark.

A photographer appears to be photographing himself in a 19th-century photographic studio. Note clamp to hold the poser’s head still. An 1893 satire on photographic procedures already becoming obsolete at the time.

A comparison of common print sizes used in photographic studios during the 19th century
The daguerreotype proved popular in response to the demand for portraiture that emerged from the middle classes during the Industrial Revolution.[citation needed] This demand, which could not be met in volume and in cost by oil painting, added to the push for the development of photography.

In 1847, Count Sergei Lvovich Levitsky designed a bellows camera that significantly improved the process of focusing. This adaptation influenced the design of cameras for decades and is still found in use today in some professional cameras. While in Paris, Levitsky would become the first to introduce interchangeable decorative backgrounds in his photos, as well as the retouching of negatives to reduce or eliminate technical deficiencies.[citation needed] Levitsky was also the first photographer to portray the same person in different poses, and even in different clothes, within a single photograph (for example, the subject plays the piano and listens to himself).[citation needed]

Roger Fenton and Philip Henry Delamotte helped popularize the new way of recording events, the first by his Crimean war pictures, the second by his record of the disassembly and reconstruction of The Crystal Palace in London. Other mid-nineteenth-century photographers established the medium as a more precise means than engraving or lithography of making a record of landscapes and architecture: for example, Robert Macpherson’s broad range of photographs of Rome, the interior of the Vatican, and the surrounding countryside became a sophisticated tourist’s visual record of his own travels.

By 1849, images captured by Levitsky on a mission to the Caucasus were exhibited by the famous Parisian optician Chevalier at the Paris Exposition of the Second Republic as an advertisement for Chevalier’s lenses. These photos received the Exposition’s gold medal, the first time a prize of its kind had ever been awarded to a photograph.[citation needed]

That same year, in his St. Petersburg studio, Levitsky first proposed the idea of artificially lighting subjects in a studio setting using electric lighting along with daylight. He said of its use, “as far as I know this application of electric light has never been tried; it is something new, which will be accepted by photographers because of its simplicity and practicality”.[citation needed]

In 1851, at an exhibition in Paris, Levitsky would win the first ever gold medal awarded for a portrait photograph.[citation needed]

In America, by 1851 a broadside by daguerreotypist Augustus Washington was advertising prices ranging from 50 cents to $10.[25] However, daguerreotypes were fragile and difficult to copy. Photographers encouraged chemists to refine the process of making many copies cheaply, which eventually led them back to Talbot’s process.

Ultimately, the modern photographic process came about from a series of refinements and improvements over its first two decades. In 1884 George Eastman, of Rochester, New York, developed dry gel on paper, or film, to replace the photographic plate so that a photographer no longer needed to carry boxes of plates and toxic chemicals around. In July 1888 Eastman’s Kodak camera went on the market with the slogan “You press the button, we do the rest”. Now anyone could take a photograph and leave the complex parts of the process to others, and photography became available to the mass market in 1901 with the introduction of the Kodak Brownie.

Color photography

The first durable color photograph, taken by Thomas Sutton in 1861
A practical means of color photography was sought from the very beginning. Results were demonstrated by Edmond Becquerel as early as 1848, but exposures lasting for hours or days were required and the captured colors were so light-sensitive they would only bear very brief inspection in dim light.

The first durable color photograph was a set of three black-and-white photographs taken through red, green and blue color filters and shown superimposed by using three projectors with similar filters. It was taken by Thomas Sutton in 1861 for use in a lecture by the Scottish physicist James Clerk Maxwell, who had proposed the method in 1855.[26] The photographic emulsions then in use were insensitive to most of the spectrum, so the result was very imperfect and the demonstration was soon forgotten. Maxwell’s method is now most widely known through the early 20th century work of Sergei Prokudin-Gorskii. It was made practical by Hermann Wilhelm Vogel’s 1873 discovery of a way to make emulsions sensitive to the rest of the spectrum, gradually introduced into commercial use beginning in the mid-1880s.
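
To make the additive principle concrete, here is a minimal sketch in Python with NumPy, using a synthetic scene as a stand-in for Maxwell’s tartan ribbon; the idealised filters and the array sizes are illustrative assumptions, not a reconstruction of Sutton’s actual procedure.

import numpy as np

# A synthetic colour scene stands in for the tartan ribbon (values in [0, 1]).
rng = np.random.default_rng(0)
scene = rng.random((64, 64, 3))

# Step 1: three black-and-white records, each exposed through one colour filter.
# An ideal filter passes only its own primary, so each record is just one channel.
red_record, green_record, blue_record = scene[..., 0], scene[..., 1], scene[..., 2]

# Step 2: additive synthesis. Projecting the three records in register through
# the same red, green and blue filters amounts to stacking them back into RGB.
reconstruction = np.stack([red_record, green_record, blue_record], axis=-1)

assert (reconstruction == scene).all()  # with ideal filters the scene is recovered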

Two French inventors, Louis Ducos du Hauron and Charles Cros, working unknown to each other during the 1860s, famously unveiled their nearly identical ideas on the same day in 1869. Included were methods for viewing a set of three color-filtered black-and-white photographs in color without having to project them, and for using them to make full-color prints on paper.[27]

The first widely used method of color photography was the Autochrome plate, commercially introduced in 1907. It was based on one of Louis Ducos du Hauron’s ideas: instead of taking three separate photographs through color filters, take one through a mosaic of tiny color filters overlaid on the emulsion and view the results through an identical mosaic. If the individual filter elements were small enough, the three primary colors would blend together in the eye and produce the same additive color synthesis as the filtered projection of three separate photographs. Autochrome plates had an integral mosaic filter layer composed of millions of dyed potato starch grains. Reversal processing was used to develop each plate into a transparent positive that could be viewed directly or projected with an ordinary projector. The mosaic filter layer absorbed about 90 percent of the light passing through, so a long exposure was required and a bright projection or viewing light was desirable. Competing screen plate products soon appeared and film-based versions were eventually made. All were expensive and until the 1930s none was “fast” enough for hand-held snapshot-taking, so they mostly served a niche market of affluent advanced amateurs.
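
The mosaic idea can be illustrated with a toy simulation, again in Python with NumPy. The random grain assignment, the blending window and the brightness factor below are illustrative assumptions rather than properties of real Autochrome plates.

import numpy as np

def autochrome_simulation(image, k=3, rng=np.random.default_rng(0)):
    # 'image' is an H x W x 3 array in [0, 1]; each pixel stands for one starch
    # grain that passes only one primary colour, assigned at random much as the
    # dyed grains were scattered at random on the plate.
    h, w, _ = image.shape
    grain = rng.integers(0, 3, size=(h, w))
    mosaic = np.zeros_like(image)
    for c in range(3):
        mask = grain == c
        mosaic[mask, c] = image[mask, c]            # only that primary is recorded
    # The eye blends neighbouring grains; approximate this with a small box blur.
    padded = np.pad(mosaic, ((k, k), (k, k), (0, 0)), mode="edge")
    blended = np.zeros_like(mosaic)
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            blended += padded[dy:dy + h, dx:dx + w]
    return blended * 3 / (2 * k + 1) ** 2           # factor 3 roughly restores brightness

# A flat orange patch comes back as (approximately) the same orange once blended.
patch = np.ones((48, 48, 3)) * np.array([0.8, 0.4, 0.2])
print(np.round(autochrome_simulation(patch)[24, 24], 2))
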
35 mm

Leica I, 1925

Argus C3, 1939
See also: History of 135 film
A number of manufacturers started to use 35mm film for still photography between 1905 and 1913. The first 35mm cameras available to the public, and reaching significant numbers in sales, were the Tourist Multiple, in 1913, and the Simplex, in 1914.[citation needed]

Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. It was not until after the war that Leitz commercialized its first 35 mm cameras. Leitz test-marketed the design between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica’s immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras.

Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966.

The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere.
A new era in color photography began with the introduction of Kodachrome film, available for 16 mm home movies in 1935 and 35 mm slides in 1936. It captured the red, green and blue color components in three layers of emulsion. A complex processing operation produced complementary cyan, magenta and yellow dye images in those layers, resulting in a subtractive color image. Maxwell’s method of taking three separate filtered black-and-white photographs continued to serve special purposes into the 1950s and beyond, and Polachrome, an “instant” slide film that used the Autochrome’s additive principle, was available until 2003, but the few color print and slide films still being made in 2015 all use the multilayer emulsion approach pioneered by Kodachrome.
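
The contrast with the additive methods described earlier can be sketched in a few lines of Python with NumPy. The perfectly complementary dyes here are an assumption made for illustration; real Kodachrome dyes and processing were far more complicated.

import numpy as np

rng = np.random.default_rng(2)
scene = rng.random((64, 64, 3))        # the original scene, as idealised RGB in [0, 1]

# Subtractive principle: each emulsion layer ends up holding a dye that is the
# complement of the light it recorded: cyan for red, magenta for green, yellow for blue.
cyan, magenta, yellow = 1.0 - scene[..., 0], 1.0 - scene[..., 1], 1.0 - scene[..., 2]

# Viewing the slide: white light passes through all three dye layers, and each
# dye subtracts (absorbs) only its complementary primary from the light.
transmitted = np.stack([1.0 - cyan, 1.0 - magenta, 1.0 - yellow], axis=-1)

assert np.allclose(transmitted, scene)  # the idealised dye image reproduces the scene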

tmjs ch2
ch2 retro cam and film

LIGHTING


In the early days of photography the only source of light was, of course, the sun, so most photography depended upon long days and good weather. It is said that Rejlander used a cat as a primitive exposure meter: placing the cat where the sitter should be, he judged by looking at its eyes whether it was worth taking any photographs or whether his sitter should go home and wait for better times! The nearer to the birth of photography, the greater the amount of lighting needed, as the first chemical emulsions were very insensitive.

The first artificial light photography dates back as far as 1839, when L. Ibbetson used oxy-hydrogen light (also known as limelight) when photographing microscopic objects; he made a daguerreotype in five minutes which, he claimed, would have taken twenty-five minutes in normal daylight.

Other possibilities were explored. Nadar, for example, photographed the sewers in Paris, using battery-operated lighting. Later arc-lamps were introduced, but it was not until 1877 that the first studio lit by electric light was opened by Van der Weyde, who had a studio in Regent Street. Powered by a gas-driven dynamo, the light was sufficient to permit exposures of some 2 to 3 seconds for a carte-de-visite.

Soon a number of studios started using arc lighting. One advert (by Arthur Langton, working in Belgravia, London) boldly proclaimed:

“My electric light installation is perhaps the more powerful in London. Photographs superior to daylight, Pictures can now be taken in any weather and at any time.”

More from Arthur Langton’s advertisement:

“CAUTION Many photographers advertise ‘portrits taken by electric light’ but 9 out of 10 do not possess an electric light, owing to its costlinss they use an inferior and nasty substitute… a pyrotechnic powder which gives off poisonos fumes.”

(His spelling, by the way!)

In June 1850 Fox Talbot conducted an experiment at the Royal Society, probably using static electricity stored in Leyden jars: a page of The Times was fastened on to a wheel, which then revolved rapidly. Writing about this the following year, Fox Talbot stated:

“From this experiment the conclusion…is that it is within our power to obtain pictures of all moving objects….providing we have the means of sufficiently illuminating them with a sudden electric flash.”

The object then had been to arrest fast action. A few years later William Crookes, editor of the Photographic News (October 1859) was responding to a query put to him on how to light some caves:

“A…brilliant light…can be obtained by burning….magnesium in oxygen. A piece of magnesium wire held by one end in the hand, may be lighted at the other extremity by holding it to a candle… It then burns away of its own accord evolving a light insupportably brilliant to the unprotected eye….”

That same year Professor Robert Bunsen (of Bunsen burner fame) was also advocating the use of magnesium. The first portrait using magnesium was taken by Alfred Brothers of Manchester (22 February 1864); some of the results of his experiments may be found in the Manchester Museum of Science and Technology. Magnesium was, however, very expensive at that time and did not come into general use until there was a dramatic fall in its cost a decade later. This, coupled with the introduction of dry plates in the 1880s, soon led to the introduction of magnesium flashlamps. They all used the same principle: a small amount of magnesium powder would be blown, using a small rubber pump, through a spirit flame, producing a bright flash lasting about 1/15 s. It also produced much smoke and ash!

Then in the late 1880s it was discovered that magnesium powder, if mixed with an oxidising agent such as potassium chlorate, would ignite with very little persuasion. This led to the introduction of flash powder. It would be spread on a metal dish and set off by percussion, by sparks from a flint wheel, by an electrical fuse, or simply by applying a taper. However, the explosive flash powder could be quite dangerous if misused. It was not really superseded until the invention of the flashbulb in the late 1920s.

Early flash photography was not synchronised. This meant that one had to put a camera on a tripod, open the shutter, trigger the flash, and close the shutter again – a technique known as open flash.

Certainly early flash photography could be a hazardous business. It is said, for example, that Riis, working during this period, twice managed to set the places he was photographing on fire!

In fact, the “open flash” technique, with flash powder, was still being used by some photographers until the 1950s. This was particularly so when, for example, a large building was being photographed; with someone operating the shutter for multiple exposures, it was possible to use the flash at different places, to provide more even illumination.

By varying the amount of flash powder used, the distance covered could also be varied. To give some idea, using a panchromatic film of about 25 ASA and the open flash technique at f8, a measure of 0.1 grammes of flash powder would permit a flash-to-subject distance of about 8 feet, whilst 2.0 grammes would permit an exposure 30 feet away.

The earliest known flash bulb was described in 1883. It consisted of a two-pint stoppered bottle of oxygen which had white paper stuck on it to act as a reflector. To set the flash off, a spiral of ten or so inches of magnesium on a wire skewer was pre-lighted and plunged into the oxygen.

It was not until 1927 that the simple flashbulb appeared, and 1931 when Harold Edgerton produced the first electronic flash tube.

Makeup

HISTORY

Makeup has a long theatrical history. The early film industry naturally looked to traditional stage techniques, but these proved inadequate almost immediately. One of makeup’s first problems was with celluloid. Early filmmakers used orthochromatic film stock, which had a limited color-range sensitivity. It reacted to red pigmentation, darkening white skin and nullifying solid reds. To counter the effect, Caucasian actors wore heavy pink greasepaint (Stein’s #2) as well as black eyeliner and dark red lipstick (which, if applied too lightly, appeared white on screen), but these masklike cosmetics smeared as actors sweated under the intense lights. Furthermore, until the mid-teens, actors applied their own makeup and their image was rarely uniform from scene to scene. As the close-up became more common, makeup focused on the face, which had to be understood from a hugely magnified perspective, making refinements essential. In the pursuit of these radical changes, two names stand out as Hollywood’s progenitor artists: Max Factor (1877–1938) and George Westmore (1879–1931). Both started as wigmakers and both recognized that the crucial difference between stage and screen was a lightness of touch. Both invented enduring cosmetics and makeup tricks for cinema and each, at times, took credit for the same invention (such as false eyelashes).

Factor (originally Firestein), a Russian émigré with a background in barbering, arrived in the United States in 1904 and moved to Los Angeles in 1908, where he set up a perfume, hair care, and cosmetics business catering to theatrical needs. He also distributed well-known greasepaints, which were too thick for screen use and photographed badly. By 1910, Factor had begun to divide the theatrical from the cinematic as he experimented to find appropriate cosmetics for film. His Greasepaint was the first makeup used in a screen test, for Cleopatra (1912), and by 1914 Factor had invented a twelve-toned cream version, which applied thinly, allowed for individual skin subtleties, and conformed more comfortably with celluloid. In the early 1920s panchromatic film began to replace orthochromatic, causing fewer color flaws, and in 1928 Factor completed work on Panchromatic MakeUp, which had a variety of hues. In 1937, the year before he died, he dealt with the new Technicolor problems by adapting theatrical “pancake” into a water-soluble powder, applicable with a sponge, excellent for film’s and, eventually, television’s needs. It photographed very well, eliminating the shine induced by Technicolor lighting, and its basic translucence imparted a delicate look. Known as Pancake makeup, it was first used in Vogues of 1938 (1937) and Goldwyn’s Follies (1938), quickly becoming not only the film industry norm but a public sensation. Once movie stars, delighting in its lightness, began to wear it offscreen, Pancake became de rigueur for fashion-conscious women. After Factor’s death, his empire continued to set standards and still covers cinema’s cosmetic needs, from fingernails to toupees.

The English wigmaker George Westmore, for whom the Makeup Artist and Hair Stylist Guild’s George Westmore Lifetime Achievement Award is named, founded the first (and tiny) film makeup department, at Selig Studio in 1917. He also worked at Triangle but soon was freelancing across the major studios. Like Factor, he understood that cosmetic and hair needs were personal and would make up stars such as Mary Pickford (whom he relieved of having to curl her famous hair daily by making false ringlets) or the Talmadge sisters in their homes before they left for work in the morning.

He fathered three legendary and scandalous generations of movie makeup artists, beginning with his six sons—Monte (1902–1940), Perc (1904–1970), Ern (1904–1967), Wally (1906–1973), Bud (1918–1973), and Frank (1923–1985)—who soon eclipsed him in Hollywood. By 1926, Monte, Perc, Ern, and Wally had penetrated the industry to become the chief makeup artists at four major studios, and all continued to break ground in new beauty and horror illusions until the end of their careers. In 1921, after dishwashing at Famous Players-Lasky, Monte became Rudolph Valentino’s sole makeup artist. (The actor had been doing his own.) When Valentino died in 1926, Monte went to Selznick International where, thirteen years later, he worked himself to death with the enormous makeup demands for Gone With the Wind (1939). In 1923 Perc established a blazing career at First National-Warner Bros. and, over twenty-seven years, initiated beauty trends and disguises including, in 1939, the faces of Charles Laughton’s grotesque Hunchback of Notre Dame (for RKO) and Bette Davis’s eyebrowless, almost bald, whitefaced Queen Elizabeth. In the early 1920s he blended Stein Pink greasepaint with eye shadow, preceding Factor’s Panchromatic. Ern, at RKO from 1929 to 1931 and then at Fox from 1935, was adept at finding the right look for stars of the 1930s. Wally headed Paramount makeup from 1926, where he created, among others, Fredric March’s gruesome transformation in Dr. Jekyll and Mr. Hyde (1931). Frank followed him there. Bud led Universal’s makeup department for twenty-three years, specializing in rubber prosthetics and body suits such as the one used in Creature from the Black Lagoon (1954). Together they built the House of Westmore salon, which served stars and public alike.
Later generations have continued the name, including Monte’s sons, Michael and Marvin Westmore, who began in television and have excelled in unusual makeup, such as in Blade Runner (1982).

MGM was the only studio that the Westmores did not rule. Cecil Holland (1887–1973) became its first makeup head in 1925 and remained there until the 1950s. Originally an English actor known as “The Man of a Thousand Faces” before Lon Chaney (1883–1930) inherited the title, Holland did pioneering makeup work on films such as Grand Hotel (1932) and The Good Earth (1937). Jack Dawn (1892–1961), who created makeup for The Wizard of Oz (1939), ran the department from the 1940s, by which time it was so huge that over a thousand actors could be made up in one hour. William

Lon Chaney did his own makeup for Phantom of the Opera (Rupert Julian, 1925).
Tuttle succeeded him and ran the department for twenty years. Like Holland, Chaney was another actor with supernal makeup skills whose horror and crime films became classics, notably for his menacing but realistically based disguises. He always created his own makeup, working with the materials of his day—greasepaint, putty, plasto (mortician’s wax), fish skin, gutta percha (natural resin), collodion (liquid elastic), and crepe hair—and conjured characters unrivalled in their horrifying effect, including his gaunt, pig-nosed, black-eyed Phantom for Phantom of the Opera (1925) and his Hunchback in The Hunchback of Notre Dame (1923), for which he constructed agonizingly heavy makeup and body harnesses.

tmjs ch 3
ch3 digital
Digital cameras
See also: DSLR § History
Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print or share photos, and are commonly found on mobile phones.

Early development
The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbit and airborne retrieval of film canisters. Technology was pushed to skip these steps through the use of in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film was still a major limitation, and this was overcome and greatly simplified by the push to develop an electronic image capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11, launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of 800 × 800 pixels (0.64 megapixels).[13]

At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on “All Solid State Radiation Imagers” on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes connected to a capacitor to form an array of two terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970.[14] Texas Instruments engineer Willis Adcock designed a filmless camera that was not digital and applied for a patent in 1972, but it is not known whether it was ever built.[15]

The first recorded attempt at building a digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak.[16][17] It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973.[18] The camera weighed 8 pounds (3.6 kg), recorded black and white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production.

Development of digital photography
Main article: Digital photography
In 1957, a team led by Russell A. Kirsch at the National Institute of Standards and Technology developed a binary digital version of an existing technology, the wirephoto drum scanner, so that alphanumeric characters, diagrams, photographs and other graphics could be transferred into digital computer memory. One of the first photographs scanned was a picture of Kirsch’s infant son Walden. The resolution was 176×176 pixels with only one bit per pixel, i.e., stark black and white with no intermediate gray tones, but by combining multiple scans of the photograph done with different black-white threshold settings, grayscale information could also be acquired.[28]
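
The thresholding trick can be sketched as follows in Python with NumPy, using a synthetic array in place of the drum-scanned photograph; the number and spacing of thresholds here are illustrative assumptions, not the settings Kirsch’s team used.

import numpy as np

rng = np.random.default_rng(1)
original = rng.random((176, 176))      # stand-in for the scanned photograph, grey values in [0, 1]

# Each pass of a one-bit scanner reports only black or white, decided by
# comparing the signal against a fixed black-white threshold.
thresholds = np.linspace(0.1, 0.9, 9)
binary_scans = [(original > t).astype(float) for t in thresholds]

# Averaging the stack of binary scans recovers an approximate grey-scale image.
reconstructed = np.mean(binary_scans, axis=0)

print(np.abs(reconstructed - original).mean())   # modest average error, despite 1-bit passes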

The charge-coupled device (CCD) is the image-capturing optoelectronic component in first-generation digital cameras. It was invented in 1969 by Willard Boyle and George E. Smith at AT&T Bell Labs as a memory device. The lab was working on the Picturephone and on the development of semiconductor bubble memory. Merging these two initiatives, Boyle and Smith conceived of the design of what they termed “Charge ‘Bubble’ Devices”. The essence of the design was the ability to transfer charge along the surface of a semiconductor. It was Dr. Michael Tompsett from Bell Labs, however, who discovered that the CCD could be used as an imaging sensor. The CCD has increasingly been replaced by the active pixel sensor (APS), commonly used in cell phone cameras.
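
The charge-transfer idea at the heart of the CCD can be pictured as a bucket brigade. The toy Python model below is purely conceptual (real sensors are clocked with overlapping electrode phases, not list operations): each pixel’s charge packet is shifted one cell toward the output amplifier on every cycle.

def ccd_row_readout(charges):
    # 'charges' is a list of photo-generated charge packets in one row,
    # with index 0 the cell next to the output amplifier.
    readings = []
    row = list(charges)
    for _ in range(len(row)):
        readings.append(row[0])       # the output amplifier senses the end cell
        row = row[1:] + [0]           # every packet shifts one cell toward the output
    return readings

print(ccd_row_readout([5, 3, 8, 1]))  # -> [5, 3, 8, 1], read out in arrival order
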
Analog electronic cameras

Sony Mavica, 1981
Main article: Still video camera
Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera). This is not to be confused with the later cameras by Sony that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch “video floppy”.[19] In essence it was a video movie camera that recorded single frames, 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

Canon RC-701, 1986
Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shinbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The “video floppy” disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive.

The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of 1989 and the first Gulf War in 1991.

US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real time air-to-sea surveillance system.

The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users, which sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.

Silicon Film, a proposed digital sensor cartridge for film cameras that would allow 35 mm cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work like a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film’s parent company filed for bankruptcy in 2001.[20]

Arrival of true digital cameras

The first portable digital SLR camera, introduced by Minolta in 1995.

Nikon D1, 1999
By the late 1980s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM memory card that used a battery to keep the data in memory. This camera was never marketed to the public.

The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987,[21] though there is no extensive documentation of its sale. The first portable digital camera that was actually marketed commercially was sold in December 1989 in Japan: the DS-X by Fuji.[22] The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990.[23] It was originally a commercial failure because it was black and white, low in resolution, and cost nearly $1,000 (about $2000 in 2014).[24] It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download.[25][26][27]

In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor, had a bulky external digital storage system and was priced at $13,000. At the arrival of the Kodak DCS-200, the Kodak DCS was dubbed Kodak DCS-100.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996.[citation needed] The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995.

In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a splitter and three independent CCDs. This combination delivered 1.75M pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer, and at a cost of under $6,000 at introduction was affordable by professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.

Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: compact digital still cameras, bridge cameras, mirrorless compacts and digital SLRs. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones.

1973 – Fairchild Semiconductor releases the first large image-capturing CCD chip: 100 rows and 100 columns.[29]
1975 – Bryce Bayer of Kodak develops the Bayer filter mosaic pattern for CCD color image sensors
1986 – Kodak scientists develop the world’s first megapixel sensor.
The web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today popular sites such as Flickr, Picasa, Instagram and PhotoBucket are used by millions of people to share their pictures.

A Vintage Hat for the Holidays


Follow @TheMaryJaneStyle on Twitter & Tumblr
It was considered a disgraceful act to venture out of the house without a hat or even gloves. One record tells of a young lady venturing out to post a letter without her hat and gloves and being severely reprimanded for not being appropriately dressed. The post box was situated a few yards from her front garden gate.



Etiquette and formality have played their part in hat wearing. At the turn of the 20th century, both men and women changed their hats depending on their activity, but for many ladies of some social standing it would be several times a day.

For hats, bearing in mind that hair was often pinned up, the popular styles were bonnets and fascinators, something you could pin on to your victory roll. Berets were also popular during the war.

The snood, made popular by Vivien Leigh, would also create a nice 1940s wartime look to finish off your hairstyle. Just wearing a simple black beret with rolled hair can really give you that 1940s look.

 


Plumassiers

Running parallel to these hat-making arts were feather workshops, or more correctly workshops called plumassiers, where feathers were dyed and made into arrangements from boas to aigrettes to tufts and sprays for both the worlds of fashion and interiors. Plumes have always been a status symbol and a sign of economic stability.

Fortunes were paid by rich individuals for exotic feathered hats.  Gorgeous feathered hats could command as much as £100 in the early Edwardian era.  The Edwardians were masters in the art of excess and the flamboyant hats of the era are a clear example of this.

At one point whole stuffed birds were used to decorate hats, but as the new, more enlightened century emerged, protests were voiced. In America the Audubon Society expressed concern and in England the RSPB (Royal Society for the Protection of Birds) campaigned for ecological understanding.

Eventually plumage pleas were heard and Queen Alexandra forbade the wearing of rare osprey feathers at court so that ospreys were not plundered for their feathers. For a few years magazines quietly ignored making reference to feathers on hats as women continued to wear them. But soon the use of other rare bird feathers was banned and thereafter only farmed feathers could be used, and only from specific birds.


For A Gentleman

Fun Hat, 1940s

A fashion report in the Los Angeles Times from 1895 called the use of mendiant the “newest trimming” for hats, and noted that hats were “tipped far over the eyes”. The Chicago Tribune reported on fruit ribbons, along with feathers, flowers, and frills, as trim for Easter hats. A report on artificial fruit used on hats appeared in a 1918 edition of the New York Times. Fruit and vegetable trim on “gay hats” featured in the first millinery show of the season at New York’s Saks Fifth Avenue in 1941, and overshadowed flowers. Mendiant is a traditional French confection usually prepared during the Christmas season, composed of a chocolate disk studded with nuts and dried fruits representing the four mendicant or monastic orders of the Dominicans, Augustinians, Franciscans and Carmelites, where the color of the nuts and dried fruits refers to the color of the monastic robes. Tradition dictates that raisins are used for the Dominicans, hazelnuts for the Augustinians, dried figs for the Franciscans and almonds for the Carmelites. Lil Picard, a millinery designer for the custom-made department of Bloomingdale’s, sought inspiration from nature for her hats and, while on vacation “listening to the birds, gazing through the lacy outlines of foliage and watching the ripening fruits, she dreamed of trimmings.”

Perfect Back Seams… “How To”

Seamed: Stockings manufactured in the old Full-Fashioned manner with a seam running up the back of the leg. In the past they were manufactured by cutting the fabric and then sewing it together. Today stockings are generally fully knitted and a fake or mock seam is added up the back for a particular fashion look. Some brands also produce seamed hold-ups.

Hosiery, also referred to as legwear, describes garments worn directly on the feet and legs. The term originated as the collective term for products of which a maker or seller is termed a hosier; those products are also known generically as hose. The term is also used for all types of knitted fabric, and their thickness and weight are defined in terms of denier or opacity. Lower denier measurements of 5 to 15 describe hose that may be sheer in appearance, whereas styles of 40 denier and above are dense, with little to no light able to come through on 100 denier items.

The first references to hosiery can be found in works of Hesiod, where Romans are said to have used leather or cloth in forms of strips to cover their lower body parts. Even the Egyptians are speculated to have used hosiery as socks have been found in certain tombs.

Before the 1920s, women’s stockings, if worn, were worn for warmth. In the 1920s, as hemlines of women’s dresses rose, women began to wear stockings to cover the exposed legs. These stockings were sheer, first made of silk or rayon (then known as “artificial silk”), and after 1940 of nylon.

Paint-on Hosiery During the War Years

A back “seam” drawn with an eyebrow pencil topped off the resourceful fashion effect
So it’s Saturday night in 1941, and you want to wear stockings with your cocktail dress, but the new wonder material nylon has been rationed for the war effort and has disappeared from department store shelves. What do you do in such times of patriotic privation? You get resourceful: you cover your legs with a layer of nude-colored makeup and line the back of each leg with a trompe l’oeil seam.

Last week, in the first post from the Stocking Series, we heard about the huge reception of nylon hosiery. On May 16, 1940, officially called “Nylon Day,” four million pairs of nylons landed in stores and sold out within two days! But only a year later, the revolutionary product became scarce when the World War II economy directed all nylon into manufacturing parachutes, rope and netting.
Having trouble with your seam? No problem! This contraption, made from a screwdriver handle, bicycle leg clip and an ordinary eyebrow pencil would do the trick!


“Script Memorial Day Blackhawk BOW Lip’s”


Ep 22 TheMaryJaneStyle from TheMaryJaneStyle on Vimeo.

How to do bow lips: first cover your lips with base and powder, then with a lip pencil draw your cupid’s bow on the top lip, color in the lips, and then apply lipstick with a lip brush… Ta-daaaa!

