Vintage Stockings Ep.28





TheMaryJaneStyle Ep.27 Ch.1-3 “History of Cameras”

Ch.1: History of Cameras

The history of the camera can be traced much further back than the introduction of photography. Cameras evolved from the camera obscura, and continued to change through many generations of photographic technology, including daguerreotypes, calotypes, dry plates, film, and digital cameras.

The camera obscura
A camera obscura (Latin: “dark chamber”) is an optical device that led to photography and the photographic camera. The device consists of a box or room with a hole in one side. Light from an external scene passes through the hole and strikes a surface inside, where it is reproduced, rotated 180 degrees (thus upside-down), but with color and perspective preserved. The image can be projected onto paper, and can then be traced to produce a highly accurate representation. The largest camera obscura in the world is on Constitution Hill in Aberystwyth, Wales.[1]

Using mirrors, as in an 18th-century overhead version, it is possible to project a right-side-up image. Another more portable type is a box with an angled mirror projecting onto tracing paper placed on the glass top, the image being upright as viewed from the back.

As the pinhole is made smaller, the image gets sharper, but the projected image becomes dimmer. With too small a pinhole, however, the sharpness worsens, due to diffraction. Most practical camera obscuras use a lens rather than a pinhole (as in a pinhole camera) because it allows a larger aperture, giving a usable brightness while maintaining focus.
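This sharpness-versus-brightness trade-off has a well-known approximate optimum. The sketch below is a minimal Python illustration of Lord Rayleigh's often-quoted rule of thumb, d ≈ 1.9·√(f·λ); the exact constant varies between sources, and the 550 nm wavelength and 100 mm box depth are illustrative assumptions rather than values from the text.

```python
import math

def optimal_pinhole_diameter(focal_length_mm: float,
                             wavelength_nm: float = 550.0) -> float:
    """Approximate sharpest pinhole diameter (mm) for a given
    pinhole-to-screen distance, per Rayleigh: d = 1.9 * sqrt(f * lambda).
    550 nm (green) is a common stand-in for visible light."""
    wavelength_mm = wavelength_nm * 1e-6  # nanometres -> millimetres
    return 1.9 * math.sqrt(focal_length_mm * wavelength_mm)

# Example: a box camera obscura with a 100 mm pinhole-to-screen distance.
print(f"{optimal_pinhole_diameter(100.0):.2f} mm")  # ~0.45 mm
```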
An artist using an 18th-century camera obscura to trace an image
Photographic cameras were a development of the camera obscura, a device possibly dating back to the ancient Chinese[1] and ancient Greeks,[2][3] which uses a pinhole or lens to project an image of the scene outside upside-down onto a viewing surface.

An Arab physicist, Ibn al-Haytham, published his Book of Optics in 1021 AD. He created the first pinhole camera after observing how light traveled through a window shutter, and realized that smaller holes would create sharper images. Ibn al-Haytham is also credited with inventing the first camera obscura.[4]

On 24 January 1544 the mathematician and instrument maker Reinerus Gemma Frisius of Leuven University used one to watch a solar eclipse, publishing a diagram of his method in De Radio Astronomica et Geometrica the following year.[5] In 1558 Giovanni Battista della Porta was the first to recommend the method as an aid to drawing.[6]
Early fixed images
The first partially successful photograph of a camera image was made in approximately 1816 by Nicéphore Niépce,[7][8] using a very small camera of his own making and a piece of paper coated with silver chloride, which darkened where it was exposed to light. No means of removing the remaining unaffected silver chloride was known to Niépce, so the photograph was not permanent, eventually becoming entirely darkened by the overall exposure to light necessary for viewing it. In the mid-1820s, Niépce used a sliding wooden box camera made by Parisian opticians Charles and Vincent Chevalier to experiment with photography on surfaces thinly coated with Bitumen of Judea.[9] The bitumen slowly hardened in the brightest areas of the image. The unhardened bitumen was then dissolved away. One of those photographs has survived.
Before the invention of photographic processes there was no way to preserve the images produced by these cameras apart from manually tracing them. The earliest cameras were room-sized, with space for one or more people inside; these gradually evolved into more and more compact models, and by Niépce’s time portable handheld cameras suitable for photography were readily available. The first camera that was small and portable enough to be practical for photography was envisioned by Johann Zahn in 1685, though it would be almost 150 years before such an application was possible.
The history of photography has roots in remote antiquity with the discovery of the principle of the camera obscura and the observation that some substances are visibly altered by exposure to light. As far as is known, nobody thought of bringing these two phenomena together to capture camera images in permanent form until around 1800, when Thomas Wedgwood made the first reliably documented although unsuccessful attempt. In the mid-1820s, Nicéphore Niépce succeeded, but several days of exposure in the camera were required and the earliest results were very crude. Niépce’s associate Louis Daguerre went on to develop the daguerreotype process, the first publicly announced photographic process, which required only minutes of exposure in the camera and produced clear, finely detailed results. It was commercially introduced in 1839, a date generally accepted as the birth year of practical photography.[1]
Daguerreotypes and calotypes
After Niépce’s death in 1833, his partner Louis Daguerre continued to experiment and by 1837 had created the first practical photographic process, which he named the daguerreotype and publicly unveiled in 1839.[10] Daguerre treated a silver-plated sheet of copper with iodine vapor to give it a coating of light-sensitive silver iodide. After exposure in the camera, the image was developed by mercury vapor and fixed with a strong solution of ordinary salt (sodium chloride). Henry Fox Talbot perfected a different process, the calotype, in 1840. As commercialized, both processes used very simple cameras consisting of two nested boxes. The rear box had a removable ground glass screen and could slide in and out to adjust the focus. After focusing, the ground glass was replaced with a light-tight holder containing the sensitized plate or paper and the lens was capped. Then the photographer opened the front cover of the holder, uncapped the lens, and counted off as many seconds—or minutes—as the lighting conditions seemed to require before replacing the cap and closing the holder. Despite this mechanical simplicity, high-quality achromatic lenses were standard.[11]
Daguerreotype

Daguerreotype of Louis Daguerre in 1844 by Jean-Baptiste Sabatier-Blot
The daguerreotype (/dəˈɡɛrɵtaɪp/; French: daguerréotype) process, or daguerreotypy, was the first publicly announced photographic process, and for nearly twenty years, it was the one most commonly used. It was invented by Louis-Jacques-Mandé Daguerre and introduced worldwide in 1839.[1][2][3] By 1860, new processes which were less expensive and produced more easily viewed images had almost completely replaced it. During the past few decades, there has been a small-scale revival of daguerreotypy among photographers interested in making artistic use of early photographic processes.

To make a daguerreotype, the daguerreotypist polished a sheet of silver-plated copper to a mirror finish; treated it with fumes that made its surface light-sensitive; exposed it in a camera for as long as was judged to be necessary, which could be as little as a few seconds for brightly sunlit subjects or much longer with less intense lighting; made the resulting latent image on it visible by fuming it with mercury vapor; removed its sensitivity to light by liquid chemical treatment; rinsed and dried it; then sealed the easily marred result behind glass in a protective enclosure.

Viewing a daguerreotype is unlike looking at any other type of photograph. The image does not sit on the surface of the metal, but appears to be floating in space, and the illusion of reality, especially with examples that are sharp and well exposed, is unique to the process.

The image is on a mirror-like silver surface, normally kept under glass, and will appear either positive or negative, depending on the angle at which it is viewed, how it is lit and whether a light or dark background is being reflected in the metal. The darkest areas of the image are simply bare silver; lighter areas have a microscopically fine light-scattering texture. The surface is very delicate, and even the lightest wiping can permanently scuff it. Some tarnish around the edges is normal, and any treatment to remove it should be done only by a specialized restorer.

Several types of antique photographs, most often ambrotypes and tintypes, but sometimes even old prints on paper, are very commonly misidentified as daguerreotypes, especially if they are in the small, ornamented cases in which daguerreotypes made in the US and UK were usually housed. The name “daguerreotype” correctly refers only to one very specific image type and medium, the product of a process that was in wide use only from the early 1840s to the late 1850s.

History
Since the Renaissance era, artists and inventors had searched for a mechanical method of capturing visual scenes.[4] Previously, using the camera obscura, artists would manually trace what they saw, or use the optical image in the camera as a basis for solving the problems of perspective and parallax, and deciding color values. The camera obscura’s optical reduction of a real scene in three-dimensional space to a flat rendition in two dimensions influenced western art, so that at one point, it was thought that images based on optical geometry (perspective) belonged to a more advanced civilization. Later, with the advent of Modernism, the absence of perspective in oriental art from China, Japan and in Persian miniatures was revalued.

In the early seventeenth century, the Italian physician and chemist Angelo Sala wrote that powdered silver nitrate was blackened by the sun, but did not find any practical application of the phenomenon.

Previous discoveries of photosensitive methods and substances contributed to the development of the daguerreotype: silver nitrate by Albertus Magnus in the 13th century,[5] a silver and chalk mixture by Johann Heinrich Schulze in 1724,[6][7] and Joseph Niépce’s bitumen-based heliography in 1822.[4][8]

The first reliably documented attempt to capture the image formed in a camera obscura was made by Thomas Wedgwood as early as the 1790s, but according to an 1802 account of his work by Sir Humphry Davy:

“The images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver. To copy these images was the first object of Mr. Wedgwood in his researches on the subject, and for this purpose he first used the nitrate of silver, which was mentioned to him by a friend, as a substance very sensible to the influence of light; but all his numerous experiments as to their primary end proved unsuccessful.”[9]

Development in France
In 1829 French artist and chemist Louis Jacques-Mandé Daguerre, contributing a cutting-edge camera design, partnered with Niépce, a leader in photochemistry, to further develop their technologies.[4] The two men came into contact through their optician, Chevalier, who supplied lenses for their camera obscuras.

Niépce’s aim originally had been to find a method to reproduce prints and drawings for lithography. He had started out experimenting with light-sensitive materials, made a contact print from a drawing, and then went on to make the first photomechanical record of an image in a camera obscura—the world’s first photograph. Niépce’s method was to coat a pewter plate with bitumen of Judea (asphalt); the action of the light differentially hardened the bitumen. The plate was then washed with a mixture of oil of lavender and turpentine, leaving a relief image. Niépce called his process heliography, and the exposure for the first successful photograph was eight hours.

Early experiments required hours of exposure in the camera to produce visible results. Modern photo-historians consider the stories of Daguerre discovering mercury development by accident (a bowl of mercury left in a cupboard or, alternatively, a broken thermometer) to be spurious.[10] However, there is another story of a fortunate accident, related by Louis Figuier, of a silver spoon lying on an iodized silver plate which left a perfect image of its design on the plate.[11] Noticing this, Daguerre wrote to Niépce on 21 May 1831 suggesting the use of iodized silver plates as a means of obtaining light images in the camera. Letters from Niépce to Daguerre dated 24 June and 8 November 1831 show that Niépce was unsuccessful in obtaining satisfactory results following Daguerre’s suggestion, although he had produced a negative on an iodized silver plate in the camera. Niépce’s letters to Daguerre dated 29 January and 3 March 1832 show that the use of iodized silver plates was due to Daguerre and not Niépce.[12]

Jean-Baptiste Dumas, who was president of the National Society for the Encouragement of Science[13] and a chemist, put his laboratory at Daguerre’s disposal. According to Austrian chemist Josef Maria Eder, Daguerre was not versed in chemistry and it was Dumas who suggested Daguerre use sodium hyposulfite, discovered by Herschel in 1819, as a fixer to dissolve the unexposed silver salts.[7][12]

First mention in print (1835) and public announcement (1839)
At the end of a review of one of Daguerre’s Diorama spectacles (a painting of a landslide in the Vallée de Goldau) in the Journal des artistes of 27 September 1835,[14] a paragraph made passing mention of a rumour going around the Paris studios of Daguerre’s attempts to make a visual record on metal plates of the fleeting image produced by the camera obscura:

“It is said that Daguerre has found the means to collect, on a plate prepared by him, the image produced by the camera obscura, in such a way that a portrait, a landscape, or any view, projected upon this plate by the ordinary camera obscura, leaves an imprint in light and shade there, and thus presents the most perfect of all drawings … a preparation put over this image preserves it for an indefinite time … the physical sciences have perhaps never presented a marvel comparable to this one.”[15]

A further clue to fixing the date of invention of the process is that when the Paris correspondent of the London periodical The Athenaeum reported the public announcement of the daguerreotype in 1839, he mentioned that the daguerreotypes now being produced were considerably better than the ones he had seen “four years earlier”.

François Arago announced the daguerreotype process at a meeting of the French Academy of Sciences on 7 January 1839. Daguerre was present, but complained of a sore throat. Later that month William Fox Talbot announced his silver chloride “sensitive paper” process.[16] Together, these announcements caused commentators to choose 1839 as the year photography was born, or made public, although Daguerre had been producing daguerreotypes since 1835 and had kept the process secret.[17]

Daguerre and Niépce had together signed an agreement in which remuneration for the invention would be paid for by subscription. However, the campaign they launched to finance the invention failed, and it was François Arago who then took up their cause; his views on the system of patenting inventions can be gathered from speeches he made later in the House of Deputies (he apparently thought the English patent system had advantages over the French one).

Daguerre did not patent and profit from his invention in the usual way. Instead, it was arranged that the French government would acquire the rights in exchange for a lifetime pension. The government would then present the daguerreotype process “free to the world” as a gift, which it did on 19 August 1839. However, five days before this, Miles Berry, a patent agent acting on Daguerre’s behalf, filed for patent No. 8194 of 1839: “A New or Improved Method of Obtaining the Spontaneous Reproduction of all the Images Received in the Focus of the Camera Obscura.” The patent applied to “England, Wales, and the town of Berwick-upon-Tweed, and in all her Majesty’s Colonies and Plantations abroad.”[18][19] This was the usual wording of English patent specifications before 1852. It was only after the 1852 Act, which unified the patent systems of England, Ireland and Scotland, that a single patent protection was automatically extended to the whole of the British Isles, including the Channel Isles and the Isle of Man. Richard Beard bought the patent rights from Miles Berry, and also obtained a Scottish patent, which he apparently did not enforce. The United Kingdom and the “Colonies and Plantations abroad” therefore became the only places where a license was legally required to make and sell daguerreotypes.[19][20]

Much of Daguerre’s early work was destroyed when his home and studio caught fire on 8 March 1839, while the painter Samuel Morse was visiting from the US.[21][page needed] Malcolm Daniel points out that “fewer than twenty-five securely attributed photographs by Daguerre survive—a mere handful of still lifes, Parisian views, and portraits from the dawn of photography.”[22]

Calotype or talbotype is an early photographic process introduced in 1841 by William Henry Fox Talbot,[1] using paper[2] coated with silver iodide. The term calotype comes from the Greek καλός (kalos), “beautiful”, and τύπος (tupos), “impression”.

Late 19th century studio camera
Dry plates
Collodion dry plates had been available since 1855, thanks to the work of Désiré van Monckhoven, but it was not until the invention of the gelatin dry plate in 1871 by Richard Leach Maddox that the wet plate process could be rivaled in quality and speed. The 1878 discovery that heat-ripening a gelatin emulsion greatly increased its sensitivity finally made so-called “instantaneous” snapshot exposures practical. For the first time, a tripod or other support was no longer an absolute necessity. With daylight and a fast plate or film, a small camera could be hand-held while taking the picture. The ranks of amateur photographers swelled and informal “candid” portraits became popular. There was a proliferation of camera designs, from single- and twin-lens reflexes to large and bulky field cameras, simple box cameras, and even “detective cameras” disguised as pocket watches, hats, or other objects.

The short exposure times that made candid photography possible also necessitated another innovation, the mechanical shutter. The very first shutters were separate accessories, though built-in shutters were common by the end of the 19th century.[11]

Kodak and the birth of film

Kodak No. 2 Brownie box camera, circa 1910
The use of photographic film was pioneered by George Eastman, who started manufacturing paper film in 1885 before switching to celluloid in 1889. His first camera, which he called the “Kodak,” was first offered for sale in 1888. It was a very simple box camera with a fixed-focus lens and single shutter speed, which along with its relatively low price appealed to the average consumer. The Kodak came pre-loaded with enough film for 100 exposures and needed to be sent back to the factory for processing and reloading when the roll was finished. By the end of the 19th century Eastman had expanded his lineup to several models including both box and folding cameras.

In 1900, Eastman took mass-market photography one step further with the Brownie, a simple and very inexpensive box camera that introduced the concept of the snapshot. The Brownie was extremely popular and various models remained on sale until the 1960s.

Film also allowed the movie camera to develop from an expensive toy to a practical commercial tool.

Despite the advances in low-cost photography made possible by Eastman, plate cameras still offered higher-quality prints and remained popular well into the 20th century. To compete with rollfilm cameras, which offered a larger number of exposures per loading, many inexpensive plate cameras from this era were equipped with magazines to hold several plates at once. Special backs for plate cameras allowing them to use film packs or rollfilm were also available, as were backs that enabled rollfilm cameras to use plates.

Except for a few special types such as Schmidt cameras, most professional astrographs continued to use plates until the end of the 20th century when electronic photography replaced them.
The metal-based daguerreotype process soon had some competition from the paper-based calotype negative and salt print processes invented by Henry Fox Talbot. Subsequent innovations reduced the required camera exposure time from minutes to seconds and eventually to a small fraction of a second; introduced new photographic media which were more economical, sensitive or convenient, including roll films for casual use by amateurs; and made it possible to take pictures in natural color as well as in black-and-white.

The commercial introduction of computer-based electronic digital cameras in the 1990s soon revolutionized photography. During the first decade of the 21st century, traditional film-based photochemical methods were increasingly marginalized as the practical advantages of the new technology became widely appreciated and the image quality of moderately priced digital cameras was continually improved.

Etymology
The coining of the word “photography” is usually attributed to Sir John Herschel in 1839. It is based on the Greek φῶς (phōs, genitive phōtós), meaning “light”, and γραφή (graphê), meaning “drawing, writing”, together meaning “drawing with light”.[2]

Technological background

A camera obscura used for drawing images
Photography is the result of combining several different technical discoveries. Long before the first photographs were made, the Chinese philosopher Mo Ti and the Greek thinkers Aristotle and Euclid described a pinhole camera in the 5th and 4th centuries BCE.[3][4] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments.[5]

Ibn al-Haytham (Alhazen) (965 in Basra – c. 1040 in Cairo) studied the camera obscura and pinhole camera,[4][6] Albertus Magnus (1193/1206–80) discovered silver nitrate, and Georges Fabricius (1516–71) discovered silver chloride. Daniel Barbaro described a diaphragm in 1568. Wilhelm Homberg described how light darkened some chemicals (the photochemical effect) in 1694. The novel Giphantie, by the French writer Tiphaigne de la Roche (1729–74), described what could be interpreted as photography.

Development of chemical photography
Monochrome process

Earliest known surviving heliographic engraving, 1825, printed from a metal plate made by Joseph Nicéphore Niépce with his “heliographic process”.[7] The plate was exposed under an ordinary engraving, copying it by photographic means. This was a step towards the first permanent photograph from nature taken with a camera obscura.
Around the year 1800, Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow-copies of paintings on glass, it was reported in 1802 that “[t]he images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver.” The shadow images eventually darkened all over because “[n]o attempts that have been made to prevent the uncoloured part of the copy or profile from being acted upon by light have as yet been successful.”[8] Wedgwood may have prematurely abandoned his experiments due to frail and failing health; he died aged 34 in 1805.

“Boulevard du Temple”, a daguerreotype made by Louis Daguerre in 1838, is generally accepted as the earliest photograph to include people. It is a view of a busy street, but because the exposure time was at least ten minutes, the moving traffic left no trace. Only the two men near the bottom left corner, one apparently having his boots polished by the other, stayed in one place long enough to be visible.
In 1816 Nicéphore Niépce, using paper coated with silver chloride, succeeded in photographing the images formed in a small camera, but the photographs were negatives, darkest where the camera image was lightest and vice versa, and they were not permanent in the sense of being reasonably light-fast; like earlier experimenters, Niépce could find no way to prevent the coating from darkening all over when it was exposed to light for viewing. Disenchanted with silver salts, he turned his attention to light-sensitive organic substances.[9]

Robert Cornelius, self-portrait, Oct. or Nov. 1839, approximate quarter plate daguerreotype. The back reads, “The first light picture ever taken.”

One of the oldest photographic portraits known, made by John William Draper of New York in 1839[10] or 1840, of his sister, Dorothy Catherine Draper.
The oldest surviving permanent photograph of the image formed in a camera was created by Niépce in 1826 or 1827.[1] It was made on a polished sheet of pewter and the light-sensitive substance was a thin coating of bitumen, a naturally occurring petroleum tar, which was dissolved in lavender oil, applied to the surface of the pewter and allowed to dry before use.[11] After a very long exposure in the camera (traditionally said to be eight hours, but in fact probably several days),[12] the bitumen was sufficiently hardened in proportion to its exposure to light that the unhardened part could be removed with a solvent, leaving a positive image with the light regions represented by hardened bitumen and the dark regions by bare pewter.[11] To see the image plainly, the plate had to be lit and viewed in such a way that the bare metal appeared dark and the bitumen relatively light.[9]

In partnership, Niépce (in Chalon-sur-Saône) and Louis Daguerre (in Paris) refined the bitumen process,[13] substituting a more sensitive resin and a very different post-exposure treatment that yielded higher-quality and more easily viewed images. Exposure times in the camera, although somewhat reduced, were still measured in hours.[9]

In 1833 Niépce died suddenly, leaving his notes to Daguerre. More interested in silver-based processes than Niépce had been, Daguerre experimented with photographing camera images directly onto a mirror-like silver-surfaced plate that had been fumed with iodine vapor, which reacted with the silver to form a coating of silver iodide. As with the bitumen process, the result appeared as a positive when it was suitably lit and viewed. Exposure times were still impractically long until Daguerre made the pivotal discovery that an invisibly slight or “latent” image produced on such a plate by a much shorter exposure could be “developed” to full visibility by mercury fumes. This brought the required exposure time down to a few minutes under optimum conditions. A strong hot solution of common salt served to stabilize or fix the image by removing the remaining silver iodide. On 7 January 1839, this first complete practical photographic process was announced at a meeting of the French Academy of Sciences,[14] and the news quickly spread. At first, all details of the process were withheld and specimens were shown only at Daguerre’s studio, under his close supervision, to Academy members and other distinguished guests.[15] Arrangements were made for the French government to buy the rights in exchange for pensions for Niépce’s son and Daguerre and present the invention to the world (with the de facto exception of Great Britain) as a free gift.[16] Complete instructions were published on 19 August 1839.[17]

After reading early reports of Daguerre’s invention, William Henry Fox Talbot, who had succeeded in creating stabilized photographic negatives on paper in 1835, worked on perfecting his own process. In early 1839 he acquired a key improvement, an effective fixer, from John Herschel, the astronomer, who had previously shown that hyposulfite of soda (commonly called “hypo” and now known formally as sodium thiosulfate) would dissolve silver salts.[18] News of this solvent also reached Daguerre, who quietly substituted it for his less effective hot salt water treatment.[19]

A calotype print showing the American photographer Frederick Langenheim (circa 1849). Note that the caption on the photo calls the process “Talbotype”.
Talbot’s early silver chloride “sensitive paper” experiments required camera exposures of an hour or more. In 1840, Talbot invented the calotype process, which, like Daguerre’s process, used the principle of chemical development of a faint or invisible “latent” image to reduce the exposure time to a few minutes. Paper with a coating of silver iodide was exposed in the camera and developed into a translucent negative image. Unlike a daguerreotype, which could only be copied by rephotographing it with a camera, a calotype negative could be used to make a large number of positive prints by simple contact printing. The calotype had yet another distinction compared to other early photographic processes, in that the finished product lacked fine clarity due to its translucent paper negative. This was seen as a positive attribute for portraits because it softened the appearance of the human face. Talbot patented this process,[20] which greatly limited its adoption, and spent many years pressing lawsuits against alleged infringers. He attempted to enforce a very broad interpretation of his patent, earning himself the ill will of photographers who were using the related glass-based processes later introduced by other inventors, but he was eventually defeated. Nonetheless, Talbot’s developed-out silver halide negative process is the basic technology used by chemical film cameras today. Hippolyte Bayard had also developed a method of photography but delayed announcing it, and so was not recognized as its inventor.

In 1839, John Herschel made the first glass negative, but his process was difficult to reproduce. The Slovene Janez Puhar invented a process for making photographs on glass in 1841; it was recognized on June 17, 1852 in Paris by the Académie Nationale Agricole, Manufacturière et Commerciale.[21] In 1847, Nicéphore Niépce’s cousin, the chemist Abel Niépce de Saint-Victor, published his invention of a process for making glass plates with an albumen emulsion; the Langenheim brothers of Philadelphia and John Whipple and William Breed Jones of Boston also invented workable negative-on-glass processes in the mid-1840s.[22]

In 1851 Frederick Scott Archer invented the collodion process.[citation needed] Photographer and children’s author Lewis Carroll used this process. (Carroll refers to the process as “Tablotype” [sic] in the story “A Photographer’s Day Out”)[23]

Roger Fenton’s assistant seated on Fenton’s photographic van, Crimea, 1855.
Herbert Bowyer Berkeley experimented with his own version of collodion emulsions after Samman introduced the idea of adding dithionite to the pyrogallol developer.[citation needed] Berkeley discovered that with his own addition of sulfite, to absorb the sulfur dioxide given off by the dithionite in the developer, dithionite was not required in the developing process. In 1881 he published his discovery. Berkeley’s formula contained pyrogallol, sulfite and citric acid. Ammonia was added just before use to make the formula alkaline. The new formula was sold by the Platinotype Company in London as Sulpho-Pyrogallol Developer.[24]

Nineteenth-century experimentation with photographic processes frequently became proprietary. The German-born, New Orleans photographer Theodore Lilienthal successfully sought legal redress in an 1881 infringement case involving his “Lambert Process” in the Eastern District of Louisiana.

Popularization

General view of The Crystal Palace at Sydenham by Philip Henry Delamotte, 1854

Mid 19th century “Brady stand” photo model’s armrest table, meant to keep portrait models more still during long exposure times (studio equipment nicknamed after the famed US photographer, Mathew Brady)

1855 cartoon satirizing problems with posing for Daguerreotypes: slight movement during exposure resulted in blurred features, red-blindness made rosy complexions dark.

A photographer appears to be photographing himself in a 19th-century photographic studio. Note clamp to hold the poser’s head still. An 1893 satire on photographic procedures already becoming obsolete at the time.

A comparison of common print sizes used in photographic studios during the 19th century
The daguerreotype proved popular in response to the demand for portraiture that emerged from the middle classes during the Industrial Revolution.[citation needed] This demand, which could not be met in volume and in cost by oil painting, added to the push for the development of photography.

In 1847, Count Sergei Lvovich Levitsky designed a bellows camera that significantly improved the process of focusing. This adaptation influenced the design of cameras for decades and is still found in use today in some professional cameras. While in Paris, Levitsky would become the first to introduce interchangeable decorative backgrounds in his photos, as well as the retouching of negatives to reduce or eliminate technical deficiencies.[citation needed] Levitsky was also the first photographer to portray a person in different poses, and even in different clothes, within a single photograph (for example, the subject plays the piano and listens to himself).[citation needed]

Roger Fenton and Philip Henry Delamotte helped popularize the new way of recording events, the first by his Crimean war pictures, the second by his record of the disassembly and reconstruction of The Crystal Palace in London. Other mid-nineteenth-century photographers established the medium as a more precise means than engraving or lithography of making a record of landscapes and architecture: for example, Robert Macpherson’s broad range of photographs of Rome, the interior of the Vatican, and the surrounding countryside became a sophisticated tourist’s visual record of his own travels.

By 1849, images captured by Levitsky on a mission to the Caucasus were exhibited by the famous Parisian optician Chevalier at the Paris Exposition of the Second Republic as an advertisement for his lenses. These photos would receive the Exposition’s gold medal, the first time a prize of its kind had ever been awarded to a photograph.[citation needed]

That same year, 1849, in his St. Petersburg studio, Levitsky first proposed the idea of artificially lighting subjects in a studio setting using electric lighting along with daylight. He would say of its use, “as far as I know this application of electric light has never been tried; it is something new, which will be accepted by photographers because of its simplicity and practicality”.[citation needed]

In 1851, at an exhibition in Paris, Levitsky would win the first ever gold medal awarded for a portrait photograph.[citation needed]

In America, by 1851 a broadside by daguerreotypist Augustus Washington was advertising prices ranging from 50 cents to $10.[25] However, daguerreotypes were fragile and difficult to copy. Photographers encouraged chemists to refine the process of making many copies cheaply, which eventually led them back to Talbot’s process.

Ultimately, the photographic process matured through a series of refinements and improvements in its first 20 years. In 1884 George Eastman, of Rochester, New York, developed dry gel on paper, or film, to replace the photographic plate so that a photographer no longer needed to carry boxes of plates and toxic chemicals around. In July 1888 Eastman’s Kodak camera went on the market with the slogan “You press the button, we do the rest”. Now anyone could take a photograph and leave the complex parts of the process to others, and photography became available to the mass market in 1901 with the introduction of the Kodak Brownie.

Color photography

The first durable color photograph, taken by Thomas Sutton in 1861
A practical means of color photography was sought from the very beginning. Results were demonstrated by Edmond Becquerel as early as 1848, but exposures lasting for hours or days were required and the captured colors were so light-sensitive they would only bear very brief inspection in dim light.

The first durable color photograph was a set of three black-and-white photographs taken through red, green and blue color filters and shown superimposed by using three projectors with similar filters. It was taken by Thomas Sutton in 1861 for use in a lecture by the Scottish physicist James Clerk Maxwell, who had proposed the method in 1855.[26] The photographic emulsions then in use were insensitive to most of the spectrum, so the result was very imperfect and the demonstration was soon forgotten. Maxwell’s method is now most widely known through the early 20th century work of Sergei Prokudin-Gorskii. It was made practical by Hermann Wilhelm Vogel’s 1873 discovery of a way to make emulsions sensitive to the rest of the spectrum, gradually introduced into commercial use beginning in the mid-1880s.
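Maxwell’s additive method is straightforward to simulate. Here is a minimal NumPy sketch (the toy scene and equal channel weighting are illustrative assumptions): the three filtered black-and-white records simply become the red, green and blue channels of one image, the digital analogue of superimposing three filtered projections.

```python
import numpy as np

def additive_synthesis(red_record, green_record, blue_record):
    """Stack three filtered black-and-white records (2-D arrays of
    brightness in [0, 1]) into one RGB image, as in Maxwell's
    three-projector demonstration of 1861."""
    return np.stack([red_record, green_record, blue_record], axis=-1)

# Toy scene: a red object on the left half, white on the right half.
h, w = 4, 8
red = np.ones((h, w))                              # red light passes the red filter
green = np.ones((h, w)); green[:, : w // 2] = 0.0  # red object is dark through green
blue = np.ones((h, w)); blue[:, : w // 2] = 0.0    # ...and through blue
color = additive_synthesis(red, green, blue)       # shape (4, 8, 3)
```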

Two French inventors, Louis Ducos du Hauron and Charles Cros, working unknown to each other during the 1860s, famously unveiled their nearly identical ideas on the same day in 1869. Included were methods for viewing a set of three color-filtered black-and-white photographs in color without having to project them, and for using them to make full-color prints on paper.[27]

The first widely used method of color photography was the Autochrome plate, commercially introduced in 1907. It was based on one of Louis Ducos du Hauron’s ideas: instead of taking three separate photographs through color filters, take one through a mosaic of tiny color filters overlaid on the emulsion and view the results through an identical mosaic. If the individual filter elements were small enough, the three primary colors would blend together in the eye and produce the same additive color synthesis as the filtered projection of three separate photographs. Autochrome plates had an integral mosaic filter layer composed of millions of dyed potato starch grains. Reversal processing was used to develop each plate into a transparent positive that could be viewed directly or projected with an ordinary projector. The mosaic filter layer absorbed about 90 percent of the light passing through, so a long exposure was required and a bright projection or viewing light was desirable. Competing screen plate products soon appeared and film-based versions were eventually made. All were expensive and until the 1930s none was “fast” enough for hand-held snapshot-taking, so they mostly served a niche market of affluent advanced amateurs.
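The same additive principle, compressed into a single plate, can also be sketched in code. The toy simulation below makes several simplifying assumptions (a random one-grain-per-pixel mosaic, ideal filters, and a 3×3 box blur standing in for the eye’s blending of adjacent grains), so it illustrates the geometry of the screen-plate idea rather than the real chemistry:

```python
import numpy as np

rng = np.random.default_rng(0)

def autochrome_simulation(scene: np.ndarray) -> np.ndarray:
    """scene: RGB image, shape (H, W, 3), values in [0, 1]."""
    h, w, _ = scene.shape
    # Random mosaic: each pixel sits under one dyed starch grain (R, G or B).
    mosaic = rng.integers(0, 3, size=(h, w))
    # The plate records one color channel per pixel; viewing the developed
    # positive through the *same* mosaic shows only that channel there.
    plate = np.zeros_like(scene)
    for c in range(3):
        mask = mosaic == c
        plate[mask, c] = scene[mask, c]
    # Crude stand-in for the eye blending neighboring grains: 3x3 box blur.
    padded = np.pad(plate, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blended = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    # Each channel was sampled at roughly a third of the pixels.
    return np.clip(blended * 3.0, 0.0, 1.0)

viewed = autochrome_simulation(np.full((32, 32, 3), 0.5))  # flat gray test scene
```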
35 mm

Leica I, 1925

Argus C3, 1939
See also: History of 135 film
A number of manufacturers started to use 35mm film for still photography between 1905 and 1913. The first 35mm cameras available to the public, and the first to reach significant sales, were the Tourist Multiple, in 1913, and the Simplex, in 1914.[citation needed]

Oskar Barnack, who was in charge of research and development at Leitz, decided to investigate using 35 mm cine film for still cameras while attempting to build a compact camera capable of making high-quality enlargements. He built his prototype 35 mm camera (Ur-Leica) around 1913, though further development was delayed for several years by World War I. After the war, Leitz test-marketed the design between 1923 and 1924, receiving enough positive feedback that the camera was put into production as the Leica I (for Leitz camera) in 1925. The Leica’s immediate popularity spawned a number of competitors, most notably the Contax (introduced in 1932), and cemented the position of 35 mm as the format of choice for high-end compact cameras.

Kodak got into the market with the Retina I in 1934, which introduced the 135 cartridge used in all modern 35 mm cameras. Although the Retina was comparatively inexpensive, 35 mm cameras were still out of reach for most people and rollfilm remained the format of choice for mass-market cameras. This changed in 1936 with the introduction of the inexpensive Argus A and to an even greater extent in 1939 with the arrival of the immensely popular Argus C3. Although the cheapest cameras still used rollfilm, 35 mm film had come to dominate the market by the time the C3 was discontinued in 1966.

The fledgling Japanese camera industry began to take off in 1936 with the Canon 35 mm rangefinder, an improved version of the 1933 Kwanon prototype. Japanese cameras would begin to become popular in the West after Korean War veterans and soldiers stationed in Japan brought them back to the United States and elsewhere.
A new era in color photography began with the introduction of Kodachrome film, available for 16 mm home movies in 1935 and 35 mm slides in 1936. It captured the red, green and blue color components in three layers of emulsion. A complex processing operation produced complementary cyan, magenta and yellow dye images in those layers, resulting in a subtractive color image. Maxwell’s method of taking three separate filtered black-and-white photographs continued to serve special purposes into the 1950s and beyond, and Polachrome, an “instant” slide film that used the Autochrome’s additive principle, was available until 2003, but the few color print and slide films still being made in 2015 all use the multilayer emulsion approach pioneered by Kodachrome.
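The contrast with the additive methods above is compact enough to state in code. In this idealized sketch (real dye chemistry is far less linear), each developed layer holds the complementary dye, so the cyan, magenta and yellow densities are one minus the red, green and blue exposures, and white light passing through all three layers reconstructs the original color:

```python
def rgb_to_cmy(r, g, b):
    """Idealized subtractive encoding: complementary dye density per layer."""
    return (1.0 - r, 1.0 - g, 1.0 - b)  # cyan, magenta, yellow

def viewed_through_dyes(c, m, y):
    """White light filtered by the three dye layers yields the original RGB."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

rgb = (0.75, 0.5, 0.25)  # dyadic values chosen so the float round trip is exact
assert viewed_through_dyes(*rgb_to_cmy(*rgb)) == rgb
```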

Ch.2: Retro Cameras and Film

LIGHTING


In the early days of photography the only source of light was, of course, the sun, so most photography depended upon long days and good weather. It is said that Rejlander used a cat as a primitive exposure meter: placing the cat where the sitter should be, he judged by looking at its eyes whether it was worth taking any photographs or whether his sitter should go home and wait for better times! The nearer to the birth of photography, the greater the amount of lighting needed, as the first chemical emulsions were very insensitive.

The first artificial light photography dates back as far as 1839, when L. Ibbetson used oxy-hydrogen light (also known as limelight) when photographing microscopic objects; he made a daguerreotype in five minutes which, he claimed, would have taken twenty-five minutes in normal daylight.

Other possibilities were explored. Nadar, for example, photographed the sewers of Paris using battery-operated lighting. Later, arc lamps were introduced, but it was not until 1877 that the first studio lit by electric light was opened by Van der Weyde, who had a studio in Regent Street. Powered by a gas-driven dynamo, the light was sufficient to permit exposures of some 2 to 3 seconds for a carte-de-visite.

Soon a number of studios started using arc lighting. One advert (by Arthur Langton, working in Belgravia, London) boldly proclaims:

“My electric light installation is perhaps the more powerful in London. Photographs superior to daylight, Pictures can now be taken in any weather and at any time.”

More from Arthur Langton’s advertisement:

“CAUTION Many photographers advertise ‘portrits taken by electric light’ but 9 out of 10 do not possess an electric light, owing to its costlinss they use an inferior and nasty substitute… a pyrotechnic powder which gives off poisonos fumes.”

(His spelling, by the way!)

In June 1850 an experiment by Fox Talbot, probably using static electricity stored in Leyden jars, was conducted at the Royal Society: a page of The Times was fastened onto a wheel, which then revolved rapidly. Writing about this the following year, Fox Talbot stated:

“From this experiment the conclusion…is that it is within our power to obtain pictures of all moving objects….providing we have the means of sufficiently illuminating them with a sudden electric flash.”

The object then had been to arrest fast action. A few years later William Crookes, editor of the Photographic News (October 1859), was responding to a query put to him on how to light some caves:

“A…brilliant light…can be obtained by burning….magnesium in oxygen. A piece of magnesium wire held by one end in the hand, may be lighted at the other extremity by holding it to a candle… It then burns away of its own accord evolving a light insupportably brilliant to the unprotected eye….”

That same year Professor Robert Bunsen (of Bunsen burner fame) was also advocating the use of magnesium. The first portrait using magnesium was taken by Alfred Brothers of Manchester (22 February 1864); some of the results of his experiments may be found in the Manchester Museum of Science and Technology. Magnesium was however very expensive at that time and did not come into general use until there was a dramatic fall in its cost a decade later. This, coupled with the introduction of dry plates in the 1880s, soon led to the introduction of magnesium flashlamps. They all used the same principle: a small amount of magnesium powder would be blown, using a small rubber pump, through a spirit flame, producing a bright flash lasting about 1/15s. It also produced much smoke and ash!

Then in the late 1880s it was discovered that magnesium powder, if mixed with an oxidising agent such as potassium chlorate, would ignite with very little persuasion. This led to the introduction of flash powder. It would be spread on a metal dish and set off by percussion, by sparks from a flint wheel, by an electrical fuse, or simply by applying a taper. However, the explosive flash powder could be quite dangerous if misused. It was not really superseded until the invention of the flashbulb in the late 1920s.

Early flash photography was not synchronised. This meant that one had to put a camera on a tripod, open the shutter, trigger the flash, and close the shutter again – a technique known as open flash.

Certainly early flash photography could be a hazardous business. It is said, for example, that Riis, working during this period, twice managed to set the places he was photographing on fire!

In fact, the “open flash” technique, with flash powder, was still being used by some photographers until the 1950s. This was particularly so when, for example, a large building was being photographed; with someone operating the shutter for multiple exposures, it was possible to use the flash at different places, to provide more even illumination.

By varying the number of grammes of flash powder, the distance covered could also be varied. To give some idea, using a panchromatic film of about 25 ASA and the open flash technique at f8, a measure of 0.1 grammes of flash powder would permit a flash-to-subject distance of about 8 feet, whilst 2.0 grammes would permit an exposure 30 feet away.
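Those two data points are roughly consistent with the inverse-square law: the light reaching the subject grows with the charge burned and falls off with the square of the distance, so the usable distance scales with the square root of the powder mass. Here is a small Python sketch under that assumption, calibrated to the 0.1-gramme/8-foot figure above (illustrative arithmetic only, not a handling guide for flash powder):

```python
import math

# Calibration from the text: 0.1 g of powder covered about 8 ft
# at f8 on roughly 25 ASA film with the open flash technique.
BASE_GRAMS, BASE_FEET = 0.1, 8.0

def flash_distance_feet(grams: float) -> float:
    """Inverse-square estimate: distance scales with sqrt(powder mass)."""
    return BASE_FEET * math.sqrt(grams / BASE_GRAMS)

for grams in (0.1, 0.5, 2.0):
    print(f"{grams:4.1f} g -> ~{flash_distance_feet(grams):.0f} ft")
# 2.0 g predicts ~36 ft, in the same ballpark as the 30 ft quoted above.
```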

The earliest known flash bulb was described in 1883. It consisted of a two-pint stoppered bottle of oxygen, which had white paper stuck on it to act as a reflector. To set the flash off, a spiral of ten or so inches of magnesium on a wire skewer was pre-lighted and plunged into the oxygen. It was not until 1927 that the simple flash-bulb appeared commercially, and 1931 when Harold Edgerton produced the first electronic flash tube.

Makeup

HISTORY

Makeup has a long theatrical history. The early film industry naturally looked to traditional stage techniques, but these proved inadequate almost immediately. One of makeup’s first problems was with celluloid. Early filmmakers used orthochromatic film stock, which had a limited color-range sensitivity. It reacted to red pigmentation, darkening white skin and nullifying solid reds. To counter the effect, Caucasian actors wore heavy pink greasepaint (Stein’s #2) as well as black eyeliner and dark red lipstick (which, if applied too lightly, appeared white on screen), but these masklike cosmetics smeared as actors sweated under the intense lights. Furthermore, until the mid-teens, actors applied their own makeup and their image was rarely uniform from scene to scene. As the close-up became more common, makeup focused on the face, which had to be understood from a hugely magnified perspective, making refinements essential. In the pursuit of these radical changes, two names stand out as Hollywood’s progenitor artists: Max Factor (1877–1938) and George Westmore (1879–1931). Both started as wigmakers and both recognized that the crucial difference between stage and screen was a lightness of touch. Both invented enduring cosmetics and makeup tricks for cinema and each, at times, took credit for the same invention (such as false eyelashes).

Factor (originally Firestein), a Russian émigré with a background in barbering, arrived in the United States in 1904 and moved to Los Angeles in 1908, where he set up a perfume, hair care, and cosmetics business catering to theatrical needs. He also distributed well-known greasepaints, which were too thick for screen use and photographed badly. By 1910, Factor had begun to divide the theatrical from the cinematic as he experimented to find appropriate cosmetics for film. His Greasepaint was the first makeup used in a screen test, for Cleopatra (1912), and by 1914 Factor had invented a twelve-toned cream version, which applied thinly, allowed for individual skin subtleties, and conformed more comfortably with celluloid. In the early 1920s panchromatic film began to replace orthochromatic, causing fewer color flaws, and in 1928 Factor completed work on Panchromatic MakeUp, which had a variety of hues. In 1937, the year before he died, he dealt with the new Technicolor problems by adapting theatrical “pancake” into a water-soluble powder, applicable with a sponge, excellent for film’s and, eventually, television’s needs. It photographed very well, eliminating the shine induced by Technicolor lighting, and its basic translucence imparted a delicate look. Known as Pancake makeup, it was first used in Vogues of 1938 (1937) and The Goldwyn Follies (1938), quickly becoming not only the film industry norm but a public sensation. Once movie stars, delighting in its lightness, began to wear it offscreen, Pancake became de rigueur for fashion-conscious women. After Factor’s death, his empire continued to set standards and still covers cinema’s cosmetic needs, from fingernails to toupees.

The English wigmaker George Westmore, for whom the Makeup Artist and Hair Stylist Guild’s George Westmore Lifetime Achievement Award is named, founded the first (and tiny) film makeup department, at Selig Studio in 1917. He also worked at Triangle but soon was freelancing across the major studios. Like Factor, he understood that cosmetic and hair needs were personal and would make up stars such as Mary Pickford (whom he relieved of having to curl her famous hair daily by making false ringlets) or the Talmadge sisters in their homes before they left for work in the morning.

He fathered three legendary and scandalous generations of movie makeup artists, beginning with his six sons—Monte (1902–1940), Perc (1904–1970), Ern (1904–1967), Wally (1906–1973), Bud (1918–1973), and Frank (1923–1985)—who soon eclipsed him in Hollywood. By 1926, Monte, Perc, Ern, and Wally had penetrated the industry to become the chief makeup artists at four major studios, and all continued to break ground in new beauty and horror illusions until the end of their careers. In 1921, after dishwashing at Famous Players-Lasky, Monte became Rudolph Valentino’s sole makeup artist. (The actor had been doing his own.) When Valentino died in 1926, Monte went to Selznick International where, thirteen years later, he worked himself to death with the enormous makeup demands for Gone With the Wind (1939). In 1923 Perc established a blazing career at First National-Warner Bros. and, over twenty-seven years, initiated beauty trends and disguises including, in 1939, the faces of Charles Laughton’s grotesque Hunchback of Notre Dame (for RKO) and Bette Davis’s eyebrowless, almost bald, whitefaced Queen Elizabeth. In the early 1920s he blended Stein Pink greasepaint with eye shadow, preceding Factor’s Panchromatic. Ern, at RKO from 1929 to 1931 and then at Fox from 1935, was adept at finding the right look for stars of the 1930s. Wally headed Paramount makeup from 1926, where he created, among others, Fredric March’s gruesome transformation in Dr. Jekyll and Mr. Hyde (1931). Frank followed him there. Bud led Universal’s makeup department for twenty-three years, specializing in rubber prosthetics and body suits such as the one used in Creature from the Black Lagoon (1954). Together they built the House of Westmore salon, which served stars and public alike.
Later generations have continued the name, including Bud’s sons, Michael and Marvin Westmore, who began in television and have excelled in unusual makeup, such as in Blade Runner (1982).

MGM was the only studio that the Westmores did not rule. Cecil Holland (1887–1973) became its first makeup head in 1925 and remained there until the 1950s. Originally an English actor known as “The Man of a Thousand Faces” before Lon Chaney (1883–1930) inherited the title, his makeup abilities were pioneering on films such as Grand Hotel (1932) and The Good Earth (1937). Jack Dawn (1892–1961), who created makeup for The Wizard of Oz (1939), ran the department from the 1940s, by which time it was so huge that over a thousand actors could be made up in one hour.

Lon Chaney did his own makeup for Phantom of the Opera (Rupert Julian, 1925).
William Tuttle succeeded him and ran the department for twenty years. Like Holland, Chaney was another actor with supernal makeup skills whose horror and crime films became classics, notably for Chaney’s menacing but realistically based disguises. He always created his own makeup, working with the materials of his day—greasepaint, putty, plasto (mortician’s wax), fish skin, gutta percha (natural resin), collodion (liquid elastic), and crepe hair—and conjured characters unrivalled in their horrifying effect, including his gaunt, pig-nosed, black-eyed Phantom for Phantom of the Opera (1925) and his Hunchback in The Hunchback of Notre Dame (1923), for which he constructed agonizingly heavy makeup and body harnesses.

tmjs ch 3
ch3 digital
Digital cameras
See also: DSLR § History
Digital cameras differ from their analog predecessors primarily in that they do not use film, but capture and save photographs on digital memory cards or internal storage instead. Their low operating costs have relegated chemical cameras to niche markets. Digital cameras now include wireless communication capabilities (for example Wi-Fi or Bluetooth) to transfer, print or share photos, and are commonly found on mobile phones.

Early development
The concept of digitizing images on scanners, and the concept of digitizing video signals, predate the concept of making still pictures by digitizing signals from an array of discrete sensor elements. Early spy satellites used the extremely complex and expensive method of de-orbiting and airborne retrieval of film canisters. The technology was then pushed to skip these steps through in-satellite developing and electronic scanning of the film for direct transmission to the ground. The amount of film carried was still a major limitation, and this was overcome, and the process greatly simplified, by the push to develop an electronic image-capturing array that could be used instead of film. The first electronic imaging satellite was the KH-11, launched by the NRO in late 1976. It had a charge-coupled device (CCD) array with a resolution of 800 × 800 pixels (0.64 megapixels).[13]

At Philips Labs in New York, Edward Stupp, Pieter Cath and Zsolt Szilagyi filed for a patent on “All Solid State Radiation Imagers” on 6 September 1968 and constructed a flat-screen target for receiving and storing an optical image on a matrix composed of an array of photodiodes, each connected to a capacitor to form an array of two-terminal devices connected in rows and columns. Their US patent was granted on 10 November 1970.[14] Texas Instruments engineer Willis Adcock designed a filmless (though not digital) camera and applied for a patent in 1972, but it is not known whether it was ever built.[15]

The first recorded attempt at building a digital camera was in 1975 by Steven Sasson, an engineer at Eastman Kodak.[16][17] It used the then-new solid-state CCD image sensor chips developed by Fairchild Semiconductor in 1973.[18] The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a compact cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype was a technical exercise, not intended for production.
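As a quick sanity check on the resolution figures above, a megapixel count is just rows × columns divided by one million. The following throwaway Python snippet (purely illustrative, not period software) confirms both numbers:

```python
# Megapixels = rows * columns / 1,000,000
kh11 = 800 * 800 / 1_000_000   # KH-11 CCD array -> 0.64
sasson = 10_000 / 1_000_000    # Sasson's 1975 Kodak prototype -> 0.01
print(kh11, sasson)            # prints: 0.64 0.01
```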

Development of digital photography
Main article: Digital photography
In 1957, a team led by Russell A. Kirsch at the National Institute of Standards and Technology developed a binary digital version of an existing technology, the wirephoto drum scanner, so that alphanumeric characters, diagrams, photographs and other graphics could be transferred into digital computer memory. One of the first photographs scanned was a picture of Kirsch’s infant son Walden. The resolution was 176×176 pixels with only one bit per pixel, i.e., stark black and white with no intermediate gray tones, but by combining multiple scans of the photograph done with different black-white threshold settings, grayscale information could also be acquired.[28]
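That multi-threshold trick is easy to demonstrate. Below is a minimal sketch in Python with NumPy (my choice of tools; this is not the original 1957 hardware or code) showing how several one-bit scans taken at different black-white cutoffs can be summed into a grayscale image:

```python
import numpy as np

def combine_binary_scans(image, thresholds):
    """Simulate Kirsch-style grayscale recovery: threshold the same
    picture at several levels, then sum the one-bit results."""
    # Each scan is pure black/white: 1 where the pixel is brighter
    # than the cutoff, 0 otherwise -- one bit per pixel, as in 1957.
    scans = [(image > t).astype(np.uint8) for t in thresholds]
    # A pixel that survives more cutoffs is brighter, so summing the
    # scans yields len(thresholds) + 1 distinct gray levels.
    return sum(scans)

# Tiny demo on a synthetic 176x176 gradient (the scanner's resolution).
img = np.tile(np.linspace(0.0, 1.0, 176), (176, 1))
gray = combine_binary_scans(img, [0.2, 0.4, 0.6, 0.8])
print(np.unique(gray))  # prints: [0 1 2 3 4] -- five gray levels
```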

The charge-coupled device (CCD) is the image-capturing optoelectronic component in first-generation digital cameras. It was invented in 1969 by Willard Boyle and George E. Smith at AT&T Bell Labs as a memory device. The lab was working on the Picturephone and on the development of semiconductor bubble memory; merging these two initiatives, Boyle and Smith conceived the design of what they termed “Charge ‘Bubble’ Devices”. The essence of the design was the ability to transfer charge along the surface of a semiconductor. It was Dr. Michael Tompsett of Bell Labs, however, who discovered that the CCD could be used as an imaging sensor. The CCD has increasingly been replaced by the active pixel sensor (APS), commonly used in cell phone cameras.
Analog electronic cameras

Sony Mavica, 1981
Main article: Still video camera
Handheld electronic cameras, in the sense of a device meant to be carried and used like a handheld film camera, appeared in 1981 with the demonstration of the Sony Mavica (Magnetic Video Camera), not to be confused with the later Sony cameras that also bore the Mavica name. This was an analog camera, in that it recorded pixel signals continuously, as videotape machines did, without converting them to discrete levels; it recorded television-like signals to a 2 × 2 inch “video floppy”.[19] In essence it was a video movie camera that recorded single frames: 50 per disk in field mode and 25 per disk in frame mode. The image quality was considered equal to that of then-current televisions.

Canon RC-701, 1986
Analog electronic cameras do not appear to have reached the market until 1986 with the Canon RC-701. Canon demonstrated a prototype of this model at the 1984 Summer Olympics, printing the images in the Yomiuri Shinbun, a Japanese newspaper. In the United States, the first publication to use these cameras for real reportage was USA Today, in its coverage of World Series baseball. Several factors held back the widespread adoption of analog cameras: the cost (upwards of $20,000), poor image quality compared to film, and the lack of quality, affordable printers. Capturing and printing an image originally required access to equipment such as a frame grabber, which was beyond the reach of the average consumer. The “video floppy” disks later had several reader devices available for viewing on a screen, but were never standardized as a computer drive.

The early adopters tended to be in the news media, where the cost was negated by the utility and the ability to transmit images by telephone lines. The poor image quality was offset by the low resolution of newspaper graphics. This capability to transmit images without a satellite link was useful during the Tiananmen Square protests of 1989 and the first Gulf War in 1991.

US government agencies also took a strong interest in the still video concept, notably the US Navy for use as a real-time air-to-sea surveillance system.

The first analog electronic camera marketed to consumers may have been the Casio VS-101 in 1987. A notable analog camera produced the same year was the Nikon QV-1000C, designed as a press camera and not offered for sale to general users; it sold only a few hundred units. It recorded images in greyscale, and the quality in newspaper print was equal to that of film cameras. In appearance it closely resembled a modern digital single-lens reflex camera. Images were stored on video floppy disks.

Silicon Film, a proposed digital sensor cartridge that would allow 35 mm film cameras to take digital photographs without modification, was announced in late 1998. Silicon Film was to work like a roll of 35 mm film, with a 1.3 megapixel sensor behind the lens and a battery and storage unit fitting in the film holder in the camera. The product, which was never released, became increasingly obsolete due to improvements in digital camera technology and affordability. Silicon Film’s parent company filed for bankruptcy in 2001.[20]

Arrival of true digital cameras

The first portable digital SLR camera, introduced by Minolta in 1995.

Nikon D1, 1999
By the late 1980s, the technology required to produce truly commercial digital cameras existed. The first true portable digital camera that recorded images as a computerized file was likely the Fuji DS-1P of 1988, which recorded to a 2 MB SRAM memory card that used a battery to keep the data in memory. This camera was never marketed to the public.

The first digital camera of any kind ever sold commercially was possibly the MegaVision Tessera in 1987,[21] though little documentation of its sale is known. The first portable digital camera actually marketed commercially was sold in December 1989 in Japan: the DS-X by Fuji.[22] The first commercially available portable digital camera in the United States was the Dycam Model 1, first shipped in November 1990.[23] It was originally a commercial failure because it was black and white, low in resolution, and cost nearly $1,000 (about $2,000 in 2014 dollars).[24] It later saw modest success when it was re-sold as the Logitech Fotoman in 1992. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for download.[25][26][27]

In 1991, Kodak brought to market the Kodak DCS (Kodak Digital Camera System), the beginning of a long line of professional Kodak DCS SLR cameras that were based in part on film bodies, often Nikons. It used a 1.3 megapixel sensor, had a bulky external digital storage system, and was priced at $13,000. When the Kodak DCS-200 arrived, the original Kodak DCS was retroactively dubbed the DCS-100.

The move to digital formats was helped by the formation of the first JPEG and MPEG standards in 1988, which allowed image and video files to be compressed for storage. The first consumer camera with a liquid crystal display on the back was the Casio QV-10, developed by a team led by Hiroyuki Suetaka in 1995. The first camera to use CompactFlash was the Kodak DC-25 in 1996. The first camera that offered the ability to record video clips may have been the Ricoh RDC-1 in 1995.
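To see why those compression standards mattered for storage, here is a small sketch using Python with the Pillow imaging library (my choice of tools, unrelated to the cameras above) that compares an uncompressed RGB frame with its JPEG encoding:

```python
from io import BytesIO
from PIL import Image  # pip install Pillow

# A synthetic 640x480 RGB gradient stands in for a photograph.
img = Image.new("RGB", (640, 480))
img.putdata([(x % 256, y % 256, 128) for y in range(480) for x in range(640)])

raw_bytes = 640 * 480 * 3  # uncompressed 24-bit RGB

buf = BytesIO()
img.save(buf, format="JPEG", quality=85)  # lossy JPEG compression
print(f"raw: {raw_bytes} bytes, JPEG: {buf.tell()} bytes")
```

On a smooth gradient like this the JPEG comes out many times smaller than the raw frame, which is exactly the property that made storing photographs on small memory cards practical.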

In 1995 Minolta introduced the RD-175, which was based on the Minolta 500si SLR with a beam splitter and three independent CCDs; the combination delivered 1.75 million pixels. The benefit of using an SLR base was the ability to use any existing Minolta AF-mount lens. 1999 saw the introduction of the Nikon D1, a 2.74 megapixel camera that was the first digital SLR developed entirely from the ground up by a major manufacturer; at under $6,000 at introduction, it was affordable for professional photographers and high-end consumers. This camera also used Nikon F-mount lenses, which meant film photographers could use many of the same lenses they already owned.

Digital camera sales continued to flourish, driven by technology advances. The digital market segmented into different categories: compact digital still cameras, bridge cameras, mirrorless compacts and digital SLRs. One of the major technology advances was the development of CMOS sensors, which helped drive sensor costs low enough to enable the widespread adoption of camera phones.

1973 – Fairchild Semiconductor releases the first large image-capturing CCD chip: 100 rows and 100 columns.[29]
1975 – Bryce Bayer of Kodak develops the Bayer filter mosaic pattern for CCD color image sensors (see the sketch after this list).
1986 – Kodak scientists develop the world’s first megapixel sensor.
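For the curious, the Bayer mosaic mentioned in the 1975 entry is just a repeating 2×2 tile with two green samples for every red and blue one, since the eye is most sensitive to green. A minimal illustrative sketch in Python with NumPy:

```python
import numpy as np

def bayer_mosaic(rows, cols):
    """Lay out the classic RGGB Bayer pattern: each 2x2 block holds
    one red, two green and one blue sample."""
    tile = np.array([["R", "G"],
                     ["G", "B"]])
    return np.tile(tile, (rows // 2, cols // 2))

print(bayer_mosaic(4, 4))  # a 4x4 corner of the sensor's color layout
```
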
The web has been a popular medium for storing and sharing photos ever since the first photograph was published on the web by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today popular sites such as Flickr, Picasa, Instagram and PhotoBucket are used by millions of people to share their pictures.

Perfect, BackSeams…”HowTo”

IMG_20140927_045303

Seamed: Stockings manufactured in the old Full-Fashioned manner with a seam running up the back of the leg. In the past they were manufactured by cutting the fabric and then sewing it together. Today stockings are generally fully knitted and a fake or mock seam is added up the back for a particular fashion look. Some brands also produce seamed hold-ups.
images-21

Hosiery, also referred to as legwear, describes garments worn directly on the feet and legs. The term originated as the collective term for products of which a maker or seller is termed a hosier; those products are also known generically as hose. The term is also used for all types of knitted fabric, and its thickness and weight are defined in terms of denier or opacity. Lower denier measurements of 5 to 15 describe hose that are sheer in appearance, whereas styles of 40 and above are dense, with little to no light able to come through on 100 denier items.
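To put numbers on denier: by the standard textile definition, denier is the weight in grams of 9,000 metres of yarn, so mass scales linearly with length. A tiny illustrative Python helper (the function is mine, not an industry tool):

```python
def yarn_grams(denier, metres):
    """Denier is the weight in grams of 9,000 metres of yarn,
    so a length of yarn weighs denier * metres / 9000 grams."""
    return denier * metres / 9000.0

# 9,000 m of sheer 15-denier yarn weighs 15 g; the same length of
# opaque 100-denier yarn weighs 100 g -- hence how much more light
# the 15-denier stocking lets through.
print(yarn_grams(15, 9000))   # prints: 15.0
print(yarn_grams(100, 9000))  # prints: 100.0
```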

The first references to hosiery can be found in the works of Hesiod, where Romans are said to have used strips of leather or cloth to cover their lower body parts. Even the Egyptians are speculated to have used hosiery, as socks have been found in certain tombs.

images-22
Before the 1920s, women’s stockings, if worn, were worn for warmth. In the 1920s, as hemlines of women’s dresses rose, women began to wear stockings to cover the exposed legs. These stockings were sheer, first made of silk or rayon (then known as “artificial silk”), and after 1940 of nylon.
images-32

images-33

images-34

images-35

Paint-on Hosiery During the War Years

A back “seam” drawn with an eyebrow pencil topped off the resourceful fashion effect
So it’s Saturday night in 1941, and you want to wear stockings with your cocktail dress, but the new wonder material nylon has been rationed for the war effort and has disappeared from department store shelves. What do you do in such times of patriotic privation? You get resourceful: cover your legs with a layer of nude-colored makeup and line the back of each leg with a trompe l’oeil seam.

Last week, in the first post from the Stocking Series, we heard about the huge reception of nylon hosiery. On May 16, 1940, officially called “Nylon Day,” four million pairs of nylons landed in stores and sold out within two days! But only a year later, the revolutionary product became scarce when the World War II economy directed all nylon into manufacturing parachutes, rope and netting.
Having trouble with your seam? No problem! This contraption, made from a screwdriver handle, a bicycle leg clip and an ordinary eyebrow pencil, would do the trick!

images-36

images-38
images-42


images-52

 

The Mary Jane Style HowTo-Ep#18: Might seem silly, but the Topic is June 13, 1934


YouTube Channel please subscribe !!!

The Best preparation of The Future is knowledge of The Past

Topic is June 13, 1934: the Production Code, followed by any filmmaker who wanted theatrical distribution = censorship. Is there a separation of church and state? What was artistic freedom like from 1934 to 1968? At the end of the video I added a little funny impression of, not all, but a sample of a possible working environment very different from today’s.
For the last decade or more there has been minimal censorship, at least here on the internet. I chose this topic due to questions and concerns about how the internet is going to be censored or monitored… this video is simply the reading of The …
The Motion Picture Production Code was the set of industry moral censorship guidelines that governed the production of most United States motion pictures released by major studios from 1930 to 1968.
It is also popularly known as the Hays Code, after Hollywood’s chief censor of the time, Will H. Hays. The Motion Picture Producers and Distributors of America (MPPDA), which later became the Motion Picture Association of America (MPAA), adopted the code in 1930, began enforcing it in 1934, and abandoned it in 1968 in favor of the subsequent MPAA film rating system.
The Production Code spelled out what was acceptable and what was unacceptable content for motion pictures produced for a public audience in the United States. The office enforcing it was popularly called the Hays Office in reference to Hays, inaccurately so after 1934 when Joseph Breen took over from Hays, creating the Breen Office, which was far more rigid in censoring films than Hays had been.
Where did this idea come from? In 1922, after several risqué films and a series of off-screen scandals involving Hollywood stars, the studios enlisted Presbyterian elder Will H. Hays to rehabilitate Hollywood’s image. Hollywood in the 1920s was expected to be somewhat corrupt, and many felt the movie industry had always been morally questionable. Political pressure was increasing, with legislators in 37 states introducing almost one hundred movie censorship bills in 1921. Hays was paid the then-lavish sum of $100,000 a year. Hays, Postmaster General under Warren G. Harding and former head of the Republican National Committee, served for 25 years as president of the Motion Picture Producers and Distributors of America (MPPDA), where he “defended the industry from attacks, recited soothing nostrums, and negotiated treaties to cease hostilities.”
MPPDA2b
The move mimicked the decision Major League Baseball had made in hiring judge Kennesaw Mountain Landis as League Commissioner the previous year to quell questions about the integrity of baseball in the wake of the 1919 World Series gambling scandal; The New York Times even called Hays the “screen Landis”.
In 1924, Hays introduced a set of recommendations dubbed “The Formula” which the studios were advised to heed, and asked filmmakers to describe to his office the plots of pictures they were planning on making. The Supreme Court had already decided unanimously in 1915 in Mutual Film Corporation v. Industrial Commission of Ohio that free speech did not extend to motion pictures, and while there had been token attempts to clean up the movies before—such as when the studios formed the National Association of the Motion Picture Industry (NAMPI) in 1916—little had come of the efforts.

New York became the first state to take advantage of the Supreme Court’s decision by instituting a censorship board in 1921. Virginia followed suit the following year, and eight individual states had a board by the advent of sound film, but many of these were ineffectual. By the 1920s, the New York stage—a frequent source of subsequent screen material—had topless shows, performances filled with curse words, mature subject matter, and sexually suggestive dialogue. Early in the sound conversion process, it became apparent that what might be acceptable in New York would not be so in Kansas.

In 1927, Hays suggested to studio executives that they form a committee to discuss film censorship. Irving G. Thalberg of Metro-Goldwyn-Mayer (MGM), Sol Wurtzel of Fox, and E. H. Allen of Paramount responded by collaborating on a list they called the “Don’ts and Be Carefuls”, which was based on items that were challenged by local censor boards. This list consisted of eleven subjects best avoided and twenty-six to be handled very carefully. The list was approved by the Federal Trade Commission (FTC), and Hays created the Studio Relations Committee (SRC) to oversee its implementation. However, there was still no way to enforce these tenets. The controversy surrounding film standards came to a head in 1929.

In 1929, the lay Catholic Martin Quigley (editor of the prominent trade paper Motion Picture Herald) and the Jesuit priest Father Daniel A. Lord created a code of standards and submitted it to the studios. Lord was particularly concerned with the effects of sound film on children, whom he considered especially susceptible to its allure. In February 1930, several studio heads—including Irving Thalberg of Metro-Goldwyn-Mayer (MGM)—met with Lord and Quigley. After some revisions, they agreed to the stipulations of the Code. One of the main motivating factors in adopting the Code was to avoid direct government intervention. It was the responsibility of the SRC (headed by Colonel Jason S. Joy, a former American Red Cross Executive Secretary) to supervise film production and advise the studios when changes or cuts were required. On March 31, the MPPDA agreed that it would abide by the Code.

imagesBHVPW0M5 IMG_5487
imagesUGO886ED
The code was divided into two parts. The first was a set of “general principles” which mostly concerned morality.
The second was a set of “particular applications”, an exacting list of items that could not be depicted. Some restrictions, such as the ban on homosexuality or on the use of specific curse words, were never directly mentioned but were assumed to be understood without clear demarcation. Depiction of miscegenation (i.e. marital or sexual relations between different races) was forbidden. The Code also stated that the notion of an “adults-only policy” would be a dubious, ineffective strategy that would be difficult to enforce. However, it did allow that “maturer minds may easily understand and accept without harm subject matter in plots which does younger people positive harm.” If children were supervised and the events implied elliptically, the code allowed “the possibility of a cinematically inspired thought crime.”

The production code sought not only to determine what could be portrayed on screen but also to promote traditional values. Sexual relations outside of marriage—which were forbidden from being portrayed as attractive or beautiful—were to be presented in a way that would not arouse passion or make them seem permissible.
All criminal action had to be punished, and neither the crime nor the criminal could elicit sympathy from the audience, or the audience must at least be aware that such behavior is wrong, usually through “compensating moral value”.
Authority figures had to be treated with respect, and the clergy could not be portrayed as comic characters or villains. Under some circumstances, politicians, police officers, and judges could be villains, as long as it was clear that those individuals portrayed as villains were the exceptions to the rule.

The entire document was written with Catholic undertones and stated that art must be handled carefully because it could be “morally evil in its effects” and because its “deep moral significance” was unquestionable. It was initially decided to keep the Catholic influence on the Code secret. A recurring theme was “that throughout, the audience feels sure that evil is wrong and good is right”. The Code also contained an addendum commonly referred to as the Advertising Code which regulated advertising copy and imagery.
The first film the office reviewed, The Blue Angel, was passed by Joy with no revisions but was considered indecent by a California censor. Although there were several instances where Joy negotiated cuts from films, and there were indeed definite—albeit loose—constraints, a significant amount of lurid material made it to the screen. Joy had to review 500 films a year with a small staff and little power. He was more willing to work with the studios, and his creative writing skills led to his being hired at Fox. His successor, Dr. James Wingate, on the other hand, struggled to keep up with the flood of scripts coming in, to the point where Warner Bros.’ head of production Darryl Zanuck wrote him a letter imploring him to pick up the pace.
In 1930, the Hays office did not have the authority to order studios to remove material from a film, and instead worked by reasoning and sometimes pleading with them. Complicating matters, the appeals process ultimately put the responsibility for making the final decision in the hands of the studios.

One factor in ignoring the code was the fact that some found such censorship prudish, due to the libertine social attitudes of the 1920s and early 1930s. This was a period in which the Victorian era was sometimes ridiculed as being naïve and backward. When the Code was announced, liberal periodical The Nation attacked it.

The publication stated that if crime were never to be presented in a sympathetic light, then, taken literally, “law” and “justice” would become one and the same; events such as the Boston Tea Party could not be portrayed. And if the clergy must always be presented in a positive way, then hypocrisy could not be dealt with either. The Outlook agreed and, unlike Variety, predicted from the beginning that the Code would be difficult to enforce. The Great Depression of the 1930s led many studios to seek income in any way possible. Since films containing racy and violent content resulted in high ticket sales, it seemed reasonable to continue producing such films. Soon, the flouting of the code became an open secret. In 1931, the Hollywood Reporter mocked the code, and Variety followed suit in 1933. That same year, a noted screenwriter stated that “the Hays moral code is not even a joke any more; it’s just a memory.”

On June 13, 1934, an amendment to the Code was adopted which established the Production Code Administration (PCA) and required all films released on or after July 1, 1934, to obtain a certificate of approval before being released. The PCA had two offices—one in Hollywood and the other in New York City. The first film to receive an MPPDA seal of approval was The World Moves On. For more than thirty years, virtually all motion pictures produced in the United States adhered to the code. The Production Code was not created or enforced by federal, state, or city government; the Hollywood studios adopted the code in large part in the hopes of avoiding government censorship, preferring self-regulation to government regulation. The enforcement of the Production Code led to the dissolution of many local censorship boards.

 

y untitled sec 1934

Hollywood worked within the confines of the Production Code until the late 1950s, when the movies were faced with very serious competitive threats. The first threat came from a new technology, television, which did not require Americans to leave their houses to watch moving pictures. Hollywood needed to offer the public something it could not get on television, which was itself under an even more restrictive censorship code.

In addition to the threat of television, there was also increasing competition from foreign films, such as Vittorio De Sica’s Bicycle Thieves (1948), the Swedish film One Summer of Happiness (1951), and Ingmar Bergman’s Summer with Monika (1953). Vertical integration in the movie industry had been found to violate anti-trust laws, and studios had been forced to give up ownership of theatres by the Supreme Court in United States v. Paramount Pictures, Inc. (1948). The studios had no way to keep foreign films out, and foreign films were not bound by the Production Code. (For De Sica’s film, there was a censorship controversy when the MPAA demanded that a scene in which the lead characters talk to the prostitutes of a brothel be removed, despite the fact that there was no sexual or provocative activity.) Some British films, such as Victim (1961), A Taste of Honey (1961), and The Leather Boys (1963), challenged traditional gender roles and openly confronted the prejudices against homosexuals, all in clear violation of the Hollywood Production Code. In keeping with the changes in society, sexual content that would previously have been banned by the Code was being retained. The anti-trust rulings also helped pave the way for independent art houses that would show films created by people such as Andy Warhol, who worked outside the studio system.

In 1952, in the case of Joseph Burstyn, Inc. v. Wilson, the U.S. Supreme Court unanimously overruled its 1915 decision (Mutual Film Corporation v. Industrial Commission of Ohio) and held that motion pictures were entitled to First Amendment protection, so that the New York State Board of Regents could not ban “The Miracle”, a short film that was one half of L’Amore (1948), an anthology film directed by Roberto Rossellini. Film distributor Joseph Burstyn released the film in the U.S. in 1950, and the case became known as the “Miracle Decision” due to its connection to Rossellini’s film. That reduced the threat of government regulation, which had formerly been cited as justification for the Production Code, and the PCA’s powers over the Hollywood industry were greatly reduced.

By the 1950s, American culture also began to change. A boycott by the National Legion of Decency no longer guaranteed a film’s commercial failure, and several aspects of the code had slowly lost their taboo. In 1956, areas of the code were rewritten to accept subjects such as miscegenation, adultery, and prostitution. For example, the remake of a pre-Code film dealing with prostitution, Anna Christie, was cancelled by MGM twice, in 1940 and in 1946, as the character of Anna was not allowed to be portrayed as a prostitute. By 1962, such subject matter was acceptable and the original film was given a seal of approval.

By the late 1950s, increasingly explicit films began to appear, such as Anatomy of a Murder (1959), Suddenly, Last Summer (1959), and The Dark at the Top of the Stairs (1960). The MPAA reluctantly granted the seal of approval for these films, although not until certain cuts were made. Due to its themes, Billy Wilder’s Some Like It Hot (1959) was not granted a certificate of approval, but it still became a box office smash, and as a result it further weakened the authority of the Code.

At the forefront of contesting the Code was director Otto Preminger, whose films violated it repeatedly in the 1950s. His 1953 film The Moon Is Blue, about a young woman who tries to play two suitors off against each other by claiming that she plans to keep her virginity until marriage, was released without a certificate of approval. He later made The Man with the Golden Arm (1955), which portrayed the prohibited subject of drug abuse, and Anatomy of a Murder (1959), which dealt with murder and rape. Like Some Like It Hot, Preminger’s films were direct assaults on the authority of the Production Code, and their success hastened its abandonment. In the early 1960s, films began to deal with adult subjects and sexual matters that had not been seen in Hollywood films since the early 1930s. The MPAA again reluctantly granted the seal of approval for these films, although not until certain cuts were made.

In 1964, the Holocaust film The Pawnbroker, directed by Sidney Lumet and starring Rod Steiger, was initially rejected because of two scenes in which the actresses Linda Geiser and Thelma Oliver fully expose their breasts, as well as a sex scene between Oliver and Jaime Sánchez described as “unacceptably sex suggestive and lustful”. Despite the rejection, the film’s producers arranged for Allied Artists to release the film without the Production Code seal, with the New York censors licensing the film without the cuts demanded by Code administrators. The producers appealed the rejection to the Motion Picture Association of America. On a 6–3 vote, the MPAA granted the film an exception, conditional on “reduction in the length of the scenes which the Production Code Administration found unapprovable”. The requested reductions of nudity were minimal; the outcome was viewed in the media as a victory for the film’s producers.

The Pawnbroker was the first film featuring bare breasts to receive Production Code approval. The exception to the code was granted as a “special and unique case” and was described by The New York Times at the time as “an unprecedented move that will not, however, set a precedent”. However, in Pictures at a Revolution, Mark Harris’s 2008 study of films of that era, Harris wrote that the MPAA approval was “the first of a series of injuries to the Production Code that would prove fatal within three years.”

In 1966, Warner Bros. released Who’s Afraid of Virginia Woolf?, the first film to feature the “Suggested for Mature Audiences” (SMA) label. When Jack Valenti became President of the MPAA in 1966, he was faced with censoring the film’s explicit language. Valenti negotiated a compromise: the word “screw” was removed, but other language remained, including the phrase “hump the hostess”. The film received Production Code approval despite the previously prohibited language.

That same year, the British-produced, American-financed film Blowup was denied Production Code approval. MGM released it anyway, the first instance of an MPAA member company distributing a film that did not have an approval certificate. That same year, the original and lengthy code was replaced by a list of eleven points. The points outlined that the boundaries of the new code would be current community standards and good taste. In addition, any film containing content deemed to be suitable for older audiences would feature the label SMA in its advertising. With the creation of this new label, the MPAA unofficially began classifying films.
By the late 1960s, enforcement had become impossible, and the Production Code was abandoned entirely. The MPAA began working on a rating system, under which film restrictions would be lessened. The MPAA film rating system went into effect on November 1, 1968, with four ratings: G for general audiences, M for mature content, R for restricted (under 17 not admitted without an adult), and X for sexually explicit content. By the end of 1968, Geoffrey Shurlock had stepped down from his post.[50][51] In 1969, the Swedish film I Am Curious (Yellow), directed by Vilgot Sjöman, was initially banned in the U.S. for its frank depiction of sexuality; the ban was overturned by the Supreme Court.

In 1970, because of confusion over the meaning of “mature audiences”, the M rating was changed to GP, and then in 1972 to the current PG, for “parental guidance suggested”. In 1984, in response to public complaints regarding the severity of horror elements in PG-rated titles such as Gremlins and Indiana Jones and the Temple of Doom, the PG-13 rating was created as a middle tier between PG and R. In 1990, the X rating was replaced by NC-17 (under 17 not admitted), partly because of the stigma associated with the X rating, and partly because the X rating was not trademarked by the MPAA; pornographic bookstores and theaters were using their own X and XXX symbols to market products.


tmjs e 18






TMJS HowTo Cabaret Ep#12

IMG_20131121_041210

Teal and blue lovely eye shadow. I recommend MAC pigment powder or a strong eye shadow for a teal-to-marine-to-aqua or royal blue look, mixed with a light green, plus a white iridescent eye shadow as the base. Big lashes: spike top lashes and spike bottom lashes, or take a top lash and cut it to ½ or ¾ length to place under your bottom lashes (if this is too much, liquid liner can paint lower-lash accents). Ruby lips and a classic beauty mark. See the photos for how you can best adapt your choice of hair, hat and wardrobe; this look is based on a burlesque performer.



Cabaret is a 1972 musical film, a classic I’d recommend watching, directed by Bob Fosse and starring Liza Minnelli, Michael York and Joel Grey. The film is set in Berlin during the Weimar Republic in 1931, under the ominous presence of the growing Nazi Party.
The film is loosely based on the 1966 Broadway musical Cabaret by Kander and Ebb, which was adapted from the novel The Berlin Stories (1939) by Christopher Isherwood and from I Am a Camera, a 1951 play adapted from the same book. Only a few numbers from the stage score were used for the film; Kander and Ebb wrote new ones to replace those that were discarded. In the traditional manner of musical theater, every significant character in the stage version of Cabaret sings to express emotion and advance the plot. In the film version, the musical numbers are entirely diegetic, taking place in the club, and only two of the film’s major characters (the Emcee and Sally) sing songs.

00-liza-minnelli-cabaret-throwback-thursday-230x220

04-liza-minnelli-cabaret-throwback-thursday-640x480

7442_2

cabaret1

cabaret2.preview

cabaret-3

Cabaret+Poster

Cabaret-wallpaper-cabaret-film-19901100-1024-768-e1361478502956

Liza+Minnelli+as+Sally+Bowles+in+her+Best+Actress+Oscar+winner+performance+in+Bob+fosse's+1972+directed+film+clasic+Cabaret,+Newsweek+cover

tumblr_lzmla3nEio1qbew2yo1_500

 

A book that covers this time period is Voluptuous Panic: The Erotic World of Weimar Berlin (racy and informative; yes, it is of an adult/sexual nature, but it is history).

1920s TheMaryJaneStyle



1379319419286

Episode #2 1920s Part 1

This episode I explore the Roaring ’20s, flapper girls, and silent films… Doing research, I learned that a silent film actress was noted as the original flapper girl [Read more...]