840 results for San Francisco Earthquake and Fire, Calif., 1906
Abstract:
"References": p. [209]-210.
Abstract:
The present study sought to understand the reasons that lead the surveyed companies in the automotive sector to manage tacit knowledge, mediated by knowledge management, in the area of product development management. The research questions this study aimed to answer were: How do the studied companies use tacit knowledge to become more efficient and effective in their activities and operations? How is tacit knowledge perceived within the organization by employees and managers? To answer these questions, two companies in the automotive sector were investigated, in the product development area. Authors such as Davenport and Prusak (1998), Nonaka and Takeuchi (1997), and Choo (2006) provide the theoretical basis that guides this research. The research covers two automotive companies in the ABC Paulista region, comparable in number of employees and size. The interviewees include employees and managers from project and product management areas. The methodology applied to the study was qualitative, using exploratory-descriptive research, with data collected through semi-structured interviews. The study investigated which practices are used for knowledge conversion, the facilitating and hindering factors for knowledge conversion, and the main contributions of applying practices and initiatives aimed at managing tacit knowledge, from the perspective of managers and employees. The study found that the companies studied are concerned with knowledge management, that there are various practices related to tacit knowledge, and that the ways in which this knowledge is disseminated differ. Some of the practices are specialization courses, brainstorming, lessons learned, and informal conversations. The facilitating factors include the exchange of information between peers, weekly meetings, and multidisciplinary/multifunctional teams. The hindering factors include behavioral issues, the accumulation of duties, and the time available to share information.
Abstract:
The present study has been carried out with the following objectives: i) to investigate the attributes of source parameters of local and regional earthquakes; ii) to estimate, as accurately as possible, M0, fc, Δσ and their standard errors, in order to infer their relationship with source size; iii) to quantify high-frequency earthquake ground motion and to study the source scaling. This work is based on observational data of micro-, small, and moderate earthquakes for three selected seismic sequences, namely Parkfield (CA, USA), Maule (Chile), and Ferrara (Italy). For the Parkfield seismic sequence (CA), a data set of 757 repeating micro-earthquakes (0 ≤ MW ≤ 2) grouped into 42 clusters, collected using the borehole High Resolution Seismic Network (HRSN), has been analyzed and interpreted. We used the coda methodology to compute spectral ratios and obtain accurate values of fc, Δσ, and M0 for three target clusters (San Francisco, Los Angeles, and Hawaii) of our data. We also performed a general regression on peak ground velocities to obtain reliable seismic spectra for all earthquakes. For the Maule seismic sequence, a data set of 172 aftershocks of the 2010 MW 8.8 earthquake (3.7 ≤ MW ≤ 6.2), recorded by more than 100 temporary broadband stations, has been analyzed and interpreted to quantify high-frequency earthquake ground motion in this subduction zone. We completely calibrated the excitation and attenuation of ground motion in Central Chile. For the Ferrara sequence, we calculated moment tensor solutions for 20 events, from MW 5.63 (the largest main event, which occurred on May 20, 2012) down to MW 3.2, using a 1-D velocity model for the crust beneath the Pianura Padana built from all the geophysical and geological information available for the area. The PADANIA model allowed a numerical study of the characteristics of ground motion in the thick sediments of the flood plain.
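For context, a relation commonly used in studies of this kind to link the estimated corner frequency and seismic moment to stress drop assumes a circular (Brune-type) source; the abstract does not state which source model the thesis adopts, so the following is only an illustrative sketch:

```latex
% Illustrative Brune-type circular source relations (assumed, not taken from the thesis)
\Delta\sigma = \frac{7}{16}\,\frac{M_0}{r^3},
\qquad
r = \frac{k\,\beta}{f_c}
```

where r is the source radius, β the shear-wave velocity near the source, and k a model-dependent constant (about 0.37 for the Brune model), so that, for a given moment, a larger stress drop corresponds to a higher corner frequency.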
Abstract:
Sometimes published as: The Blue & gold.
Abstract:
Shipping list no.: 2004-0128-P.
Abstract:
"A biographic record of 111 prominent musicians who have visited San Francisco and performed here from the earliest days of the gold rush era to the time of the great fire, with additional lists of visiting celebrities (1906-1940), chamber music ensembles, bands, orchestras, and other music-making bodies."--2d prelim. leaf.
Abstract:
Design as seen from the designer's perspective is a series of amazing imaginative jumps or creative leaps. But design as seen by the design historian is a smooth progression or evolution of ideas that seem self-evident and inevitable after the event. Yet the next step is anything but obvious for the artist/creator/inventor/designer stuck at the point just before the creative leap. They know where they have come from and have a general sense of where they are going, but often do not have a precise target or goal. This is why it is misleading to talk of design as a problem-solving activity; it is better defined as a problem-finding activity. This has been very frustrating for those trying to assist the design process with computer-based problem-solving techniques. By the time the problem has been defined, it has been solved. Indeed, the solution is often the very definition of the problem. Design must be creative, or it is mere imitation. But since this crucial creative leap seems inevitable after the event, the question must arise: can we find some way of searching the space ahead? Of course there are serious problems of knowing what we are looking for and of the vastness of the search space. It may be better to discard altogether the term "searching" in the context of the design process. Conceptual analogies such as search, search spaces, and fitness landscapes aim to elucidate the design process. However, the vastness of the multidimensional spaces involved makes these analogies misguided, and they thereby further confound the issue. The term search becomes a misnomer, since it carries the connotation that it is possible to find what you are looking for. In such vast spaces the term search must be discarded. Thus, any attempt to search for the highest peak in the fitness landscape as an optimal solution is also meaningless. Furthermore, even the very existence of a fitness landscape is fallacious. Although alternatives in the same region of the vast space can be compared to one another, distant alternatives will stem from radically different roots and will therefore not be comparable in any straightforward manner (Janssen 2000). Nevertheless, we are still left with the tantalizing possibility that, if a creative idea seems inevitable after the event, the process might somehow be reversed. This may be as improbable as attempting to reverse time. A more helpful analogy is from nature, where it is generally assumed that the process of evolution is not long-term goal-directed or teleological. Dennett points out a common misunderstanding of Darwinism: the idea that evolution by natural selection is a procedure for producing human beings. Evolution can have produced humankind by an algorithmic process without its being true that evolution is an algorithm for producing us. If we were to wind the tape of life back and run this algorithm again, the likelihood of "us" being created again is infinitesimally small (Gould 1989; Dennett 1995). Nevertheless, Mother Nature has proved a remarkably successful, resourceful, and imaginative inventor, generating a constant flow of incredible new design ideas to fire our imagination. Hence the current interest in the potential of the evolutionary paradigm in design. These evolutionary methods are frequently based on techniques such as evolutionary algorithms, which are usually thought of as search algorithms.
It is necessary to abandon such connections with searching and see the evolutionary algorithm as a direct analogy with the evolutionary processes of nature. The process of natural selection can generate a wealth of alternative experiments, and the better ones survive. There is no one solution, there is no optimal solution, but there is continuous experiment. Nature is profligate with her prototyping and ruthless in her elimination of less successful experiments. Most importantly, nature has all the time in the world. As designers we cannot afford such profligate prototyping and ruthless experimentation, nor can we operate on the time scale of the natural design process. Instead we can use the computer to compress space and time and to perform virtual prototyping and evaluation before committing ourselves to actual prototypes. This is the hypothesis underlying the evolutionary paradigm in design (1992, 1995).
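As a rough illustration of the evolutionary paradigm described above (a minimal sketch in Python with hypothetical fitness, mutation, and population settings, not taken from any of the cited authors), the loop below generates a population of candidate "designs", evaluates them, eliminates the weaker ones, and keeps experimenting rather than terminating at a guaranteed optimum:

```python
import random

# Minimal generational evolutionary loop: a sketch of the paradigm discussed in the
# text, not a reproduction of any particular design system. The "design" is just a
# list of numeric parameters; fitness, mutation rate, and population size are
# illustrative assumptions.

def fitness(design):
    # Placeholder evaluation: reward designs whose parameters sum close to 10.
    return -abs(sum(design) - 10.0)

def mutate(design, rate=0.1):
    # Small random variation: a constant flow of new experiments.
    return [g + random.gauss(0, rate) for g in design]

def evolve(pop_size=20, genes=5, generations=100):
    population = [[random.uniform(0, 5) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]   # ruthless elimination of weaker experiments
        offspring = [mutate(random.choice(survivors)) for _ in range(pop_size - len(survivors))]
        population = survivors + offspring    # continuous experiment, no final answer
    return max(population, key=fitness)

if __name__ == "__main__":
    print(evolve())
```

The point of the sketch is the process, not the answer: the loop never claims to have found "the" optimum, only the best of the experiments run so far.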
Abstract:
The ubiquity of multimodality in hypermedia environments is undeniable. Bezemer and Kress (2008) have argued that writing has been displaced by image as the central mode for representation. Given the current technical affordances of digital technology and user-friendly interfaces that enable the ease of multimodal design, the conspicuous absence of images in certain domains of cyberspace deserves critical analysis. In this presentation, I examine the politics of discourses implicit within hypertextual spaces, drawing textual examples from a higher education website. I critically examine the role of writing and other modes of production used in what Fairclough (1993) refers to as discourses of marketisation in higher education, tracing four pervasive discourses of teaching and learning in the current economy: i) materialization, ii) personalization, iii) technologisation, and iv) commodification (Fairclough, 1999). Each of these arguments is supported by the critical analysis of multimodal texts. The first is a podcast highlighting the new architectonic features of a university learning space. The second is a podcast and transcript of a university Open Day interview with prospective students. The third is a time-lapse video showing the construction of a new science and engineering precinct. These three multimodal texts are contrasted with a final web-based text that exhibits a predominance of writing and the powerful absence or silencing of the image. I connect the weightiness of words and the function of monomodality in the commodification of discourses, its resistance to the multimodal affordances of web-based technologies, and how this is used to establish particular sets of subject positions and ideologies that readers are constrained to occupy. Applying principles of critical language study from theorists including Fairclough, Kress, Lemke, and others whose semiotic analysis of texts focuses on the connections between language, power, and ideology, I demonstrate how the denial of image and the privileging of written words in the multimodality of cyberspace is an ideological effect that accentuates the dominance of the institution.
Abstract:
Normative influences on road user behaviour have been well documented and include such things as personal, group, subjective and moral norms. Commonly, normative factors are examined within one cultural context, although a few examples of exploring the issue across cultures exist. Such examples add to our understanding of differences in perceptions of the normative factors that may exert influence on road users and can assist in determining whether successful road safety interventions in one location may be successful in another. Notably, the literature is relatively silent on such influences in countries experiencing rapidly escalating rates of motorization. China is one such country where new drivers are taking to the roads in unprecedented numbers and authorities are grappling with the associated challenges. This paper presents results from qualitative and quantitative research on self-reported driving speeds of car drivers and related issues in Australia and China. Focus group interviews and questionnaires conducted in each country examined normative factors that might influence driving in each cultural context. Qualitative findings indicated perceptions of community acceptance of speeding were present in both countries but appeared more widespread in China, yet quantitative results did not support this difference. Similarly, with regard to negative social feedback from speeding, qualitative findings suggested no embarrassment associated with speeding among Chinese participants and mixed results among Australian participants, yet quantitative results indicated greater embarrassment for Chinese drivers. This issue was also examined from the perspective of self-identity and findings were generally similar across both samples and appear related to whether it is important to be perceived as a skilled/safe driver by others. An interesting and important finding emerged with regard to how Chinese drivers may respond to questions about road safety issues if the answers might influence foreigners’ perceptions of China. In attempting to assess community norms associated with speeding, participants were asked to describe what they would tell a foreign visitor about the prevalence of speeding in China. Responses indicated that if asked by a foreigner, people may answer in a manner that portrayed China as a safe country (e.g., that drivers do not speed), irrespective of the actual situation. This ‘faking good for foreigners’ phenomenon highlights the importance of considering ‘face’ when conducting research in China – a concept absent from the road safety literature. An additional noteworthy finding that has been briefly described in the road safety literature is the importance and strength of the normative influence of social networks (guanxi) in China. The use of personal networks to assist in avoiding penalties for traffic violations was described by Chinese participants and is an area that could be addressed to strengthen the deterrent effect of traffic law enforcement. Overall, the findings suggest important considerations for developing and implementing road safety countermeasures in different cultural contexts.
Abstract:
This paper offers insights into the relationship between curriculum decision making, positive school climate, and academic achievement for same-sex attracted (SSA) students. The authors use critical discourse analysis to present a ‘conversation’ between six same-sex attracted young people, aged 14-19, and three pop-culture texts currently popular with both teachers and school-aged peers: The Hunger Games, Tomorrow When the War Began, and Neighbours. Analysis starts from the perspective that schools are empowered agents in the production of students’ sexualised identities and seeks to understand how textual choices function as active discourse in that production. Through this analysis, an argument is made for expanding notions of what it means to ‘attend to’ gender and sexuality through textual choice and critical pedagogy.
Abstract:
Knowledge of the distribution and biology of the ragfish, Icosteus aenigmaticus, an aberrant deepwater perciform of the North Pacific Ocean, has increased slowly since the first description of the species in the 1880s, which was based on specimens retrieved from a fishmonger's table in San Francisco, Calif. Because the ragfish is historically rare, subjectively unattractive in appearance, and of no commercial value, ichthyologists have studied it only from specimens caught and donated by fishermen or by the general public. Since 1958, I have accumulated catch records of >825 ragfish. Specimens were primarily from commercial fishermen and research personnel trawling for bottom and demersal species on the continental shelves of the eastern North Pacific Ocean, Gulf of Alaska, Bering Sea, and the western Pacific Ocean, as well as from gillnet fisheries for Pacific salmon, Oncorhynchus spp., in the north central Pacific Ocean. Available records came from four separate sources: 1) historical data based primarily on published and unpublished literature (1876–1990); 2) ragfish delivered fresh to Humboldt State University, or records available from the California Department of Fish and Game of ragfish caught in northern California and southern Oregon bottom trawl fisheries (1950–99); 3) incidental catches of ragfish observed and recorded by scientific observers of the commercial fisheries of the eastern Pacific Ocean, and catches in National Marine Fisheries Service trawl surveys studying these fisheries from 1976 to 1999; and 4) Japanese government research on nearshore fisheries of the northwestern Pacific Ocean (1950–99). Limited data on individual ragfish allowed mainly qualitative analysis, although some quantitative analysis could be made with ragfish data from northern California and southern Oregon. This paper includes a history of taxonomic and common names of the ragfish, types of fishing gear and other techniques recovering ragfish, a chronology of range extensions into the North Pacific and Bering Sea, reproductive biology of ragfish caught by trawl fisheries off northern California and southern Oregon, and topics dealing with early, juvenile, and adult life history, including age and growth, food habits, and ecology. Recommendations for future study are proposed, especially on the life history of juvenile ragfish (5–30 cm FL), which remains enigmatic.
Abstract:
Over the past decade, scientists have been called to participate more actively in public education and outreach (E&O). This is particularly true in fields of significant societal impact, such as earthquake science. Local earthquake risk culture plays a role in the way the public engages with educational efforts. In this article, we describe an adapted E&O program for earthquake science and risk. The program is tailored for a region of slow tectonic deformation, where large earthquakes are extreme events that occur with long return periods. The adapted program has two main goals: (1) to increase the awareness and preparedness of the population with respect to earthquakes and related risks (tsunami, liquefaction, fires, etc.), and (2) to increase the quality of earthquake science education, so as to attract talented students to the geosciences. Our integrated program relies on activities tuned for different population groups with different interests and abilities, namely young children, teenagers, young adults, and professionals.
Abstract:
Introduction

1.1 Occurrence of polycyclic aromatic hydrocarbons (PAHs) in the environment

Worldwide industrial and agricultural developments have released a large number of natural and synthetic hazardous compounds into the environment due to careless waste disposal, illegal waste dumping, and accidental spills. As a result, there are numerous sites in the world that require cleanup of soils and groundwater. Polycyclic aromatic hydrocarbons (PAHs) are one of the major groups of these contaminants (Da Silva et al., 2003). PAHs constitute a diverse class of organic compounds consisting of two or more aromatic rings with various structural configurations (Prabhu and Phale, 2003). Being derivatives of benzene, PAHs are thermodynamically stable. In addition, these chemicals tend to adhere to particle surfaces, such as soils, because of their low water solubility and strong hydrophobicity, and this results in greater persistence under natural conditions. This persistence, coupled with their potential carcinogenicity, makes PAHs problematic environmental contaminants (Cerniglia, 1992; Sutherland, 1992). PAHs are widely found in high concentrations at many industrial sites, particularly those associated with the petroleum, gas production, and wood preserving industries (Wilson and Jones, 1993).

1.2 Remediation technologies

Conventional techniques used for the remediation of soil polluted with organic contaminants include excavation of the contaminated soil and disposal to a landfill, or capping (containment) of the contaminated areas of a site. These methods have some drawbacks. The first method simply moves the contamination elsewhere and may create significant risks in the excavation, handling, and transport of hazardous material. Additionally, it is very difficult and increasingly expensive to find new landfill sites for the final disposal of the material. The cap and containment method is only an interim solution, since the contamination remains on site, requiring monitoring and maintenance of the isolation barriers long into the future, with all the associated costs and potential liability. A better approach than these traditional methods is to completely destroy the pollutants, if possible, or transform them into harmless substances. Some technologies that have been used are high-temperature incineration and various types of chemical decomposition (for example, base-catalyzed dechlorination and UV oxidation). However, these methods have significant disadvantages, principally their technological complexity, high cost, and lack of public acceptance. Bioremediation, in contrast, is a promising option for the complete removal and destruction of contaminants.

1.3 Bioremediation of PAH-contaminated soil and groundwater

Bioremediation is the use of living organisms, primarily microorganisms, to degrade or detoxify hazardous wastes into harmless substances such as carbon dioxide, water, and cell biomass. Most PAHs are biodegradable under natural conditions (Da Silva et al., 2003; Meysami and Baheri, 2003), and bioremediation for cleanup of PAH wastes has been extensively studied at both laboratory and commercial levels. It has been implemented at a number of contaminated sites, including the cleanup of the Exxon Valdez oil spill in Prince William Sound, Alaska, in 1989, the Mega Borg spill off the Texas coast in 1990, and the Burgan Oil Field, Kuwait, in 1994 (Purwaningsih, 2002). Different strategies for PAH bioremediation, such as in situ, ex situ, or on-site bioremediation, have been developed in recent years.
In situ bioremediation is a technique applied to soil and groundwater at the site, without removing the contaminated soil or groundwater, and is based on providing optimum conditions for microbiological contaminant breakdown. Ex situ bioremediation of PAHs, on the other hand, is a technique applied to soil and groundwater that has been removed from the site via excavation (soil) or pumping (water); hazardous contaminants are converted in controlled bioreactors into harmless compounds in an efficient manner.

1.4 Bioavailability of PAHs in the subsurface

Frequently, PAH contamination in the environment occurs as contaminants sorbed onto soil particles rather than as a free phase (NAPL, non-aqueous phase liquid). It is known that the biodegradation rate of most PAHs sorbed onto soil is far lower than rates measured in solution cultures of microorganisms with pure solid pollutants (Alexander and Scow, 1989; Hamaker, 1972). It is generally believed that only the fraction of PAHs dissolved in the solution can be metabolized by microorganisms in soil. The amount of contaminant that can be readily taken up and degraded by microorganisms is defined as bioavailability (Bosma et al., 1997; Maier, 2000). Two phenomena have been suggested to cause the low bioavailability of PAHs in soil (Danielsson, 2000). The first is strong adsorption of the contaminants to the soil constituents, which leads to very slow release rates of contaminants to the aqueous phase. Sorption is often well correlated with soil organic matter content (Means, 1980) and significantly reduces biodegradation (Manilal and Alexander, 1991). The second phenomenon is slow mass transfer of pollutants, such as pore diffusion in the soil aggregates or diffusion in the organic matter in the soil. The complex set of these physical, chemical, and biological processes is schematically illustrated in Figure 1. As shown in Figure 1, biodegradation processes take place in the soil solution, while diffusion processes occur in the narrow pores in and between soil aggregates (Danielsson, 2000). Seemingly contradictory studies can be found in the literature, indicating that the rate and final extent of metabolism may be either lower or higher for PAHs sorbed onto soil than for pure PAHs (Van Loosdrecht et al., 1990). These contrasting results demonstrate that the bioavailability of organic contaminants sorbed onto soil is far from being well understood. Besides bioavailability, there are several other factors influencing the rate and extent of biodegradation of PAHs in soil, including microbial population characteristics, physical and chemical properties of PAHs, and environmental factors (temperature, moisture, pH, degree of contamination).

Figure 1: Schematic diagram showing possible rate-limiting processes during bioremediation of hydrophobic organic contaminants in a contaminated soil-water system (not to scale) (Danielsson, 2000).

1.5 Increasing the bioavailability of PAHs in soil

Attempts to improve the biodegradation of PAHs in soil by increasing their bioavailability include the use of surfactants, solvents, or solubility enhancers. However, the introduction of a synthetic surfactant may result in the addition of one more pollutant (Wang and Brusseau, 1993). A study conducted by Mulder et al. showed that the introduction of hydroxypropyl-β-cyclodextrin (HPCD), a well-known PAH solubility enhancer, significantly increased the solubilization of PAHs although it did not improve the biodegradation rate of PAHs (Mulder et al., 1998), indicating that further research is required in order to develop a feasible and efficient remediation method. Enhancing the extent of PAH mass transfer from the soil phase to the liquid phase might prove an efficient and environmentally low-risk alternative for addressing the problem of slow PAH biodegradation in soil.
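To make the rate-limitation argument above concrete, the following is a minimal two-compartment sketch (hypothetical parameter values, not taken from the cited studies) in which PAH mass desorbs slowly from soil to the aqueous phase and is then biodegraded; when desorption is much slower than biodegradation, overall removal is governed by mass transfer, which is the bioavailability limitation described in section 1.4:

```python
import numpy as np

# Two-compartment sketch of rate-limited bioavailability (illustrative only):
# sorbed PAH mass S desorbs to the aqueous phase at first-order rate k_des, and
# dissolved mass C is biodegraded at first-order rate k_bio. All values are
# hypothetical and chosen only to show the qualitative behavior.

def simulate(k_des=0.01, k_bio=0.5, s0=100.0, c0=0.0, dt=0.1, t_end=500.0):
    """Return time series of sorbed (S) and dissolved (C) PAH mass."""
    steps = int(t_end / dt)
    S = np.empty(steps + 1)
    C = np.empty(steps + 1)
    S[0], C[0] = s0, c0
    for i in range(steps):
        desorbed = k_des * S[i] * dt   # slow mass transfer from soil to water
        degraded = k_bio * C[i] * dt   # microbial degradation in solution
        S[i + 1] = S[i] - desorbed
        C[i + 1] = C[i] + desorbed - degraded
    return S, C

S, C = simulate()
# With k_des << k_bio the dissolved pool stays small and overall removal is
# controlled by desorption, mirroring the bioavailability limitation discussed above.
print(f"Sorbed PAH remaining after 500 time units: {S[-1]:.1f} of 100.0")
```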
Abstract:
The intestinal ecosystem is formed by a complex, yet highly characteristic, microbial community. The parameters defining whether this community permits invasion of a new bacterial species are unclear. In particular, inhibition of enteropathogen infection by the gut microbiota (= colonization resistance) is poorly understood. To analyze the mechanisms of microbiota-mediated protection from Salmonella enterica-induced enterocolitis, we used a mouse infection model and large-scale high-throughput pyrosequencing. In contrast to conventional mice (CON), mice with a gut microbiota of low complexity (LCM) were highly susceptible to S. enterica colonization and enterocolitis. Colonization resistance was partially restored in LCM animals by co-housing with conventional mice for 21 days (LCM(con21)). 16S rRNA sequence analysis comparing LCM, LCM(con21), and CON gut microbiota revealed that gut microbiota complexity increased upon conventionalization and correlated with increased resistance to S. enterica infection. Comparative microbiota analysis of mice with varying degrees of colonization resistance allowed us to identify intestinal ecosystem characteristics associated with susceptibility to S. enterica infection. Moreover, this system enabled us to gain further insights into the general principles of gut ecosystem invasion by non-pathogenic, commensal bacteria. Mice harboring high commensal E. coli densities were more susceptible to S. enterica-induced gut inflammation. Similarly, mice with high titers of Lactobacilli were more efficiently colonized by a commensal Lactobacillus reuteri (RR) strain after oral inoculation. Upon examination of 16S rRNA sequence data from 9 CON mice, we found that closely related phylotypes generally display significantly correlated abundances (co-occurrence), more so than distantly related phylotypes. Thus, in essence, the presence of closely related species can increase the chance of invasion of newly incoming species into the gut ecosystem. We provide evidence that this principle might be of general validity for invasion of bacteria into preformed gut ecosystems. This might be of relevance for human enteropathogen infections as well as for the therapeutic use of probiotic commensal bacteria.
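As an illustration of the co-occurrence analysis described above (a minimal sketch with simulated stand-in data; the study's actual 16S rRNA pipeline and phylogenetic relatedness computation are not reproduced here), one can correlate the abundance profiles of all phylotype pairs across mice and compare closely related pairs with distantly related ones:

```python
import numpy as np
from scipy.stats import spearmanr

# Sketch of a co-occurrence analysis: abundances[i, j] is the relative abundance of
# phylotype j in mouse i (e.g. 9 CON mice); related[j, k] is True when phylotypes j
# and k are considered closely related. Both arrays below are random stand-ins.

def cooccurrence_by_relatedness(abundances, related):
    n_phylotypes = abundances.shape[1]
    close, distant = [], []
    for j in range(n_phylotypes):
        for k in range(j + 1, n_phylotypes):
            rho, _ = spearmanr(abundances[:, j], abundances[:, k])
            (close if related[j, k] else distant).append(rho)
    return np.mean(close), np.mean(distant)

rng = np.random.default_rng(0)
abundances = rng.dirichlet(np.ones(50), size=9)   # 9 mice x 50 phylotypes, stand-in data
related = rng.random((50, 50)) < 0.1              # stand-in relatedness flags
mean_close, mean_distant = cooccurrence_by_relatedness(abundances, related)
print(mean_close, mean_distant)
```

With real data, the reported finding corresponds to mean_close being significantly larger than mean_distant; with the random stand-in data above, the two values are expected to be similar.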