942 results for Phenomena
Abstract:
In order to access chiral gels, a series of salts derived from (1R,3S)-(+)-camphoric acid and various secondary amines was prepared based on a supramolecular synthon rationale. Of the seven salts prepared, two showed moderate gelation abilities. The gels were characterized by differential scanning calorimetry, table-top rheology, scanning electron microscopy, and single-crystal and powder X-ray diffraction. Structure-property correlation based on X-ray diffraction techniques remains inconclusive, indicating that some of the integral aspects of the gelation phenomenon require a better understanding.
Abstract:
In this thesis I study various quantum coherence phenomena and lay some of the foundations for a systematic coherence theory. So far, the approach to quantum coherence in science has been purely phenomenological. In my thesis I try to answer the question of what quantum coherence is and how it should be approached within the framework of physics, the metatheory of physics and the terminology related to them. It is worth noting that quantum coherence is a conserved quantity that can be exactly defined. I propose a way to define quantum coherence mathematically from the density matrix of the system. Degenerate quantum gases, i.e., Bose condensates and ultracold Fermi systems, form a good laboratory for studying coherence, since their entropy is small and their coherence is large, and thus they exhibit strong coherence phenomena. Concerning coherence phenomena in degenerate quantum gases, I concentrate mainly on the collective association of atoms into molecules, Rabi oscillations and decoherence. It appears that collective association and oscillations do not depend on the spin-statistics of the particles. Moreover, I study the logical features of decoherence in closed systems via a simple spin model. I argue that decoherence is a valid concept even in systems that may experience recoherence, i.e., Poincaré recurrences. Metatheoretically this is a remarkable result, since it justifies quantum cosmology: studying the whole universe (i.e., physical reality) purely quantum physically is meaningful and valid science, in which decoherence explains why the quantum physical universe appears very classical-like to cosmologists and other scientists. The study of the logical structure of closed systems also reveals that sufficiently complex closed (physical) systems obey a principle similar to Gödel's incompleteness theorem in logic.
According to this theorem, it is impossible to completely describe a closed system from within the system, and the inside and outside descriptions of the system can differ remarkably. Understanding this feature may make it possible to comprehend coarse-graining better and to define uniquely the mutual entanglement of quantum systems.
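The abstract states that coherence can be defined exactly from the density matrix but does not give the formula. As a purely illustrative sketch, one widely used measure, the l1-norm of coherence, sums the magnitudes of the off-diagonal density matrix elements; this is a standard textbook-style measure and not necessarily the definition proposed in the thesis:

```python
# Illustrative only: the l1-norm coherence of a density matrix,
# C(rho) = sum over i != j of |rho_ij|. This is one standard measure
# and not necessarily the definition proposed in the thesis.

def l1_coherence(rho):
    """Sum of magnitudes of the off-diagonal elements of a density matrix."""
    n = len(rho)
    return sum(abs(rho[i][j]) for i in range(n) for j in range(n) if i != j)

# Equal-superposition pure state (|0> + |1>)/sqrt(2): all rho_ij = 1/2.
plus = [[0.5, 0.5], [0.5, 0.5]]
# Maximally mixed state: no off-diagonal terms, hence no coherence.
mixed = [[0.5, 0.0], [0.0, 0.5]]

print(l1_coherence(plus))   # 1.0
print(l1_coherence(mixed))  # 0.0
```

Off-diagonal elements encode superposition, so the maximally mixed state scores zero while the equal-superposition pure state attains the qubit maximum.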
Analytical prediction of break-out noise from a reactive rectangular plenum with four flexible walls
Abstract:
This paper describes an analytical calculation of break-out noise from a rectangular plenum with four flexible walls, incorporating three-dimensional effects along with acoustical and structural wave coupling phenomena. Break-out noise from rectangular plenums is important, and the coupling between acoustic waves within the plenum and structural waves in the flexible plenum walls plays a critical role in the prediction of the transverse transmission loss. The first step in break-out noise prediction is to calculate the inside plenum pressure field and the normal flexible plenum wall vibration using an impedance-mobility approach, which results in a compact matrix formulation. In the impedance-mobility compact matrix (IMCM) approach, it is presumed that the coupled response can be described in terms of finite sets of the uncoupled acoustic subsystem and the structural subsystem. The flexible walls of the plenum are modeled as an unfolded plate to calculate the natural frequencies and mode shapes of the uncoupled structural subsystem. The second step is to calculate the radiated sound power from the flexible walls using the Kirchhoff-Helmholtz (KH) integral formulation. Analytical results are validated with finite element and boundary element (FEM-BEM) numerical models. (C) 2010 Acoustical Society of America. [DOI: 10.1121/1.3463801]
Abstract:
The discovery of graphene has aroused great interest in the properties and phenomena exhibited by two-dimensional inorganic materials, especially when they comprise only one, two or a few layers. Graphene-like MoS2 and WS2 have been prepared by chemical methods, and the materials have been characterized by electron microscopy, atomic force microscopy (AFM) and other methods. Boron nitride analogues of graphene have been obtained by a simple chemical procedure starting with boric acid and urea, and have been characterized by various techniques that include surface area measurements. A recently discovered layered material with the composition BCN, possessing a few layers and a large surface area, exhibits a large uptake of CO2.
Abstract:
Modern elementary particle physics is based on quantum field theories. Our current understanding is that, on the one hand, the smallest structures of matter and, on the other, the composition of the universe are based on quantum field theories, which present the observable phenomena by describing particles as vibrations of fields. The Standard Model of particle physics is a quantum field theory describing the electromagnetic, weak, and strong interactions in terms of a gauge field theory. However, it is believed that the Standard Model describes physics properly only up to a certain energy scale. This scale cannot be much larger than the so-called electroweak scale, i.e., the masses of the gauge fields W^+- and Z^0. Beyond this scale, the Standard Model has to be modified. In this dissertation, supersymmetric theories are used to tackle the problems of the Standard Model. For example, the quadratic divergences that plague the Higgs boson mass in the Standard Model cancel in supersymmetric theories. Experimental facts concerning the neutrino sector indicate that lepton number is violated in Nature. On the other hand, lepton-number-violating Majorana neutrino masses can induce sneutrino-antisneutrino oscillations in any supersymmetric model. In this dissertation, I present some viable signals for detecting sneutrino-antisneutrino oscillation at colliders. At the e-gamma collider (at the International Linear Collider), the number of electron-sneutrino-antisneutrino oscillation signal events is quite high, and the backgrounds are quite small. A similar study for the LHC shows that, even though there are several backgrounds, the sneutrino-antisneutrino oscillations can be detected. A useful asymmetry observable is introduced and studied. Usually, the oscillation probability formula in which the sneutrinos are produced at rest is used. However, here we study a general oscillation probability.
The Lorentz factor and the distance at which the measurement is made inside the detector can have effects, especially when the sneutrino decay width is very small. These effects are demonstrated for a certain scenario at the LHC.
Abstract:
Finland witnessed a surge in crime news reporting during the 1990s. At the same time, there was a significant rise in the levels of fear of crime reported by surveys. This research examines whether and how the two phenomena, news media and fear of violence, were associated with each other. The dissertation consists of five sub-studies and a summary article. The first sub-study is a review of crime reporting trends in Finland, in which I have reviewed prior research and used existing Finnish datasets on media contents and crime news media exposure. The second study examines the association between crime media consumption and fear of crime when personal and vicarious victimization experiences have been held constant. Apart from analyzing the impact of crime news consumption on fear, media effects on general social trust are analyzed in the third sub-study. In the fourth sub-study I have analyzed the contents of the Finnish Poliisi-TV programme and compared the consistency of the picture of violent crime between official data sources and the programme. In the fifth and final sub-study, the victim narratives of Poliisi-TV's violence news contents have been analyzed. The research provides a series of results which are unprecedented in Finland. First, it observes that, as in many other countries, the quantity of crime news supply has increased quite markedly in Finland. Second, it verifies that exposure to crime news is related to being worried about violent victimization and to avoidance behaviour. Third, it documents that exposure to TV crime reality programming is associated with reduced social trust among Finnish adolescents. Fourth, the analysis of Poliisi-TV shows that it transmits a distorted view of crime when contrasted with primary data sources on crime, but that this distortion is not as large as could be expected from international research findings and epochal theories of sociology.
Fifth, the portrayals of violence victims in Poliisi-TV do not fit the traditional ideal types of victims that are usually seen to dominate crime media. The fact that the victims of violence in Poliisi-TV are ordinary people reflects a wider development in the changing significance of the crime victim in Finland. The research concludes that although the media most likely did have an effect on the rising public fears in the 1990s, the mechanism was not as straightforward as has often been claimed. It is likely that there are other factors in the fear-media equation that affect both fear levels and crime reporting, and that these factors are interactive in nature. Finally, the research calls for a re-orientation of media criminology and suggests more emphasis on the positive implications of crime in the media. Keywords: crime, media, fear of crime, violence, victimization, news
Abstract:
The effects of microstructural parameters on the resistance to fatigue crack growth (FCG) in the near-threshold region have been systematically investigated for a high-strength low-alloy steel at three different temper levels. Widely different trends are observed in the dependence of both the total threshold stress intensity range, ΔK_th, and the intrinsic or effective threshold stress intensity range, ΔK_eff-th, on the prior austenitic grain size (PAGS). While a low strain-hardening microstructure obtained by tempering at high temperatures exhibited a strong dependence of ΔK_th on the PAGS, by virtue of strong interactions of crack-tip slip with the grain boundary, a high-strength, high strain-hardening microstructure resulting from tempering at low temperature exhibited a weak dependence. The lack of a systematic variation of the near-threshold parameters with grain size in temper-embrittled structures appears to be related to the wide variations in the amount of intergranular fracture near threshold. Crack closure provides, to some extent, a basis on which the increases in ΔK_th at larger grain sizes can be rationalised. This study also provides a wide perspective on the relative roles of slip behaviour, embrittlement and environment that produce the different trends observed in the grain-size dependence of near-threshold fatigue parameters, on the basis of which the inconsistencies in the results reported in the literature can be clearly understood. Assessment of fracture modes through extensive fractography revealed that prior austenitic grain boundaries are more effective barriers to cyclic crack growth than martensitic packet boundaries, especially at low stress intensities. Fracture morphologies comprising low-energy flat transgranular fracture can occur close to threshold, depending on the combination of strain-hardening behaviour, yield strength and embrittlement effects.
A detailed discussion is given of cyclic stress-strain behaviour, embrittlement and environmental effects, and of the implications of these phenomena for crack growth behaviour near threshold.
Abstract:
Relying on Merleau-Ponty's phenomenology of perception and on Mircea Eliade's works on the Sacred and the Profane, this study explores the river as a perceptual space and as the sacred Center in a cosmic vision of the world in twelve of Jean-Marie Gustave Le Clézio's fictional works, from The Interrogation (1963) to Revolutions (2003). In the first chapter, after introducing the field of study, I discuss the relation between the radical subjectivity and the evasiveness of perceiving subjects in Le Clézio's fiction. Next come some thoughts on the relation between Merleau-Ponty's and Le Clézio's ideas. The second chapter studies the river as an experience in the text, first as a topographical space, then as a sound world. The investigation then moves on to its water as a visual and a tactile phenomenon. Then follows the human use of the river, the (absence of) baths, and the river as a travelling space. The chapter closes with a study of the metaphorical use of the word, occurring mainly in urban space and for phenomena in the sky. The third chapter is organized around the river as the Center of the world in a religious cosmogony, where the river represents the origin of the world and of the human race. The core analysis shows how the middle of the river is a symbolic space of a new beginning. As a sacred space, the river abolishes time, both as the object of contemplation and as relative immobility from the point of view of a person drifting downstream. The functions of a new beginning and of the abolition of time are combined in the symbolic immersions in the water. Finally, the dissertation explores other symbolic spaces, such as the unknown destination of the drift, and the river as the Center of a utopia. The chapter closes with the existential agony that results from the elimination of the Center in the urban environment. In the final chapter, the river is compared to other watercourses: the creek, the brook and the rapids.
The river is more of a spatial entity, whereas the actual water is more important in the smaller watercourses. The river is more common than the other watercourses as a topographical element in the landscape, whereas the minor watercourses invite the characters to a closer contact with their element, in immersions and in drinking their water. Finally, the work situates the rivers in a broader context of different fictional spaces in Le Clézio's text.
Abstract:
An experimental setup using radiative heating has been used to understand the thermo-physical phenomena and chemical transformations inside acoustically levitated cerium nitrate precursor droplets. In this transformation process, events such as vaporization, precipitation and chemical reaction have been recorded at high temporal resolution through infrared thermography and high-speed imaging, leading to nanoceria formation with a porous morphology. The cerium nitrate droplet undergoes phase and shape changes throughout the vaporization process. Four distinct stages were delineated during the entire vaporization process, namely pure evaporation, evaporation with precipitate formation, chemical reaction with phase change, and formation of the final porous precipitate. The composition was examined using scanning and transmission electron microscopy (TEM), which revealed nanostructures and confirmed a highly porous morphology with trapped gas pockets. TEM and high-speed imaging of the final precipitate revealed the presence of trapped gases in the form of bubbles. TEM also showed the presence of nanoceria crystalline structures at 70 degrees C. The current study also looked into the effect of different heating powers on the process. At higher power, each phase is sustained for a shorter duration and reaches a higher maximum temperature. In addition, the porosity of the final precipitate increased with power. A non-dimensional time scale is proposed to correlate the effect of laser intensity and the vaporization rate of the solvent (water). The effect of acoustic levitation was also studied. Due to acoustic streaming, the solute is selectively transported to the bottom portion of the droplet by strong circulation, providing it rigidity and allowing it to become bowl-shaped. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
In meteorology, observations and forecasts of a wide range of phenomena (for example snow, clouds, hail, fog, and tornadoes) can be categorical, that is, they can only have discrete values (e.g., "snow" and "no snow"). Concentrating on satellite-based snow and cloud analyses, this thesis explores methods that have been developed for the evaluation of categorical products and analyses. Different algorithms for satellite products generate different results; sometimes the differences are subtle, sometimes all too visible. In addition to differences between algorithms, the satellite products are influenced by physical processes and conditions, such as diurnal and seasonal variation in solar radiation, topography, and land use. The analysis of satellite-based snow cover analyses from NOAA, NASA, and EUMETSAT, and of snow analyses for numerical weather prediction models from FMI and ECMWF, was complicated by the fact that we did not have true knowledge of the snow extent, and we were forced simply to measure the agreement between different products. Sammon mapping, a multidimensional scaling method, was then used to visualize the differences between the products. The trustworthiness of the results for cloud analyses [the EUMETSAT Meteorological Products Extraction Facility cloud mask (MPEF), together with the Nowcasting Satellite Application Facility (SAFNWC) cloud masks provided by Météo-France (SAFNWC/MSG) and the Swedish Meteorological and Hydrological Institute (SAFNWC/PPS)], compared with ceilometers of the Helsinki Testbed, was estimated by constructing confidence intervals (CIs). Bootstrapping, a statistical resampling method, was used to construct the CIs, especially in the presence of spatial and temporal correlation. The reference data for validation are constantly in short supply. In general, the needs of a particular project drive the requirements for evaluation, for example for the accuracy and the timeliness of the particular data and methods.
In this vein, we tentatively discuss how data provided by the general public, e.g., photos shared on the Internet photo-sharing service Flickr, can be used as a new source for validation. Results show that they are of reasonable quality, and their use for case studies can be warmly recommended. Lastly, the use of cluster analysis on meteorological in-situ measurements was explored. The Autoclass algorithm was used to construct compact representations of synoptic conditions of fog at Finnish airports.
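As a rough illustration of the percentile-bootstrap CI construction mentioned above (the actual study uses resampling schemes adapted to spatial and temporal correlation; the data, statistic and sample size here are invented):

```python
import random

# Illustrative sketch only: a percentile bootstrap confidence interval for a
# proportion (e.g., an agreement rate between a cloud mask and ceilometer
# observations). Toy data; the study's actual resampling is more involved.

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, rng=None):
    rng = rng or random.Random(0)
    reps = sorted(
        stat(rng.choices(data, k=len(data)))  # resample with replacement
        for _ in range(n_boot)
    )
    lo = reps[int((alpha / 2) * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Toy data: 1 = cloud mask agreed with the ceilometer, 0 = disagreed.
hits = [1] * 80 + [0] * 20
mean = lambda xs: sum(xs) / len(xs)
low, high = bootstrap_ci(hits, mean)
print(round(low, 2), round(high, 2))
```

For an observed agreement rate of 0.8 on 100 cases, the interval comes out close to the binomial normal-approximation interval (roughly 0.72 to 0.88).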
Abstract:
The low-frequency dielectric behavior of castor oil (a vegetable oil) has been analyzed quite exhaustively in the context of its application as an impregnant in capacitors. For the sake of completeness, and in order to understand the relaxation phenomena in this liquid dielectric, this high-frequency dielectric study was undertaken. In order to compare its properties with those of a liquid dielectric used in a similar application, whose high-frequency behavior has been quite well analyzed, Arochlor 1476 was studied. It is observed that both liquids have distributed relaxation times. The distribution parameters, together with the two distinct relaxation times, have been calculated by measuring the average relaxation time. It has been found that the distinct relaxation times thus calculated represent the dielectric behavior quite satisfactorily. The average dipole moments, dipole radii and thermal activation energies for dipole relaxation have also been evaluated.
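The abstract does not name the relaxation model; one common way to describe a distribution of relaxation times, shown here only as a hedged example, is the Cole-Cole expression, whose distribution parameter α broadens the single Debye relaxation (recovered at α = 0):

```latex
\varepsilon^{*}(\omega) \;=\; \varepsilon_{\infty} \;+\;
\frac{\varepsilon_{s} - \varepsilon_{\infty}}{1 + (i\omega\tau_{0})^{\,1-\alpha}}
```

Here \(\varepsilon_{s}\) is the static permittivity, \(\varepsilon_{\infty}\) the high-frequency limit, and \(\tau_{0}\) the average relaxation time.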
Abstract:
A thunderstorm is a dangerous electrical phenomenon in the atmosphere. A thundercloud forms when thermal energy is transported rapidly upwards in convective updraughts. Electrification occurs in collisions of cloud particles in the strong updraught. When the amount of charge in the cloud is large enough, an electrical breakdown, better known as a flash, occurs. Lightning location is nowadays an essential tool for the detection of severe weather. Located flashes indicate in real time the movement of hazardous areas and the intensity of lightning activity. An estimate of the flash peak current can also be determined. The observations can be used in damage surveys. The simplest way to represent lightning data is to plot the locations on a map, but the data can also be processed into more complex end-products and exploited in data fusion. Lightning data also serve as an important tool in the research of lightning-related phenomena, such as Transient Luminous Events. Most global thunderstorms occur in areas with plenty of heat, moisture and tropospheric instability, for example in tropical land areas. At higher latitudes, as in Finland, the thunderstorm season is practically restricted to the summer. A particular feature of the high-latitude climatology is the large annual variation, which also applies to thunderstorms. Knowing the performance of any measuring device is important because it affects the accuracy of the end-products. In lightning location systems, the detection efficiency is the ratio between located and actually occurred flashes. Because in practice it is impossible to know the true number of flashes that actually occurred, the detection efficiency has to be estimated with theoretical methods.
Abstract:
Seepage through sand-bed channels in the downward direction (suction) reduces the stability of particles and initiates sand movement. Incipient motion in a sand-bed channel with seepage cannot be designed for using the conventional approach. Metamodeling techniques, which employ a non-linear pattern analysis between input and output parameters and are based solely on experimental observations, can be used to model such phenomena. The traditional approach of finding non-dimensional parameters has not been used in the present work. Instead, parameters that can influence incipient motion with seepage have been identified and non-dimensionalized, and a non-dimensional stream power concept has been used to describe the process. Using these non-dimensional parameters, the present work describes a radial basis function (RBF) metamodel for predicting the incipient motion condition affected by seepage. The coefficient of determination, R², of the model is 0.99; thus, the model predicts the phenomenon very well. With the help of the metamodel, design curves are presented for designing a sand-bed channel affected by seepage. (C) 2010 Elsevier B.V. All rights reserved.
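As a minimal sketch of what an RBF metamodel does (one input dimension, a synthetic response and a Gaussian kernel are assumed here; the paper's model is trained on experimental, multi-parameter seepage data):

```python
import math

# Illustrative sketch of a radial basis function (RBF) metamodel in one
# input dimension. The fitted function is synthetic; the paper's model maps
# several non-dimensional seepage parameters to the incipient-motion
# condition and is fitted to experimental observations.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, ys, eps=1.0):
    """Interpolating Gaussian-RBF weights: phi(r) = exp(-(eps*r)^2)."""
    phi = lambda r: math.exp(-(eps * r) ** 2)
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, ys)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

# Fit a toy response surface and check it reproduces a training point.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [math.sin(x) for x in xs]
model = rbf_fit(xs, ys)
print(round(model(0.5), 3))  # matches sin(0.5) ~ 0.479 at a training point
```

The interpolating form reproduces the training data exactly; a real metamodel would also be validated on held-out observations, as reflected in the reported R² value.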
Abstract:
The aim of this study was to examine the applicability of the Phonological Mean Length of Utterance (pMLU) method to data from children acquiring Finnish, for both typically developing children and children with Specific Language Impairment (SLI). Study I examined typically developing children at the end of the one-word stage (N=17, mean age 1;8), and Study II analysed children's (N=5) productions in a follow-up study with four assessment points (ages 2;0, 2;6, 3;0, 3;6). Study III was a review article that examined recent research on the phonological development of children acquiring Finnish and compared the results with general trends and cross-linguistic findings in phonological development. Study IV included children with SLI (N=4, mean age 4;10) and age-matched peers. The analyses in Studies I, II and IV were made using the quantitative pMLU method, in which pMLU values are counted both for the words that the children targeted (so-called target words) and for the words the children produced. Dividing the child's average pMLU value by the average target-word pMLU value gives the Whole-Word Proximity (PWP) value, which measures the child's accuracy in producing the words. In addition, the number of entirely correctly produced words is counted to obtain the Whole-Word Correctness (PWC) value. Qualitative analyses were carried out to examine how the children's phoneme inventories and deficiencies in phonotactics would explain the observed pMLU, PWP and PWC values. The results showed that the pMLU values of children acquiring Finnish were relatively high already at the end of the one-word stage (Study I). The values were found to reflect the characteristics of the ambient language.
Typological features that lead to cross-linguistic differences in pMLU values were also observed in the review article (Study III), which noted that in the course of phonological acquisition there are a large number of language-specific phenomena and processes. Study II indicated that, overall, the children's phonological development during the follow-up period was reflected in the pMLU, PWP and PWC values, although the method showed limitations in detecting qualitative differences between the children. Correct vowels were not scored in the pMLU counts, which led to some misleadingly high pMLU and PWP results: vowel errors were reflected only in the PWC values. Typically developing children in Study II reached the highest possible pMLU results already around age 3;6. At the same time, the differences in the pMLU values between the children with SLI and their age-matched peers were very prominent (Study IV). The values for the children with SLI were similar to those reported for two-year-old children. Qualitative analyses revealed that the phonologies of the children with SLI largely resembled those of younger, typically developing children. However, unusual errors were also witnessed (e.g., vowel errors, omissions of word-initial stops, consonants added to the initial position in words beginning with a vowel). This dissertation provides an application of a new tool for quantitative phonological assessment and analysis in children acquiring Finnish. The preliminary results suggest that, with some modifications, the pMLU method can be used to assess children's phonological development and that it has some advantages compared to earlier, segment-oriented approaches. Qualitative analyses complemented the pMLU's observations on the children's phonologies. More research is needed in order to verify the levels of the pMLU, PWP and PWC values in children acquiring Finnish.
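The pMLU, PWP and PWC computations described above can be sketched as follows (the string representation of words, the consonant set and the naive position-wise alignment are simplifying assumptions for illustration, not the method's full scoring rules):

```python
# Simplified sketch of Phonological Mean Length of Utterance (pMLU)
# scoring as described in the abstract: each produced segment earns one
# point and each correctly produced consonant earns one more; correct
# vowels are not scored. The consonant set and position-wise alignment
# below are illustrative assumptions.

CONSONANTS = set("pbtdkgmnlrsvjh")  # rough stand-in for a Finnish inventory

def pmlu_word(target, produced):
    points = len(produced)  # one point per produced segment
    for t, p in zip(target, produced):  # naive position-wise alignment
        if t == p and t in CONSONANTS:
            points += 1  # bonus point per correct consonant
    return points

def pmlu_scores(pairs):
    target_pmlu = sum(pmlu_word(t, t) for t, _ in pairs) / len(pairs)
    child_pmlu = sum(pmlu_word(t, p) for t, p in pairs) / len(pairs)
    pwp = child_pmlu / target_pmlu                     # Whole-Word Proximity
    pwc = sum(t == p for t, p in pairs) / len(pairs)   # Whole-Word Correctness
    return child_pmlu, pwp, pwc

# Toy data: (target word, child's production); 'kukka' produced as 'kutta'.
pairs = [("kukka", "kutta"), ("pallo", "pallo"), ("auto", "auto")]
child, pwp, pwc = pmlu_scores(pairs)
print(round(pwp, 2), round(pwc, 2))
```

Note that the vowel-substitution blind spot mentioned in the abstract is visible here: a vowel error would lower PWC but leave the consonant bonus, and hence PWP, largely unchanged.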
Abstract:
This thesis critically examines the patterns and processes of ethnic residential segregation in the Helsinki Metropolitan Area (HMA). These phenomena are examined in two main ways: a) between the native and immigrant populations, and b) in terms of the extent to which different immigrant groups share the same neighbourhoods. The main aim of the study is to test the extent to which the theoretical claims of selective migration processes can explain the development of ethnic residential segregation in the HMA. The data are mixed: they consist of neighbourhood-level statistics related to migration, demography and housing stock. The selective migration flows are analysed within and between neighbourhood types, defined on the basis of the percentages of foreign-language speakers. For contextual purposes, the study also includes interviews with fifteen experts who work in the housing sector. Firstly, the results show that from the early 2000s the patterns of ethnic residential segregation have strengthened while the differences between neighbourhoods have grown. On a more general level, the HMA can be divided into two main areas: some eastern and north-eastern neighbourhoods that have experienced the rise of immigrant concentrations; and the northern, north-western and southern parts of the HMA, where the numbers and percentages of immigrants have remained relatively low. However, within the eastern and north-eastern neighbourhoods there are also discernible internal differences that reflect the income levels of the inhabitants and the type of housing stock. The results also show that the existing immigrant concentrations are ethnically and culturally mixed, and thus qualitatively different from the Chinatown and Little Italy enclaves of single immigrant groups. Secondly, the results show clear signs of selective migration processes among the native and immigrant populations, which have resulted in the discernible development of ethnic residential segregation.
Migration flows of the native population have gravitated towards neighbourhoods where the percentage of immigrants is below the HMA average. This has resulted in significant migration losses for neighbourhoods with established and developing concentrations of immigrants. Meanwhile, migration of immigrants has been drawn to neighbourhoods where their percentages are above the HMA average. However, the results also point to clear differences in the migration and spatial patterns of different immigrant groups. The spatial selectivity of migration is thus more prominent among the native population than among immigrants. Overall, the results indicate that the reproduction of the selective migration flows of the native and immigrant populations will largely determine the HMA's future development of ethnic residential segregation.