959 results for Semi-empirical
Abstract:
Biogeochemical and hydrological cycles are currently studied on a small experimental forested watershed (4.5 km(2)) in semi-humid South India. This paper presents some of the first data on the distribution and dynamics of a widespread red soil (Ferralsols and Chromic Luvisols) and black soil (Vertisols and Vertic intergrades) cover, and its possible relationship with the recent development of the erosion process. The soil map was established from observations of isolated soil profiles and toposequences, together with surveys of soil electromagnetic conductivity (EM31, Geonics Ltd), lithology and vegetation. The distribution of the different parts of the soil cover in relation to each other was used to establish the dynamics and chronological order of formation. Results indicate that both topography and lithology (gneiss and amphibolite) have influenced the distribution of the soils. Downslope, the following parts of the soil cover were distinguished: i) a red soil system, ii) a black soil system, iii) a bleached horizon at the top of the black soil and iv) a bleached sandy saprolite at the base of the black soil. The red soil is currently transforming into black soil and the transformation front is moving upslope. In the bottom part of the slope, the chronology appears to be the following: black soil > bleached horizon at the top of the black soil > streambed > bleached horizon below the black soil. The development of the drainage network appears to be a recent process, guided by the presence of thin black soil with a vertic horizon less than 2 m deep. Three distinctive types of erosional landform have been identified: 1. rotational slips (Type 1); 2. seepage erosion at the top of the black soil profile (Type 2); 3. a combination of earthflow and sliding in the non-cohesive saprolite of the gneiss at midslope (Type 3).
Type 1 and 2 erosion occurs mainly downslope and is always located at the intersection between the streambed and the red soil-black soil contact. Neutron probe monitoring along an area vulnerable to erosion types 1 and 2 indicates that rotational slips are caused by a temporary water table at the base of the black soil and within the sandy bleached saprolite, which behaves as a plane of weakness. The water table is induced by the ephemeral watercourse. Type 2 erosion is caused by seepage from a perched water table, which forms after the cracks of the vertic clay horizon swell shut, within a light-textured and bleached horizon at the top of the black soil. Type 3 erosion is not related to the red soil-black soil system but is caused by the seasonal seepage of saturated throughflow in the sandy saprolite of the gneiss at midslope. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Purpose: The research purpose was to identify both the inspiration sources used by fast fashion designers and the ways the designers sort information from those sources during the product development process.
Design/methodology/approach: This is a qualitative study, drawing on semi-structured interviews conducted with the members of the in-house design teams of three Australian fast fashion companies.
Findings: Australian fast fashion designers rely on a combination of trend data, sales data, product analysis and travel for design development ideas. The designers then use consensus and embodiment methods to interpret and synthesise information from those inspiration sources.
Research limitations/implications: The empirical data used in the analysis were limited by interviewing fashion designers within only three Australian companies.
Originality/value: This research augments knowledge of fast fashion product development, in particular designers' methods and approaches to product design within a volatile and competitive market.
Abstract:
Supercritical processes have gained importance in recent years in food, environmental and pharmaceutical processing. The design of any supercritical process requires accurate experimental data on the solubilities of solids in supercritical fluids (SCFs). Empirical equations are quite successful in correlating the solubilities of solid compounds in SCFs, both in the presence and absence of cosolvents. In this work, existing solvate complex models are discussed and a new set of empirical equations is proposed. These equations correlate the solubilities of solids in supercritical carbon dioxide (both in the presence and absence of cosolvents) as a function of temperature, density of supercritical carbon dioxide and the mole fraction of cosolvent. The accuracy of the proposed models was evaluated by correlating 15 binary and 18 ternary systems. The proposed models provided the best overall correlations. (C) 2009 Elsevier B.V. All rights reserved.
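As an aside on how density-based empirical solubility correlations of this family work in practice, the sketch below fits the classic Chrastil equation, ln S = k ln ρ + a/T + b, to synthetic data by least squares. Chrastil's equation is a well-known member of this model class, not one of the equations proposed in the work, and all numbers here are invented for illustration.

```python
import numpy as np

# Synthetic solubility data generated from an assumed Chrastil-type law:
# ln S = k*ln(rho) + a/T + b  (S: solubility, rho: CO2 density, T: temperature)
rng = np.random.default_rng(0)
k_true, a_true, b_true = 4.2, -3500.0, 1.5
rho = rng.uniform(300.0, 900.0, 40)   # kg/m^3, typical scCO2 densities
T = rng.uniform(308.0, 338.0, 40)     # K
lnS = k_true * np.log(rho) + a_true / T + b_true

# Least-squares fit of the three parameters (k, a, b)
X = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(T)])
coef, *_ = np.linalg.lstsq(X, lnS, rcond=None)
k_fit, a_fit, b_fit = coef
print(k_fit, a_fit, b_fit)  # recovers 4.2, -3500.0, 1.5 on noise-free data
```

With real data one would fit ln S from measured mole fractions; the same linear-algebra step carries over unchanged.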
Abstract:
Diffusion in a composite slab consisting of a large number of layers provides an ideal prototype problem for developing and analysing two-scale modelling approaches for heterogeneous media. Numerous analytical techniques have been proposed for solving the transient diffusion equation in a one-dimensional composite slab consisting of an arbitrary number of layers. Most of these approaches, however, require the solution of a complex transcendental equation arising from a matrix determinant for the eigenvalues that is difficult to solve numerically for a large number of layers. To overcome this issue, in this paper, we present a semi-analytical method based on the Laplace transform and an orthogonal eigenfunction expansion. The proposed approach uses eigenvalues local to each layer that can be obtained either explicitly, or by solving simple transcendental equations. The semi-analytical solution is applicable to both perfect and imperfect contact at the interfaces between adjacent layers and either Dirichlet, Neumann or Robin boundary conditions at the ends of the slab. The solution approach is verified for several test cases and is shown to work well for a large number of layers. The work is concluded with an application to macroscopic modelling where the solution of a fine-scale multilayered medium consisting of two hundred layers is compared against an “up-scaled” variant of the same problem involving only ten layers.
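One way to sanity-check any layered-diffusion solution is against a brute-force finite-difference computation on a small two-layer example. The sketch below does this; the diffusivities, boundary values and run length are illustrative assumptions, not values from the paper. At steady state the interface concentration should approach the flux-continuity value D1/(D1 + D2) for equal layer widths and unit/zero boundary values.

```python
import numpy as np

# Explicit finite-difference solution of 1-D diffusion in a two-layer slab.
N = 60                               # grid cells on [0, 1]
x = (np.arange(N) + 0.5) / N         # cell centres
D = np.where(x < 0.5, 1.0, 0.1)      # layer 1: D = 1, layer 2: D = 0.1
dx = 1.0 / N
dt = 0.2 * dx**2 / D.max()           # stable explicit time step

u = np.zeros(N)                      # initial condition u = 0
uL, uR = 1.0, 0.0                    # Dirichlet boundary values
# Harmonic-mean diffusivity at interior faces enforces flux continuity
Df = 2.0 * D[:-1] * D[1:] / (D[:-1] + D[1:])

for _ in range(90000):               # run to (near) steady state
    flux = np.empty(N + 1)
    flux[1:-1] = -Df * (u[1:] - u[:-1]) / dx
    flux[0] = -D[0] * (u[0] - uL) / (dx / 2)     # half-cell at the left wall
    flux[-1] = -D[-1] * (uR - u[-1]) / (dx / 2)  # half-cell at the right wall
    u -= dt / dx * (flux[1:] - flux[:-1])

# Steady-state interface value from flux continuity: 1.0/(1.0 + 0.1) ~ 0.909
ui = 0.5 * (u[N // 2 - 1] + u[N // 2])
print(ui)
```

The same reference computation scales to many layers by extending the diffusivity array, which is what makes it a useful cross-check for semi-analytical series solutions.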
Abstract:
This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average error in prediction) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
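The linear-regression flavour of this workflow can be sketched in a few lines: measure performance at a set of flag configurations, fit a model with pairwise interaction terms, then search the full flag space on the cheap model instead of re-measuring. Everything below (the flag count, the synthetic "measurement" function, the interaction structure) is an invented stand-in, not the paper's setup or data.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_flags = 4

def measure_cycles(flags):
    # Hypothetical ground truth with an interaction between flags 0 and 1,
    # plus small measurement noise; stands in for a real simulation run.
    f = np.asarray(flags, dtype=float)
    return 100.0 - 8.0 * f[0] - 5.0 * f[2] + 6.0 * f[0] * f[1] + rng.normal(0.0, 0.1)

def features(flags):
    # Intercept, linear terms, and all pairwise interactions
    f = np.asarray(flags, dtype=float)
    pairs = [f[i] * f[j] for i in range(n_flags) for j in range(i + 1, n_flags)]
    return np.concatenate([[1.0], f, pairs])

# "Measure" a small set of configurations (here the full 2^4 factorial)
configs = list(itertools.product([0, 1], repeat=n_flags))
X = np.array([features(c) for c in configs])
y = np.array([measure_cycles(c) for c in configs])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Search all flag settings on the model rather than by new measurements
best = min(configs, key=lambda c: features(c) @ w)
print(best)  # flags 0 and 2 on, flag 1 off (its interaction with 0 hurts)
```

The interaction terms are what let the model capture the kind of flag/microarchitecture coupling the paper quantifies; splines or RBF networks replace the feature map, not the overall loop.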
Abstract:
We share our experience in planning, designing and deploying a wireless sensor network over an area of one square kilometre. Environmental data such as soil moisture, temperature, barometric pressure, and relative humidity are collected in this area, situated in the semi-arid region of Karnataka, India. We hope that information derived from these data will help marginal farmers improve their farming practices. After establishing the need for such a project, we present the big picture of the data-gathering network, the software architecture we have used, the range measurements needed for determining the sensor density, and the packaging issues that play a crucial role in field deployments. Our field deployment experiences include designing for intermittent grid power, enhancing software tools to aid quicker and more effective deployment, and coping with flash memory corruption. The first results on data gathering look encouraging.
Abstract:
The aims of the thesis are (1) to present a systematic evaluation of generation and its relevance as a sociological concept, (2) to reflect on how generational consciousness, i.e. generation as an object of collective identification that has social significance, can emerge and take shape, and (3) to analyze empirically the generational experiences and consciousness of one specific generation, namely the Finnish baby boomers (b. 1945-1950). The thesis contributes to the discussion on the social (as distinct from genealogical) meaning of the concept of generation, launched by Karl Mannheim's classic Das Problem der Generationen (1928), whose central idea is that a certain group of people is bonded together by a shared experience and that this bonding can result in a distinct self-consciousness. The thesis comprises six original articles and an extensive summarizing chapter. In the empirical articles, the baby boomers are studied on the basis of nationally representative survey data (N = 2628) and narrative life-story interviews (N = 38). In the article that discusses the connection between generations and social movements, the analysis is based on the member survey of Attac Finland (N = 1096). Three main themes were clarified in the thesis. (1) In the social sense the concept of generation is a modern, problematic, and ultimately political concept. It served the interests of the intellectuals who developed it in the early 20th century and provided them, as an alternative to the concept of social class, with a new way of thinking about social change and progress. The concept of generation is always coupled with the concept of Zeitgeist or some other controversial way of defining what is essential, i.e. what creates generations, in a given culture. Thus generation is, as a product of definition and classification struggles, a contested concept.
The concept also carries clearly elitist connotations: the idea of some kind of vanguard (the elite) that represents an entire generation by proclaiming itself its spokesman automatically creates a counterpart, namely the others in the peer group who are thought to be represented (the masses). (2) Generational consciousness cannot emerge endogenously or as the result of any automatic process; it must be made. There has to be somebody who represents the generation in order for that generation to exist in people's minds and as an object of identification; generational experiences and their meanings must be articulated. Hence, social generations are, in a fundamental manner, discursively constructed. The articulations of generational experiences (speeches, writings, manifestos, labels etc.) can be called the discursive dimension of social generations, and this notion shows how public discourse shapes people's generational consciousness. Another important element in the process is collective memory, as generational consciousness often takes form only retrospectively. (3) The Finnish baby boomers are not a united or homogeneous generation but are divided into many smaller sections with specific generational experiences and consciousnesses. The content of the generational consciousness of the baby boomers is heavily politically charged. A salient dividing line inside the age group is formed by individual attitudes towards so-called 1960s radicalism. Identification with the 1960s generation functions today as a positive self-definition of a certain small leftist elite group, and the values and characteristics usually connected with the idea of the 1960s generation do not represent the whole age group. On the contrary, among some members of the baby boom generation, generational identification is still directed by the experience of how traditional values were disgraced in the 1960s.
As objects of identification, the neutral term baby boomers and the charged 1960s generation are entirely different things, and therefore they should not be used as synonyms. Although the significance of the 1960s generation group is often overestimated, they are nevertheless special with respect to generational consciousness because they have presented themselves as the voice of the entire generation. Their generational interpretations have spread through the media, with the help of certain iconic images of the generation, to the extent that 1960s radicalism has become an indirect generational experience for other parts of the baby boom cohort as well.
Abstract:
In psychiatry, the semi-structured interview is one of the central tools for assessing the psychiatric state of a patient. In a semi-structured interview the interviewer participates in the interaction both through the prepared interview questions and through his or her own, unstructured turns. It has been stated that in the context of psychiatric assessment interviewers' unstructured turns help to obtain focused information but may simultaneously weaken the reliability of the data. This study examines the practices by which semi-structured psychiatric interviews are conducted. The method of the study is conversation analysis, which is both a theory of interaction and a methodology for its empirical, detailed analysis. Using data from 80 video-recorded psychiatric interviews with 16 patients and five interviewers, the study describes in detail both the structured and the unstructured interviewing practices. In the analysis, psychotherapeutic concepts are also used to describe phenomena that are characteristic of therapeutic discourse. The data come from the Helsinki Psychotherapy Study (HPS), a randomized clinical trial comparing the effectiveness of four forms of psychotherapy in the treatment of depressive and anxiety disorders. A total of 326 patients were randomly assigned to one of three treatment groups: solution-focused therapy, short-term psychodynamic psychotherapy, and long-term psychodynamic psychotherapy. The patients assigned to the long-term psychodynamic psychotherapy group and 41 patients self-selected for psychoanalysis were included in a quasi-experimental design. The primary outcome measures were depressive and anxiety symptoms, while secondary measures included work ability, need for treatment, personality functions, social functioning, and life style. Cost-effectiveness was determined. The data were collected from interviews, questionnaires, psychological tests, and public health registers.
The follow-up interviews were conducted five times during a 5-year follow-up. The study shows that interviewers pose elaborated questions, formulated in a friendly and sensitive way, that make relevant patients' long, story-like responses. When receiving patients' answers, interviewers use a wide variety of interviewing practices by which they direct patients' talk or offer an understanding of the meaning of patients' responses. The results of the study are two-fold. Firstly, the study shows that understanding the meaning of mental experiences requires interaction between interviewer and patient; the semi-structured interview is therefore both a relevant and a necessary method for collecting data in psychotherapy outcome studies. Secondly, the study suggests that conversation analysis, enriched with psychotherapeutic concepts, offers methodological possibilities for psychotherapy process research, especially for the process-outcome paradigm.
Abstract:
This thesis studies binary time series models and their applications in empirical macroeconomics and finance. In addition to previously suggested models, new dynamic extensions are proposed to the static probit model commonly used in the previous literature. In particular, we are interested in probit models with an autoregressive model structure. In Chapter 2, the main objective is to compare the predictive performance of the static and dynamic probit models in forecasting the U.S. and German business cycle recession periods. Financial variables, such as interest rates and stock market returns, are used as predictive variables. The empirical results suggest that the recession periods are predictable and that dynamic probit models, especially models with the autoregressive structure, outperform the static model. Chapter 3 proposes a Lagrange Multiplier (LM) test for the usefulness of the autoregressive structure of the probit model. The finite sample properties of the LM test are examined with simulation experiments. Results indicate that the two alternative LM test statistics have reasonable size and power in large samples. In small samples, a parametric bootstrap method is suggested to obtain approximately correct size. In Chapter 4, the predictive power of dynamic probit models in predicting the direction of stock market returns is examined. The novel idea is to use the recession forecast (see Chapter 2) as a predictor of the stock return sign. The evidence suggests that the signs of the U.S. excess stock returns over the risk-free return are predictable both in and out of sample. The new "error correction" probit model yields the best forecasts and also outperforms other predictive models, such as ARMAX models, in terms of statistical and economic goodness-of-fit measures. Chapter 5 generalizes the analysis of the univariate models considered in Chapters 2-4 to the case of a bivariate model.
A new bivariate autoregressive probit model is applied to predict the current state of the U.S. business cycle and growth rate cycle periods. Evidence of predictability of both cycle indicators is obtained and the bivariate model is found to outperform the univariate models in terms of predictive power.
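The autoregressive idea at the heart of the thesis, adding dynamics to the probit index, can be sketched on simulated data: generate a binary series whose probit index includes the lagged outcome, then compare static and dynamic probit fits by maximum likelihood. The parameter values and series below are illustrative inventions, not the thesis's data or estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)
T = 2000
x = rng.normal(size=T)               # stand-in financial predictor
omega, delta, beta = -0.5, 1.2, 0.8  # illustrative true parameters
y = np.zeros(T, dtype=int)
for t in range(1, T):
    # Dynamic probit: P(y_t = 1) = Phi(omega + delta*y_{t-1} + beta*x_t)
    y[t] = rng.random() < norm.cdf(omega + delta * y[t - 1] + beta * x[t])

y1 = y[1:]

def negloglik(theta, Z):
    # Negative Bernoulli log-likelihood with probit link
    p = norm.cdf(Z @ theta).clip(1e-10, 1 - 1e-10)
    return -(y1 * np.log(p) + (1 - y1) * np.log(1 - p)).sum()

Z_static = np.column_stack([np.ones(T - 1), x[1:]])           # no lag term
Z_dynamic = np.column_stack([np.ones(T - 1), y[:-1], x[1:]])  # lagged response
fit_s = minimize(negloglik, np.zeros(2), args=(Z_static,))
fit_d = minimize(negloglik, np.zeros(3), args=(Z_dynamic,))
print(fit_s.fun, fit_d.fun)  # the dynamic model attains a higher likelihood
```

The likelihood gap between the two fits is exactly the kind of evidence the thesis's LM test formalizes.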
Abstract:
In this thesis a manifold learning method is applied to the problem of WLAN positioning and automatic radio map creation. Due to the nature of WLAN signal strength measurements, a signal map created from raw measurements results in non-linear distance relations between measurement points. These signal strength vectors reside in a high-dimensional coordinate system. With the help of the so-called Isomap algorithm the dimensionality of this map can be reduced, and the map thus more easily processed. By embedding position-labeled strategic key points, we can automatically adjust the mapping to match the surveyed environment. The environment is thus learned in a semi-supervised way; gathering training points and embedding them in a two-dimensional manifold gives us a rough mapping of the measured environment. After a calibration phase, where the labeled key points in the training data are used to associate coordinates in the manifold representation with geographical locations, we can perform positioning using the adjusted map. This can be achieved through a traditional supervised learning process, which in our case is a simple nearest-neighbours matching of a sampled signal strength vector. We deployed this system in two locations on the Kumpula campus in Helsinki, Finland. Results indicate that positioning based on the learned radio map can achieve good accuracy, especially in hallways or other areas of the environment where the WLAN signal is constrained by obstacles such as walls.
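The final supervised step mentioned above, nearest-neighbours matching of a sampled signal-strength vector against calibrated fingerprints, is simple enough to sketch directly. The access-point readings and coordinates below are invented for illustration; they are not the Kumpula measurements.

```python
import numpy as np

radio_map = np.array([      # RSSI fingerprints (dBm) from 3 access points
    [-40.0, -70.0, -80.0],
    [-55.0, -55.0, -75.0],
    [-75.0, -45.0, -60.0],
    [-85.0, -60.0, -45.0],
])
coords = np.array([         # calibrated (x, y) positions in metres
    [0.0, 0.0],
    [5.0, 0.0],
    [10.0, 2.0],
    [15.0, 5.0],
])

def locate(sample, k=2):
    """Average the coordinates of the k closest fingerprints."""
    d = np.linalg.norm(radio_map - sample, axis=1)
    nearest = np.argsort(d)[:k]
    return coords[nearest].mean(axis=0)

# The sample is closest to fingerprints 1 and 0, so the estimate is their
# midpoint, (2.5, 0.0)
print(locate(np.array([-50.0, -60.0, -78.0])))
```

In the thesis's pipeline, `coords` would come from the calibrated Isomap embedding rather than being surveyed directly; the matching step itself is unchanged.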
Abstract:
A semi-experimental approach to solving two-dimensional problems in elasticity is given. The method has been applied to two problems: (i) a square deep beam, and (ii) a bridge pier with a sloping boundary. Sufficient analytical results are available for the first problem, so the accuracy of the method can be verified; the method is then extended to the second problem, for which such results are not available.
Abstract:
Main chain and segmental dynamics of polyisoprene (PI) and poly(methyl methacrylate) (PMMA) chains in semi IPNs were systematically studied over a wide range of temperatures (above and below the T-g of both polymers) as a function of composition, crosslink density, and molecular weight. The immiscible polymers retained most of their characteristic molecular motion; however, the semi IPN synthesis resulted in dramatic changes in the motional behavior of both polymers due to the molecular-level interpenetration between the two polymer chains. The ESR spin probe method was found to be sensitive to the concentration changes of PMMA in semi IPNs. Low temperature spectra showed the characteristics of rigid-limit spectra, and in the range of 293-373 K complex spectra were obtained, with the slow component mostly arising out of the PMMA-rich regions and the fast component from the PI phase. We found that the rigid PMMA chains closely interpenetrated into the highly mobile PI network impart motional restriction on nearby PI chains, and the highly mobile PI chains induce some degree of flexibility in the highly rigid PMMA chains. Molecular-level interchain mixing was found to be more efficient at a PMMA concentration of 35 wt.%. Moreover, the strong interphase formed in the above-mentioned semi IPN contributed to the large slow component in the ESR spectra at higher temperature. The shape of the spectra, along with the data obtained from spectral simulations, was correlated to the morphology of the semi IPNs. The correlation time measurement detected the motional region associated with the glass transition of PI and PMMA, and these regions were found to follow the same pattern of shifts in the a-relaxation of PI and PMMA observed in DMA analysis. Activation energies associated with the T-g regions were also calculated.
T-50G was found to correlate with the T-g of PMMA, and the volume of polymer segments undergoing glass-transitional motion was calculated to be 1.7 nm(3). C-13 T-1rho measurements of PMMA carbons indicate that the molecular-level interactions were strong in the semi IPN irrespective of the immiscible nature of the polymers. The motional characteristics of the H atoms attached to carbon atoms in both polymers were analyzed using 2D WISE NMR. The main relaxations of both components shifted inward, and both SEM and TEM analysis showed the development of a nanometre-sized morphology in the case of the highly crosslinked semi IPN. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A careful comparison of the distribution in the (R, θ)-plane of all NH ... O hydrogen bonds with that for bonds between neutral NH and neutral C=O groups indicated that the latter has a larger mean R and a wider range of θ, and that the distribution was also broader than for the average case. Therefore, the potential function developed earlier for an average NH ... O hydrogen bond was modified to suit the peptide case. A three-parameter expression of the form {Mathematical expression}, with Δ = R - Rmin, was found to be satisfactory. By comparing the theoretically expected distribution in R and θ with observed data (although limited), the best values were found to be p1 = 25, p3 = -2 and q1 = 1 × 10^-3, with Rmin = 2·95 Å and Vmin = -4·5 kcal/mole. The procedure for obtaining a smooth transition from Vhb to the non-bonded potential Vnb at large R and θ is described, along with a flow chart useful for programming the formulae. Calculated values of ΔH, the enthalpy of formation of the hydrogen bond, using this function are in reasonable agreement with observation. When the atoms involved in the hydrogen bond occur in a five-membered ring, as in the sequence [Figure not available: see fulltext.], a different formula for the potential function is needed, of the form Vhb = Vmin + p1Δ^2 + q1x^2, where x = θ - 50° for θ ≥ 50°, with p1 = 15, q1 = 0·002, Rmin = 2· Å and Vmin = -2·5 kcal/mole. © 1971 Indian Academy of Sciences.
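The explicit five-membered-ring expression quoted above is easy to evaluate numerically. In the sketch below, x is taken as zero for θ < 50° (an assumption; the text defines x only for θ ≥ 50°), and Rmin is left as a function argument because its printed value in the source is garbled ("2· Å").

```python
# Five-membered-ring hydrogen-bond potential from the text:
# Vhb = Vmin + p1*Delta^2 + q1*x^2, Delta = R - Rmin, x = theta - 50 (deg)
P1, Q1, VMIN = 15.0, 0.002, -2.5     # parameters quoted in the abstract

def v_hb_ring(R, theta_deg, R_min):
    """Potential in kcal/mole; R in angstroms, theta in degrees."""
    delta = R - R_min
    x = max(theta_deg - 50.0, 0.0)   # assumed zero below 50 degrees
    return VMIN + P1 * delta ** 2 + Q1 * x ** 2

print(v_hb_ring(2.9, 0.0, 2.9))      # at R = R_min, theta <= 50: -2.5
print(v_hb_ring(3.0, 60.0, 2.9))     # 0.1 A stretch, 10 deg excess: -2.15
```

The quadratic penalties reproduce the intended behaviour: the potential sits at Vmin at the equilibrium geometry and rises smoothly with bond stretching and angular distortion.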