973 results for Theoretical prediction
Abstract:
A fatigue crack propagation model for concrete is proposed based on the concepts of fracture mechanics. This model takes into account the loading history, frequency of applied load, and size effect parameters. Using this model, a method based on linear elastic fracture mechanics is described to assess the residual strength of cracked plain and reinforced concrete (RC) beams. This could be used to predict the residual strength (load carrying capacity) of cracked or damaged plain and reinforced concrete beams at a given level of damage. It has been seen that the fatigue crack propagation rate increases as the size of the plain concrete beam increases, indicating an increase in brittleness. In RC beams, the fracture process becomes stable only when the beam is sufficiently reinforced.
Abstract:
Numerical models, used for atmospheric research, weather prediction and climate simulation, describe the state of the atmosphere over the heterogeneous surface of the Earth. Several fundamental properties of atmospheric models depend on orography, i.e. on the average elevation of land over a model area. The higher the model's resolution, the more the details of orography directly influence the simulated atmospheric processes. This sets new requirements for the accuracy of the model formulations with respect to the spatially varying orography. Orography is always averaged, representing the surface elevation within the horizontal resolution of the model. In order to remove the smallest scales and steepest slopes, the continuous spectrum of orography is normally filtered (truncated) even further, typically beyond a few gridlengths of the model. This means that in numerical weather prediction (NWP) models there will always be subgrid-scale orography effects, which cannot be explicitly resolved by numerical integration of the basic equations but require parametrization. In the subgrid scale, different physical processes contribute at different scales. The parametrized processes interact with the resolved-scale processes and with each other. This study contributes to the building of a consistent, scale-dependent system of orography-related parametrizations for the High Resolution Limited Area Model (HIRLAM). The system comprises schemes for handling mesoscale (MSO) and small-scale (SSO) orographic effects on the simulated flow and a scheme for orographic effects on surface-level radiation fluxes. Representation of orography, scale-dependencies of the simulated processes and interactions between the parametrized and resolved processes are discussed. From high-resolution digital elevation data, orographic parameters are derived for both momentum and radiation flux parametrizations. Tools for diagnostics and validation are developed and presented.
The parametrization schemes applied, developed and validated in this study are currently being implemented into the reference version of HIRLAM.
Abstract:
The significance of treating rainfall as a chaotic system instead of a stochastic system for a better understanding of the underlying dynamics has been taken up by various studies recently. However, an important limitation of all these approaches is the dependence on a single method for identifying the chaotic nature and the parameters involved. Many of these approaches aim only at analyzing the chaotic nature and not at its prediction. In the present study, an attempt is made to identify chaos using various techniques, and prediction is also carried out by generating ensembles in order to quantify the uncertainty involved. Daily rainfall data of three regions with contrasting characteristics (mainly in the spatial area covered), Malaprabha, Mahanadi and All-India, for the period 1955-2000 are used for the study. Auto-correlation and mutual information methods are used to determine the delay time for the phase space reconstruction. The optimum embedding dimension is determined using the correlation dimension, the false nearest neighbour algorithm and also nonlinear prediction methods. The low embedding dimensions obtained from these methods indicate the existence of low dimensional chaos in the three rainfall series. The correlation dimension method is applied to the phase-randomized and first-derivative versions of the data series to check whether the saturation of the dimension is due to the inherent linear correlation structure or due to low dimensional dynamics. Positive Lyapunov exponents obtained prove the exponential divergence of the trajectories and hence the unpredictability. A surrogate data test is also done to further confirm the nonlinear structure of the rainfall series. A range of plausible parameters is used for generating an ensemble of predictions of rainfall for each year separately for the period 1996-2000, using the data up to the preceding year.
For analyzing the sensitivity to initial conditions, predictions are made from two different months in a year, viz., from the beginning of January and of June. The reasonably good predictions obtained indicate the efficiency of the nonlinear prediction method for predicting the rainfall series. Also, the rank probability skill score and the rank histograms show that the ensembles generated are reliable with a good spread and skill. A comparison of the results of the three regions indicates that although they are chaotic in nature, spatial averaging over a large area can increase the dimension and improve the predictability, thus destroying the chaotic nature. (C) 2010 Elsevier Ltd. All rights reserved.
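The phase-space reconstruction step described in this abstract can be sketched with a standard delay embedding. The series, delay and embedding dimension below are illustrative placeholders, not values from the study (where the delay comes from mutual information and the dimension from the false-nearest-neighbour and correlation-dimension analyses):

```python
import numpy as np

def delay_embed(x, m, tau):
    """Reconstruct an m-dimensional phase space from a scalar series
    using Takens' delay embedding with delay tau."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# toy series standing in for a daily rainfall record
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 60, 600)) + 0.1 * rng.standard_normal(600)

# illustrative parameters only
m, tau = 3, 5
vectors = delay_embed(x, m, tau)
print(vectors.shape)  # (590, 3)
```

Each row of `vectors` is one reconstructed state [x(t), x(t+tau), x(t+2*tau)]; nonlinear prediction then forecasts a state from the evolution of its nearest neighbours in this space.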
Abstract:
This thesis deals with theoretical modeling of the electrodynamics of auroral ionospheres. In the five research articles forming the main part of the thesis we have concentrated on two main themes: development of new data-analysis techniques and study of inductive phenomena in ionospheric electrodynamics. The introductory part of the thesis provides a background for these new results and places them in the wider context of ionospheric research. In this thesis we have developed a new tool (called 1D SECS) for analysing ground-based magnetic measurements from a 1-dimensional magnetometer chain (usually aligned in the North-South direction) and a new method for obtaining the ionospheric electric field from combined ground-based magnetic measurements and estimated ionospheric electric conductance. Both these methods are based on earlier work, but contain important new features: 1D SECS respects the spherical geometry of large-scale ionospheric electrojet systems, and due to an innovative way of implementing boundary conditions the new method for obtaining electric fields can also be applied in local-scale studies. These new calculation methods have been tested using both simulated and real data. The tests indicate that the new methods are more reliable than the previous techniques. Inductive phenomena are intimately related to temporal changes in electric currents. As the large-scale ionospheric current systems change relatively slowly, on time scales of several minutes or hours, inductive effects are usually assumed to be negligible. However, during the past ten years it has been realised that induction can play an important part in some ionospheric phenomena. In this thesis we have studied the role of inductive electric fields and currents in ionospheric electrodynamics. We have formulated the induction problem so that only ionospheric electric parameters are used in the calculations.
This is in contrast to previous studies, which require knowledge of magnetosphere-ionosphere coupling. We have applied our technique to several realistic models of typical auroral phenomena. The results indicate that inductive electric fields and currents are locally important during the most dynamical phenomena (like the westward travelling surge, WTS). In these situations induction may locally contribute up to 20-30% of the total ionospheric electric field and currents. Inductive phenomena also change the field-aligned currents flowing between the ionosphere and the magnetosphere, thus modifying the coupling between the two regions.
Abstract:
A class of conjugated molecules containing donor (thiophene) and acceptor (malononitrile) units is synthesized by the Knoevenagel condensation reaction between 2-(2,6-dimethyl-4H-pyran-4-ylidene)malononitrile and thiophene carbaldehydes containing two and three thiophene units. The resulting molecules are characterized by H-1 and C-13 NMR. We have performed UV-vis absorption, fluorescence, and cyclic voltammetry measurements on these materials. The spectroscopic and electrochemical measurements proved beyond doubt that these materials possess a low excitation gap and are suitable as active materials in various electronic devices. We have also performed electronic structure calculations using density functional theory (DFT) and INDO/SCI methods to characterize the ground and excited states of this class of molecules. These donor-acceptor molecules show a strong charge transfer character that increases with the number of thiophene rings coupled to the malononitrile acceptor moiety. We have also calculated the pi-coherence length, Stokes shift, and effect of solvents on excited states for this class of molecules. Our theoretical values agree well with experimental results.
Abstract:
Nucleation is the first step of a first order phase transition; a new phase always springs up in nucleation phenomena. The two main categories of nucleation are homogeneous nucleation, where the new phase is formed in a uniform substance, and heterogeneous nucleation, where nucleation occurs on a pre-existing surface. In this thesis the main attention is paid to heterogeneous nucleation. The thesis approaches nucleation phenomena from two theoretical perspectives: the classical nucleation theory and the statistical mechanical approach. The formulation of the classical nucleation theory relies on equilibrium thermodynamics and the use of macroscopically determined quantities to describe the properties of small nuclei, sometimes consisting of just a few molecules. The statistical mechanical approach is based on interactions between single molecules, and does not carry the same assumptions as the classical theory. This work gathers up the present theoretical knowledge of heterogeneous nucleation and utilizes it in computational model studies. A new exact molecular approach to heterogeneous nucleation was introduced and tested by Monte Carlo simulations. The results obtained from the molecular simulations were interpreted by means of the concepts of the classical nucleation theory. Numerical calculations were carried out for a variety of substances nucleating on different substrates. The classical theory of heterogeneous nucleation was employed in calculations of one-component nucleation of water on newsprint paper, Teflon and cellulose film, and binary nucleation of water-n-propanol and water-sulphuric acid mixtures on silver nanoparticles. The results were compared with experimental results. The molecular simulation studies involved homogeneous nucleation of argon and heterogeneous nucleation of argon on a planar platinum surface.
It was found that the use of a microscopic contact angle as a fitting parameter in calculations based on the classical theory of heterogeneous nucleation leads to a fair agreement between the theoretical predictions and experimental results. In the presented cases the microscopic angle was found to be always smaller than the contact angle obtained from macroscopic measurements. Furthermore, molecular Monte Carlo simulations revealed that the concept of the geometrical contact parameter in heterogeneous nucleation calculations can work surprisingly well even for very small clusters.
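The role the contact angle plays in such fits can be illustrated with the textbook geometric factor of classical heterogeneous nucleation on a flat substrate (a standard formula, not taken from the thesis itself): the free-energy barrier equals f(m) times the homogeneous barrier, with f(m) = (2 + m)(1 - m)^2 / 4 and m = cos(theta).

```python
import math

def geometric_factor(theta_deg):
    """Classical reduction factor f(m) = (2+m)(1-m)^2/4, m = cos(theta),
    for heterogeneous nucleation on a flat substrate."""
    m = math.cos(math.radians(theta_deg))
    return (2 + m) * (1 - m) ** 2 / 4

for theta in (0, 30, 90, 180):
    print(theta, round(geometric_factor(theta), 4))
```

A smaller contact angle means a lower barrier: f tends to 0 as theta tends to 0 (complete wetting) and to 1 at theta = 180 degrees (the homogeneous limit), which is why fitting a smaller microscopic angle raises the predicted nucleation rate.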
Abstract:
Experimental charge density distributions in two known conformational polymorphs (orange and yellow) of the coumarin 314 dye are analyzed based on multipole modeling of X-ray diffraction data collected at 100 K. The experimental results are compared with the charge densities derived from multipole modeling of theoretical structure factors obtained from periodic quantum calculations with the density functional theory (DFT) method at the B3LYP/6-31G(d,p) level of theory. The presence of disorder at the carbonyl oxygen atom of the ethoxycarbonyl group in the yellow form, which was not identified earlier, is addressed here. The investigation of intermolecular interactions, based on Hirshfeld surface analysis, topological properties via the quantum theory of atoms in molecules, and total electrostatic interaction energies, revealed significant differences between the polymorphs. The differences of electrostatic nature in these two polymorphic forms were unveiled via construction of three-dimensional deformation electrostatic potential maps plotted over the molecular surfaces. The lattice energies evaluated from ab initio calculations on the two polymorphic forms indicate that the yellow form is likely to be the thermodynamically more favorable one. The dipole moments derived from experimental and theoretical charge densities, and also from the Lorentz tensor approach, are compared with the single-molecule dipole moments. In each case, the differences of dipole moments between the polymorphs are identified.
Abstract:
This study offers a reconstruction and critical evaluation of globalization theory, a perspective that has been central for sociology and cultural studies in recent decades, from the viewpoint of media and communications. As the study shows, sociological and cultural globalization theorists rely heavily on arguments concerning media and communications, especially the so-called new information and communication technologies, in the construction of their frameworks. Together with deepening the understanding of globalization theory, the study gives new critical knowledge of the problematic consequences that follow from such strong investment in media and communications in contemporary theory. The book is divided into four parts. The first part presents the research problem, the approach and the theoretical contexts of the study. Following the introduction in Chapter 1, I identify the core elements of globalization theory in Chapter 2. At the heart of globalization theory is the claim that recent decades have witnessed massive changes in the spatio-temporal constitution of society, caused by new media and communications in particular, and that these changes necessitate the rethinking of the foundations of social theory as a whole. Chapter 3 introduces three paradigms of media research (the political economy of media, cultural studies and medium theory), the discussion of which will make it easier to understand the key issues and controversies that emerge in academic globalization theorists' treatment of media and communications. The next two parts offer a close reading of four theorists whose works I use as entry points into academic debates on globalization. I argue that we can make sense of mainstream positions on globalization by dividing them into two paradigms: on the one hand, media-technological explanations of globalization and, on the other, cultural globalization theory.
As examples of the former, I discuss the works of Manuel Castells (Chapter 4) and Scott Lash (Chapter 5). I maintain that their analyses of globalization processes are overtly media-centric and result in an unhistorical and uncritical understanding of social power in an era of capitalist globalization. A related evaluation of the second paradigm (cultural globalization theory), as exemplified by Arjun Appadurai and John Tomlinson, is presented in Chapter 6. I argue that due to their rejection of the importance of nation states and the notion of cultural imperialism for cultural analysis, and their replacement with a framework of media-generated deterritorializations and flows, these theorists underplay the importance of the neoliberalization of cultures throughout the world. The fourth part (Chapter 7) presents a central research finding of this study, namely that the media-centrism of globalization theory can be understood in the context of the emergence of neoliberalism. I find it problematic that at the same time when capitalist dynamics have been strengthened in social and cultural life, advocates of globalization theory have directed attention to media-technological changes and their sweeping socio-cultural consequences, instead of analyzing the powerful material forces that shape the society and the culture. I further argue that this shift serves not only analytical but also utopian functions, that is, the longing for a better world in times when such longing is otherwise considered impracticable.
Abstract:
This study examines the properties of Generalised Regression (GREG) estimators for domain class frequencies and proportions. The family of GREG estimators forms the class of design-based, model-assisted estimators. All GREG estimators utilise auxiliary information via modelling. The classic GREG estimator with a linear fixed-effects assisting model (GREG-lin) is one example. But when estimating class frequencies, the study variable is binary or polytomous, so logistic-type assisting models (e.g. logistic or probit models) should be preferred over the linear one. However, GREG estimators other than GREG-lin are rarely used, and knowledge about their properties is limited. This study examines the properties of L-GREG estimators, which are GREG estimators with fixed-effects logistic-type models. Three research questions are addressed. First, I study whether and when L-GREG estimators are more accurate than GREG-lin. Theoretical results and Monte Carlo experiments, which cover both equal and unequal probability sampling designs and a wide variety of model formulations, show that in standard situations the difference between L-GREG and GREG-lin is small. But in the case of a strong assisting model, two interesting situations arise: if the domain sample size is reasonably large, L-GREG is more accurate than GREG-lin, and if the domain sample size is very small, estimation of the assisting model parameters may be inaccurate, resulting in bias for L-GREG. Second, I study variance estimation for the L-GREG estimators. The standard variance estimator (S) for all GREG estimators resembles the Sen-Yates-Grundy variance estimator, but it is a double sum of prediction errors, not of the observed values of the study variable. Monte Carlo experiments show that S underestimates the variance of L-GREG especially if the domain sample size is small, or if the assisting model is strong.
Third, since the standard variance estimator S often fails for the L-GREG estimators, I propose a new augmented variance estimator (A). The difference between S and the new estimator A is that the latter takes into account the difference between the sample fit model and the census fit model. In Monte Carlo experiments, the new estimator A outperformed the standard estimator S in terms of bias, root mean square error and coverage rate. Thus the new estimator provides a good alternative to the standard estimator.
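A minimal sketch of the GREG-lin idea for a population total may help fix intuition (the binary class-frequency case works the same way with y as a class indicator). The simple-random-sampling design, the data and all names below are illustrative assumptions, not the study's setup:

```python
import numpy as np

def greg_lin_total(y_s, X_s, w_s, X_pop_total):
    """Classic GREG estimator of a population total with a linear
    assisting model: fit y ~ X by design-weighted least squares on the
    sample, then correct the Horvitz-Thompson estimator with the model.
    X_pop_total holds the known population totals of the auxiliary
    variables (including the intercept column)."""
    W = np.diag(w_s)
    beta = np.linalg.solve(X_s.T @ W @ X_s, X_s.T @ W @ y_s)
    ht = np.sum(w_s * y_s)                        # Horvitz-Thompson part
    correction = (X_pop_total - w_s @ X_s) @ beta  # calibration term
    return ht + correction

# toy illustration with made-up data
rng = np.random.default_rng(1)
N, n = 1000, 100
x = rng.uniform(0, 10, N)
y = 2 + 3 * x + rng.standard_normal(N)            # strong linear model
idx = rng.choice(N, n, replace=False)
w = np.full(n, N / n)                             # SRS design weights
X_s = np.column_stack([np.ones(n), x[idx]])
X_tot = np.array([N, x.sum()])
print(greg_lin_total(y[idx], X_s, w, X_tot), y.sum())
```

With a strong assisting model the correction term absorbs most of the sampling error, which is exactly the regime in which the abstract reports the difference between L-GREG and GREG-lin becoming visible.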
Abstract:
Asian elephants (Elephas maximus), a prominent ``flagship species'', are listed under the category of endangered species (EN - A2c, ver. 3.1, IUCN Red List 2009), and there is a need for their conservation. This requires understanding the demographic and reproductive dynamics of the species. Monitoring the reproductive status of any species is traditionally carried out through invasive blood sampling, and this is restrictive for large animals such as wild or semi-captive elephants due to legal, ethical, and practical reasons. Hence, there is a need for a non-invasive technique to assess the reproductive cyclicity profiles of elephants, which will help in the species' conservation strategies. In this study, we developed an indirect competitive enzyme-linked immuno-sorbent assay (ELISA) to estimate the concentration of one of the progesterone metabolites, i.e., allopregnanolone (5 alpha-P-3OH), in fecal samples of Asian elephants. We validated the assay, which had a sensitivity of 0.25 mu M at 90% binding with an EC50 value of 1.37 mu M. Using female elephants kept under semi-captive conditions in the forest camps of Mudumalai Wildlife Sanctuary, Tamil Nadu and Bandipur National Park, Karnataka, India, we measured fecal progesterone-metabolite (5 alpha-P-3OH) concentrations in six animals and showed their clear correlation with those of serum progesterone measured by a standard radio-immuno assay. Statistical analyses using a linear mixed effect model showed a positive correlation (P < 0.1) between the profiles of fecal 5 alpha-P-3OH (range: 0.5-10 mu g/g) and serum progesterone (range: 0.1-1.8 ng/mL). Therefore, our studies show, for the first time, that the fecal progesterone-metabolite assay could be exploited to predict estrus cyclicity and to potentially assess the reproductive status of captive and free-ranging female Asian elephants, thereby helping to plan their breeding strategy. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
"Trust and Collectives" is a compilation of articles: (I) "On Rational Trust" (in Meggle, G. (ed.), Social Facts & Collective Intentionality, Dr. Hänsel-Hohenhausen AG (currently Ontos), 2002), (II) "Simulating Rational Social Normative Trust, Predictive Trust, and Predictive Reliance Between Agents" (M. Tuomela and S. Hofmann, Ethics and Information Technology 5, 2003), (III) "A Collective's Trust in a Collective's Action" (Protosociology, 18-19, 2003), and (IV) "Cooperation and Trust in Group Contexts" (R. Tuomela and M. Tuomela, Mind and Society 4/1, 2005). The articles are tied together by an introduction that dwells deeply on the topic of trust. (I) presents a somewhat general version of (RSNTR) and some basic arguments. (II) offers an application of (RSNTR) for a computer simulation of trust. (III) applies (RSNTR) to Raimo Tuomela's "we-mode" collectives (i.e. The Philosophy of Social Practices, Cambridge University Press, 2002). (IV) analyzes cooperation and trust in the context of acting as a member of a collective. Thus, (IV) elaborates on the topic of collective agency in (III) and puts the trust account (RSNTR) to work in a framework of cooperation. The central aim of this work is to construct a well-argued conceptual and theoretical account of rational trust, viz. a person's subjectively rational trust in another person vis-à-vis his performance of an action, seen from a first-person point of view. The main method is conceptual and theoretical analysis understood along the lines of reflective equilibrium. The account of rational social normative trust (RSNTR), which is argued for and defended against other views, is the result of the quest. The introduction stands on its own legs as an argued presentation of an analysis of the concept of rational trust and an analysis of trust itself (RSNTR). It is claimed that (RSNTR) is "genuine" trust and embedded in a relationship of mutual respect for the rights of the other party.
This relationship is the growing site for trust, a causal and conceptual ground, but it is not taken as a reason for trusting (viz. predictive "trust"). Relevant themes such as risk, decision, rationality, control, and cooperation are discussed, and the topics of the articles are briefly presented. In this work it is argued that genuine trust is to be kept apart from predictive "trust." When we trust a person vis-à-vis his future action that concerns ourselves on the basis of his personal traits and/or features of the specific situation, we have a prediction-like attitude. Genuine trust develops in a relationship of mutual respect for the mutual rights of the other party. Such a relationship is formed through interaction where the parties gradually find harmony concerning "the rules of the game." The trust account stands as a contribution to philosophical research on central social notions, and it could be used as a theoretical model in social psychology, economics and political science, where interaction between persons and groups is in focus. The analysis could also serve as a model for a trust component in computer simulations of human action. In the context of everyday life the account clarifies the difference between predictive "trust" and genuine trust. There are no fast shortcuts to trust. Experiences of mutual respect for mutual rights cannot be had unless there is respect.
Abstract:
Masonry strength is dependent upon the characteristics of the masonry unit, the mortar and the bond between them. Empirical formulae as well as analytical and finite element (FE) models have been developed to predict the structural behaviour of masonry. This paper is focused on developing a three-dimensional non-linear FE model based on a micro-modelling approach to predict masonry prism compressive strength and crack pattern. The proposed FE model uses multi-linear stress-strain relationships to model the non-linear behaviour of the solid masonry unit and the mortar. Willam-Warnke's five-parameter failure theory, developed for modelling the tri-axial behaviour of concrete, has been adopted to model the failure of the masonry materials. The post-failure regime has been modelled by applying orthotropic constitutive equations based on the smeared crack approach. The compressive strength of the masonry prism predicted by the proposed FE model has been compared with experimental values as well as with the values predicted by other failure theories and the Eurocode formula. The crack pattern predicted by the FE model shows vertical splitting cracks in the prism. The FE model predicts the ultimate failure compressive stress close to 85% of the mean experimental compressive strength value.
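The multi-linear stress-strain idealisation mentioned above amounts to piecewise-linear interpolation between calibrated breakpoints; the breakpoint values below are made-up placeholders, not the calibrated curves for the units or mortar in the paper:

```python
import numpy as np

# illustrative multi-linear compressive stress-strain breakpoints for a
# masonry material (strain, stress in MPa); placeholder values only
strain_pts = np.array([0.0, 0.001, 0.002, 0.0035])
stress_pts = np.array([0.0, 8.0, 12.0, 10.0])  # softening after the peak

def stress(strain):
    """Piecewise-linear (multi-linear) stress for a given strain, the
    form used to represent non-linear material behaviour in FE models."""
    return np.interp(strain, strain_pts, stress_pts)

print(stress(0.0015))  # midpoint of the 8 -> 12 MPa segment, about 10 MPa
```

In an FE analysis the solver queries such a curve (or its tangent modulus) at every integration point and iteration, so the material non-linearity reduces to table look-ups like this.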
Abstract:
The purpose of this study is to examine how transformation is defining feminist bioethics and to determine the nature of this transformation. Behind the quest for transformation is core feminism and its political implications, namely, that women and other marginalized groups have been given unequal consideration in society and the sciences and that this situation is unacceptable and should be remedied. The goal of the dissertation is to determine how feminist bioethicists integrate the transformation into their respective fields and how they apply the potential of feminism to bioethical theories and practice. On a theoretical level, feminist bioethicists wish to reveal how current ways of knowing are based on inequality. Feminists pay attention especially to communal and political contexts and to the power relations endorsed by each community. In addition, feminist bioethicists endorse relational ethics, a relational account of the self in which the interconnectedness of persons is important. On the conceptual level, feminist bioethicists work with the beliefs, concepts, and practices that give us our world. As an example, I examine how feminist bioethicists have criticized and redefined the concept of autonomy. Feminist bioethicists emphasize relational autonomy, which is based on the conviction that social relationships shape moral identities and values. On the practical level, I discuss stem cell research as a test case for feminist bioethics and its ability to employ its methodologies. Analyzing these perspectives allowed me, first, to compare non-feminist and feminist accounts of stem cell ethics and, second, to analyze feminist perspectives on this novel biotechnology. Along with offering a critical evaluation of the stem cell debate, the study shows that sustainable stem cell policies should be grounded on empirical knowledge about how donors perceive stem cell research and the donation process.
The study indicates that feminist bioethics should develop the use of empirical bioethics, which takes the nature of ethics seriously: ethical decisions are provisional and open for further consideration. In addition, the study shows that there is another area of development in feminist bioethics: the understanding of (moral) agency. I argue that agency should be understood to mean that actions create desires.
Abstract:
We present a new computationally efficient method for large-scale polypeptide folding using coarse-grained elastic networks and gradient-based continuous optimization techniques. The folding is governed by minimization of an energy based on Miyazawa–Jernigan contact potentials. Using this method we are able to substantially reduce the computation time on ordinary desktop computers for simulation of polypeptide folding starting from a fully unfolded state. We compare our results with available native state structures from the Protein Data Bank (PDB) for a few de-novo proteins and two natural proteins, ubiquitin and lysozyme. Based on our simulations we are able to draw the energy landscape for a small de-novo protein, chignolin. We also use two well-known software packages, MODELLER and GROMACS, to compare our results. Finally, we show how a modification of the normal elastic network model can lead to higher accuracy and a lower simulation time.
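The energy-minimization idea can be caricatured in a few lines: a chain with harmonic virtual bonds plus smoothed pairwise contact terms, driven downhill by gradient descent. The contact strengths, bond parameters and finite-difference gradient below are all illustrative stand-ins for the Miyazawa–Jernigan potentials and the gradient-based optimizers of the study:

```python
import numpy as np

def energy(coords, contacts, k_bond=10.0, r0=3.8, r_c=6.5, w=0.5):
    """Toy coarse-grained energy: harmonic virtual bonds between
    consecutive residues plus smoothed pairwise contact terms whose
    strengths stand in for Miyazawa-Jernigan contact potentials."""
    e = 0.0
    n = len(coords)
    for i in range(n - 1):                       # chain connectivity
        d = np.linalg.norm(coords[i + 1] - coords[i])
        e += 0.5 * k_bond * (d - r0) ** 2
    for i in range(n):
        for j in range(i + 2, n):                # non-bonded pairs
            d = np.linalg.norm(coords[j] - coords[i])
            e += contacts[i, j] / (1.0 + np.exp((d - r_c) / w))
    return e

def minimize(coords, contacts, steps=200, lr=0.01, h=1e-4):
    """Plain gradient descent with a finite-difference gradient,
    standing in for the continuous optimizers used in the study."""
    x = coords.copy()
    for _ in range(steps):
        g = np.zeros_like(x)
        for idx in np.ndindex(x.shape):
            xp, xm = x.copy(), x.copy()
            xp[idx] += h
            xm[idx] -= h
            g[idx] = (energy(xp, contacts) - energy(xm, contacts)) / (2 * h)
        x -= lr * g
    return x

rng = np.random.default_rng(2)
n = 8                                            # tiny toy "polypeptide"
coords = rng.uniform(0.0, 10.0, (n, 3))          # fully unfolded start
contacts = -np.abs(rng.standard_normal((n, n)))  # attractive placeholders
folded = minimize(coords, contacts)
print(energy(coords, contacts), energy(folded, contacts))
```

Because the contact terms are smooth rather than hard cut-offs, the energy is differentiable everywhere, which is what makes gradient-based (rather than Monte Carlo) folding feasible; a real implementation would use analytical gradients and a proper optimizer instead of finite differences.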
Abstract:
The X-ray structure and electron density distribution of ethane-1,2-diol (ethylene glycol), obtained at a resolution extending to 1.00 Å⁻¹ in sin θ/λ (data completeness = 100% at 100 K) by an in situ cryocrystallization technique, are reported. The diol is in the gauche (g′Gt) conformation, with the crystal structure stabilised by a network of inter-molecular hydrogen bonds. In addition to the well-recognized O–H···O hydrogen bonds, there is topological evidence for C–H···O inter-molecular interactions. There is no experimental electron density based topological evidence for the occurrence of an intra-molecular hydrogen bond. The O···H spacing is ~0.45 Å greater than in the gas phase, with an O–H···O angle close to 90°, calling into question the general assumption that the gauche conformation of ethane-1,2-diol is stabilised by the intra-molecular oxygen–hydrogen interaction.