990 results for Cenozoic covers


Abstract:

The driving force behind this study is the gap between the reality of firms engaged in project business and the available studies covering project management and business-process development. Previous studies show that project-based organizations were 'immature' in terms of the project-management 'maturity model', as few firms were found to be optimizing processes, and even within those, very little attention was paid to combining inter-organizational and intra-organizational perspectives. In this study an effort is made to elaborate some thoughts and views on project management that interrelate firms' external and internal activities. In line with this integration, the dissertation uses an approach to the management of project-business interdependencies in networks of actors, activities and resources. Firstly, the study develops an understanding of inter-organizational perspectives by exploring the complementarities of process activities in the basic development of project business. It presents a framework elaborated on the basis of the reciprocal interactions of activities within and outside the organization, thus providing a coherent basis for continuous business-process improvement. In addition, the study presents new tools that can be used to develop project-business processes in each of its functional areas. The research demonstrates how project-business activities can be optimized using the right resources at the right time with the right actors and the right actions. The five articles selected for this dissertation explain the basic framework for the development of project business; each paper covers various aspects of the inter-organizational and intra-organizational perspectives on project management. The study develops a valuable procedural model for business-process improvement using the Delphi method, one that can be used not only in academia but also as a guide that takes practitioners through a series of well-defined steps when making informed, consistent and efficient changes to their business processes.

Abstract:

The occurrence of gestational diabetes (GDM) during pregnancy is a powerful sign of a risk of later type 2 diabetes (T2D) and cardiovascular diseases (CVDs). The physiological basis for this disease progression is not yet fully understood, but increasing evidence exists on the interplay of insulin resistance, subclinical inflammation and, more recently, an imbalance of the autonomic nervous system. Since the delay in the development of T2D and CVD after GDM ranges from years to decades, a better understanding of the pathophysiology of GDM could give us new tools for primary prevention. The present study was aimed at investigating the role of the sympathetic nervous system (SNS) in GDM and its associations with insulin and a variety of inflammatory cytokines and coagulation and fibrinolysis markers. This thesis covers two separate study lines. Firstly, we investigated 41 women with GDM, 22 healthy pregnant controls and 14 non-pregnant controls during the night in hospital. Blood samples were drawn at 24:00, 4:00 and 7:00 h to determine the concentrations of plasma glucose, insulin, noradrenaline (NA) and adrenomedullin, markers of subclinical inflammation, coagulation and fibrinolysis variables, and platelet function. An overnight Holter ECG recording was performed for analysis of heart rate variability (HRV). Secondly, we studied 87 overweight hypertensive women with natural menopause. They were randomised to use either the central sympatholytic agent moxonidine (0.3 mg twice daily) or the β-blocking agent atenolol (50 mg once daily + placebo once daily) for 8 weeks. Inflammatory markers and adiponectin were analysed at the beginning and after 8 weeks. Activation of the SNS (increase in NA, decreased HRV) was seen in pregnant vs. non-pregnant women, but no difference existed between GDM and normal pregnancy. However, modulation (internal rhythm) of HRV was attenuated in GDM. Insulin and inflammatory cytokine levels were comparable in all pregnant women, but the nocturnal variation in the concentrations of C-reactive protein, serum amyloid A and insulin was reduced in GDM. Levels of coagulation factor VIII were lower in GDM compared with normal pregnancy, whereas no other differences were seen in coagulation and fibrinolysis markers. No significant associations were seen between NA and the studied parameters. In the study of postmenopausal women, moxonidine treatment was associated with favourable changes in the inflammatory profile, seen as a decrease in TNFα concentrations (versus an increase in the atenolol group) and preservation of adiponectin levels (versus a decrease in the atenolol group). In conclusion, our results did not support our hypotheses of increased SNS activity in GDM or a marked association between NA and inflammatory and coagulation markers. The reduced biological variation of HRV, insulin and inflammatory cytokines suggests a disturbance of autonomic and hormonal regulatory mechanisms in GDM. This is a novel finding. Further understanding of these regulatory mechanisms could allow earlier detection of women at risk and the possibility of prevention. In addition, our results support consideration of the SNS as one of the therapeutic targets in the battle against metabolic diseases, including T2D and CVD.

Abstract:

This study is divided into two parts: a methodological part and a part that focuses on the saving of households. In the 1950s both the concepts and the household surveys themselves went through a rapid change. The development of national accounts was motivated by Keynesian theory, and the 1940s and 1950s were an important time for that development. Before this, saving was understood as cash money or money deposited in bank accounts, but the changes of this era led to the establishment of the modern saving concept. Separately from the development of national accounts, household surveys were established. Household surveys have been conducted in Finland from the beginning of the 20th century. At that time surveys were conducted in order to observe the working-class living standard and, as a result, they were based on the tradition of welfare studies. A further motivation for undertaking the studies was to estimate weights for the consumer price index. A final reason underpinning the government's interest in observing these data was whether there were any reasons for the working class to become radicalised and therefore adopt revolutionary ideas. As the need for economic analysis increased and the data requirements underlying the political decision-making process also expanded, the two traditions, and thus the two data sources, started to integrate. In the 1950s the household surveys were compiled distinctly from the national accounts and were virtually unaffected by economic theory. The 1966 survey was the first study that was clearly motivated by national accounts and saving analysis. This study also covered the whole population rather than being limited to just part of it. It is essential to note that the integration of these two traditions is still continuing. This recently took a big step forward when the Stiglitz, Sen and Fitoussi Committee Report was introduced and the criticism of the current measure of welfare was thus taken seriously. The Stiglitz report emphasises that the focus in the measurement of welfare should be on households and that the macro as well as the micro perspective should be included in the analysis. In this study the national accounts are applied to household survey data from the years 1950-51, 1955-56 and 1959-60. The first two studies cover the working population of towns and market towns, and the last survey covers the population of rural areas. The analysis is performed at three levels: the macroeconomic level, the meso level, i.e. the level of different types of households, and the micro level, i.e. the level of individual households. As a result it analyses how different households saved and consumed and how this changed during the 1950s.

Abstract:

Ever since its initial introduction some fifty years ago, the rational expectations paradigm has dominated the way economic theory handles uncertainty. The main assertion made by John F. Muth (1961), seen by many as the father of the paradigm, is that the expectations of rational economic agents should essentially be equal to the predictions of relevant economic theory, since rational agents should use the information available to them in an optimal way. This assumption often has important consequences for the results and interpretations of the models in which it is applied. Although the rational expectations assumption can be applied to virtually any economic theory, the focus in this thesis is on macroeconomic theories of consumption, especially the Rational Expectations–Permanent Income Hypothesis proposed by Robert E. Hall in 1978. The much-debated theory suggests that, assuming agents have rational expectations about their future income, consumption decisions should follow a random walk, and the best forecast of the future consumption level is the current consumption level; changes in consumption are then unforecastable. This thesis constructs an empirical test for the Rational Expectations–Permanent Income Hypothesis using Finnish Consumer Survey data as well as various Finnish macroeconomic data. The data sample covers the years 1995–2010. Consumer survey data may be interpreted as directly representing household expectations, which makes it an interesting tool for this particular test. The variable to be predicted is the growth of total household consumption expenditure. The main empirical result is that the Consumer Confidence Index (CCI), a balance figure computed from the most important consumer survey responses, does have statistically significant predictive power over the change in total consumption expenditure. The history of consumption expenditure growth itself, however, fails to predict its own future values. This indicates that the CCI contains some information that the history of consumption decisions does not, and that consumption decisions are not optimal in the theoretical context. However, when conditioned on various macroeconomic variables, the CCI loses its predictive ability. This finding suggests that the index is merely a (partial) summary of macroeconomic information and does not contain any significant private information on the consumption intentions of households that is not directly deducible from the objective economic variables. In conclusion, the Rational Expectations–Permanent Income Hypothesis is strongly rejected by the empirical results in this thesis. This result is in accordance with most earlier studies conducted on the topic.
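As a concrete illustration, a test of this kind can be sketched as a simple predictability regression: consumption growth regressed on its own lag and on the lagged CCI. The file and column names below (finnish_consumption_cci.csv, consumption, cci) are hypothetical placeholders, and the HAC lag choice is illustrative rather than the thesis's actual specification.

```python
# Minimal sketch of the predictability test described above. Under the
# Rational Expectations-Permanent Income Hypothesis neither regressor
# should have significant predictive power over consumption growth.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("finnish_consumption_cci.csv", parse_dates=["quarter"])
df["dc"] = df["consumption"].pct_change()   # growth of consumption expenditure
df["dc_lag"] = df["dc"].shift(1)            # own history of consumption growth
df["cci_lag"] = df["cci"].shift(1)          # lagged Consumer Confidence Index
df = df.dropna()

X = sm.add_constant(df[["dc_lag", "cci_lag"]])
fit = sm.OLS(df["dc"], X).fit(cov_type="HAC", cov_kwds={"maxlags": 4})
print(fit.summary())  # a significant cci_lag coefficient speaks against the REPIH
```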

Abstract:

The concept of sustainable fashion covers not only ecological and ethical matters in the fashion and textile industries but also cultural and social affairs, which are equally intertwined in this complex network. Sustainable fashion does not have one explicit or well-established definition; however, many researchers have discussed it from different perspectives. This study provides an overview of the principles, practices, possibilities and challenges concerning sustainable fashion, focusing particularly on the practical questions a designer faces. The aim of this study was to answer the following questions: What kinds of outlooks and practices are included in sustainable fashion? How could the principles of sustainable fashion be integrated into designing and making clothes? The qualitative study was carried out using the Grounded Theory method. The data consisted mainly of academic literature and communication with designers who practice sustainable fashion; in addition, several websites and journalistic articles were used. The data were analyzed by identifying and categorizing relevant concepts using the constant comparative method, i.e. examining the internal consistency of each category. The study established a core category around which all other categories are integrated. The emerging concepts were organized into a model that pieces together different ideas about sustainable fashion, namely, what results when the principles of sustainable development are applied to fashion practices. The category named Considered Take and Return is the core of the model. It consists of various design philosophies that form the basis of design practice, and thus it relates to all other categories. It is framed by the category of Attachment and Appreciation, which reflects the importance of sentiment in design practice, for example the significance of aesthetics. The categories especially linked to fashion are Materials, Treatments of Fabrics and Production Methods. The categories closely connected with sustainable development are Saving Resources, Societal Implications and Information Transparency. While the model depicts separate categories, the different segments are in close interaction. The objective of sustainable fashion is holistic and requires all of its sections to be taken into account.

Abstract:

Use of adverse drug combinations, abuse of medicinal drugs and substance abuse are considerable social problems that are difficult to study. Prescription-database studies may fail to capture factors such as the use of over-the-counter drugs and patient compliance, and spontaneous-reporting databases suffer from underreporting. Substance abuse and smoking studies may be impeded by poor participation rates and reliability. The Forensic Toxicology Unit at the University of Helsinki is the only laboratory in Finland that performs forensic toxicology related to cause-of-death investigations, comprising the analysis of over 6000 medico-legal cases yearly. The analysis repertoire covers the most commonly used drugs and drugs of abuse, and the ensuing database also contains background information and information extracted from the final death certificate. In this thesis, the data stored in this comprehensive post-mortem toxicology database were combined with additional metabolite and genotype analyses that were performed to complete the profile of selected cases. The incidence of drug combinations possessing serious adverse drug interactions was generally low (0.71%), but it was notable for the two individually studied drugs, the common anticoagulant warfarin (33%) and the new-generation antidepressant venlafaxine (46%). Serotonin toxicity and adverse cardiovascular effects were the most prominent possible adverse outcomes. However, the specific role of the suspected adverse drug combinations was rarely recognized in the death certificates. The frequency of bleeds was observed to be elevated when paracetamol and warfarin were used concomitantly. Pharmacogenetic factors did not play a major role in fatalities related to venlafaxine, but the presence of interacting drugs was more common in cases showing high venlafaxine concentrations. Nicotine findings in deceased young adults were roughly three times more prevalent than the smoking-frequency estimate for the living population. Contrary to previous studies, no difference in the proportion of suicides was observed between nicotine users and non-users. However, findings of abused substances, including abused prescription drugs, were more common in the nicotine-user group than in the non-user group. The results of the thesis are important for forensic and clinical medicine, as well as for public health. The possibility of drug interactions and pharmacogenetic issues should be taken into account in cause-of-death investigations, especially in unclear cases, suspected medical malpractice and cases where toxicological findings are scarce. Post-mortem toxicological epidemiology is a new field of research that can help to reveal problems in drug use and prescription practices.

Abstract:

The dissertation examines aspects of asymmetrical warfare in the war-making of the German military entrepreneur Ernst von Mansfeld during his involvement in the Thirty Years War. Due to the nature of the inquiry, which combines history with military-political theory, the methodological approach of the dissertation is interdisciplinary. The theoretical framework used is that of asymmetrical warfare. The primary sources used in the dissertation are mostly political pamphlets and newsletters. Other sources include letters, documents and contemporaneous chronicles. The secondary sources fall into two categories: literature on the history of the Thirty Years War, and textbooks covering the theory of asymmetrical warfare. The first category includes biographical works on Ernst von Mansfeld as well as general histories of the Thirty Years War and seventeenth-century warfare. The second category combines military theory and political science. The structure of the dissertation consists of eight lead chapters, including an introduction and conclusion. The introduction covers the theoretical approach and aims of the dissertation, and provides a brief overview of the sources and previous research on Ernst von Mansfeld and asymmetrical warfare in the Thirty Years War. The second chapter covers aspects of Mansfeld's asymmetrical warfare from the perspective of operational art. The third chapter investigates the illegal and immoral aspects of Mansfeld's war-making. The fourth chapter compares the differing methods by which Mansfeld and his enemies raised and financed their armies. The fifth chapter investigates Mansfeld's involvement in indirect warfare. The sixth chapter presents Mansfeld as an object and an agent of image and information war. The seventh chapter looks into the counter-reactions that Mansfeld's asymmetrical warfare provoked from his enemies. The eighth chapter offers a conclusion of the findings. The dissertation argues that asymmetrical warfare presented itself in all the aforementioned areas of Mansfeld's conduct during the Thirty Years War. The operational asymmetry arose from the freedom of movement that Mansfeld enjoyed while his enemies were constrained by the limits of positional warfare. As a non-state operator, Mansfeld was also free to flout the rules of seventeenth-century warfare, which his enemies could not do with equal ease. The raising and financing of military forces was another source of asymmetry, because the nature of early seventeenth-century warfare favoured private military entrepreneurs over embryonic fiscal-military states. The dissertation also argues that other powers fought their own asymmetrical and indirect wars against the Habsburgs through Mansfeld's agency. Image and information were asymmetrical weapons, both aimed against Mansfeld and utilized by him. Finally, Mansfeld's asymmetrical threat forced the Habsburgs to adapt to his methods, which ultimately led to the formation of a subcontracted Imperial Army under the management and leadership of Albrecht von Wallenstein. Mansfeld's asymmetrical warfare thus ultimately paved the way for the kind of state-monopolized, organised and symmetrical warfare that has prevailed from 1648 onwards. The conclusion is that Mansfeld's conduct in the Thirty Years War matched the criteria for asymmetrical warfare. While traditional historiography treated Mansfeld as an anomaly in the age of European state formation, his asymmetrical warfare has come to bear resemblance to contemporary conflicts in which nation states no longer hold the monopoly of violence.

Abstract:

A first comprehensive investigation on the deflagration of ammonium perchlorate (AP) in the subcritical regime, below the low pressure deflagration limit (LPL, 2.03 MPa) and christened regime I$^{\prime}$, is discussed using an elegant thermodynamic approach. In this regime, deflagration was effected by augmenting the initial temperature (T$_{0}$) of the AP strand and by adding fuels such as aliphatic dicarboxylic acids or polymers such as carboxy-terminated polybutadiene (CTPB). From this thermodynamic model, considering the dependence of the burning rate ($\dot{r}$) on pressure (P) and T$_{0}$, the true condensed (E$_{\text{s,c}}$) and gas phase (E$_{\text{s,g}}$) activation energies, just below and above the surface respectively, have been obtained, and the data clearly distinguish the deflagration mechanisms in regimes I$^{\prime}$ and I (2.03-6.08 MPa). The substantial reduction in E$_{\text{s,c}}$ in regime I$^{\prime}$, compared with regime I, is attributed to HClO$_{4}$-catalysed decomposition of AP. HClO$_{4}$ formation, which occurs only in regime I$^{\prime}$, promotes dent formation on the surface, as revealed by reflectance photomicrographs, in contrast to the smooth surface in regime I. The HClO$_{4}$ vapours in regime I$^{\prime}$ also catalyse the gas phase reactions and thus bring down E$_{\text{s,g}}$ too. The excess heat transferred to the surface from the gas phase is used to melt AP, and hence E$_{\text{s,c}}$ in regime I corresponds to melt AP decomposition. This is consistent with the similar variation observed for both the melt layer thickness and $\dot{r}$ as a function of P. Thermochemical calculations of the surface heat release support the thermodynamic model and reveal that AP sublimation reduces the required critical exothermicity of 1108.8 kJ kg$^{-1}$ at the surface; this accounts for AP not sustaining combustion in the subcritical regime I$^{\prime}$. Further support for the model comes from the temperature-time profiles of the combustion train of AP. The gas and condensed phase enthalpies derived from the profile agree excellently with those computed thermochemically. The $\sigma_{\text{p}}$ expressions derived from this model establish the mechanistic distinction between regimes I$^{\prime}$ and I and thus lend support to the thermodynamic model. On comparing the deflagration of strand against powder AP, the proposed thermodynamic model correctly predicts that the total enthalpy of the condensed and gas phases remains unaltered. However, 16% of AP particles undergo buoyant lifting into the gas phase in the `free board region' (FBR), and this renders the demarcation of the true surface difficult. The surface temperature T$_{\text{s}}$ is found to lie in the FBR, and due to this, in regime I$^{\prime}$, the E$_{\text{s,c}}$ of powder AP matches the E$_{\text{s,g}}$ of the pellet. The model was extended to AP/dicarboxylic acid and AP/CTPB mixtures. The condensed ($\Delta$H$_{1}$) and gas phase ($\Delta$H$_{2}$) enthalpies were obtained from the temperature profile analyses, and they fit well with those computed thermochemically. The $\Delta$H$_{1}$ of the AP/succinic acid mixture was found to be just at the threshold of sustaining combustion; indeed the lower homologue, malonic acid, as predicted, does not sustain combustion. With vaporizable fuels such as sebacic acid, the E$_{\text{s,c}}$ in regime I$^{\prime}$ understandably conforms to AP decomposition. However, the E$_{\text{s,c}}$ in the AP/CTPB system corresponds to the softening of the polymer, which covers the AP particles and promotes extensive condensed phase reactions. The proposed thermodynamic model also satisfactorily explains certain unique features such as intermittent, plateau and flameless combustion in AP/polymeric fuel systems.
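The abstract does not reproduce the functional forms used in the model, so the following is only the conventional definition of the burning-rate temperature sensitivity $\sigma_{\text{p}}$ and an Arrhenius-type surface pyrolysis law of the kind from which activation energies such as E$_{\text{s,c}}$ are typically extracted; the paper's actual $\sigma_{\text{p}}$ expressions may differ in detail:

\[
\sigma_{\text{p}} = \left(\frac{\partial \ln \dot{r}}{\partial T_{0}}\right)_{P},
\qquad
\dot{r} = A\,\exp\!\left(-\frac{E_{\text{s,c}}}{R\,T_{\text{s}}}\right),
\]

so that, under this assumed form, the slope of $\ln \dot{r}$ plotted against $1/T_{\text{s}}$ gives $-E_{\text{s,c}}/R$.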

Abstract:

The blood-brain barrier (BBB) is a unique barrier that strictly regulates the entry of endogenous substrates and xenobiotics into the brain, owing to its tight junctions and the array of transporters and metabolic enzymes it expresses. The determination of brain concentrations in vivo is difficult, laborious and expensive, which means there is interest in developing predictive tools of brain distribution. Predicting brain concentrations is important even in early drug development, to ensure the efficacy of central nervous system (CNS) targeted drugs and the safety of non-CNS drugs. The literature review covers the most common current in vitro, in vivo and in silico methods of studying transport into the brain, concentrating on transporter effects. The consequences of efflux mediated by P-glycoprotein, the most widely characterized transporter expressed at the BBB, are also discussed. The aim of the experimental study was to build a pharmacokinetic (PK) model to describe P-glycoprotein substrate drug concentrations in the brain using commonly measured in vivo parameters of brain distribution. The possibility of replacing in vivo parameter values with their in vitro counterparts was also studied. All data for the study were taken from the literature. A simple two-compartment PK model was built using the Stella™ software. Brain concentrations of morphine, loperamide and quinidine were simulated and compared with published studies. The correlation of in vitro measured efflux ratios (ER) across different studies was evaluated, in addition to the correlation between in vitro and in vivo measured ER. A Stella™ model was also constructed to simulate an in vitro transcellular monolayer experiment, in order to study the sensitivity of the measured ER to changes in passive permeability and Michaelis-Menten kinetic parameter values. Interspecies differences between rats and mice were investigated with regard to brain permeability and drug binding in brain tissue. The PK brain model was able to capture the concentration-time profiles of all three compounds in both brain and plasma and performed fairly well for morphine, but it underestimated brain concentrations for quinidine and overestimated them for loperamide. Because the ratio of concentrations in brain and blood depends on the ER, the variable values cited for this parameter, and its inaccuracy, could be one explanation for the failure of the predictions. Validation of the model with more compounds is needed to draw further conclusions. In vitro ER showed variable correlation between studies, indicating variability due to experimental factors such as test concentration, but overall the differences were small. Good correlation between in vitro and in vivo ER at low concentrations supports the possibility of using in vitro ER in the PK model. The in vitro simulation illustrated that, in the simulation setting, efflux is significant only with low passive permeability, which highlights the fact that the cell model used to measure ER must have low enough paracellular permeability to correctly mimic the in vivo situation.
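To make the model structure concrete, below is a minimal sketch of a two-compartment plasma-brain model in which P-glycoprotein efflux is folded into the brain-to-plasma clearance via the efflux ratio. This is an illustrative reduction, not the thesis's actual Stella™ model, and all parameter values are hypothetical placeholders.

```python
# Minimal two-compartment plasma-brain PK sketch: passive influx clearance
# CL_in into the brain, with the outward clearance scaled by the efflux
# ratio ER to represent P-glycoprotein-mediated efflux.
import numpy as np
from scipy.integrate import solve_ivp

V_p, V_b = 10.0, 1.0   # plasma and brain volumes (L), hypothetical
CL_e = 2.0             # systemic elimination clearance (L/h), hypothetical
CL_in = 0.5            # passive influx clearance into brain (L/h), hypothetical
ER = 4.0               # efflux ratio; ER > 1 means net efflux from brain

def rates(t, y):
    A_p, A_b = y                        # drug amounts in plasma and brain
    C_p, C_b = A_p / V_p, A_b / V_b     # concentrations
    dA_p = -CL_e * C_p - CL_in * C_p + CL_in * ER * C_b
    dA_b = CL_in * C_p - CL_in * ER * C_b
    return [dA_p, dA_b]

sol = solve_ivp(rates, (0.0, 24.0), [100.0, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 100)
C_p, C_b = sol.sol(t)[0] / V_p, sol.sol(t)[1] / V_b
# In the terminal phase the brain-to-plasma ratio approaches roughly 1/ER.
print("brain-to-plasma ratio at 24 h:", C_b[-1] / C_p[-1])
```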

Abstract:

This thesis explores selective migration in the Greater Helsinki region from the perspective of counterurbanisation. The aim of the study is to examine whether migration is selective by migrants' age, education, income level or rate of employment, and to study any regional patterns formed by the selectivity. In the Helsinki region, recent migratory developments have been shifting the areas of net migration gain away from the city of Helsinki to municipalities farther off in the former countryside. There has been discussion about Helsinki's decaying tax revenue base and whether the city's housing policy has contributed to the exodus of wealthier households. The central question of the discussion is one of selective migration: which municipalities succeed in capturing the most favourable migrants and which will lose in the competition. Selective migration means that a region's in-migrants and out-migrants differ significantly from each other demographically, socially and economically. Sometimes selectivity is also understood as some individuals' greater propensity to migrate than others', but the proper notion for this is differential migration. In Finnish parlance these two concepts have tended to get mixed up. The data of the study, produced by Statistics Finland, cover the total migration of the 34 municipalities of the Uusimaa provinces during the years 2001 to 2003. Two new methods of representing the selectivity of migration as a whole were constructed during the study. Both methods look at the proportions of favourably selected migrants in a region's inward and outward migrant flows: a large share in the inward flow and a small share in the outward flow is good for the region's economy and demography. The first method calculates the differences between the proportions of favourably selected migrants in the four migrant groups and sums the differences up. The other ranks the same proportions between regions, giving the value 1 to the largest proportion in the inward flow and 34 to the smallest, while in the outward flow the smallest proportion gets the value 1 and the largest 34. The total sum of the ranks, or of the differences in proportions, represents the region's selectivity of migration. The results show that migration is indeed selective in the Greater Helsinki region. There also seems to be a spatial pattern centred on the Helsinki metropolitan region. The municipalities surrounding the four central communes are generally better off than those farther away. Not only do the eight municipalities of the so-called capital region benefit from selective migration, but the favourable structure of migration extends to some of the small municipalities farther away. Some municipalities situated along the main northbound railway line are not coming through as well as the other municipalities of the capital region. The selectivity of migration in the Greater Helsinki region shows signs of counterurbanisation. People look for a suburban or small-town lifestyle no longer in Espoo or Vantaa, the neighbouring municipalities of Helsinki, but in the municipalities surrounding these two, or even farther off. This kind of pattern in selective migration leads to unbalanced development in the population structure and tax revenue base of the region. Migration to the outskirts of the urban area also leads to urban sprawl and fragmentation of the urban structure, and these issues have ecological implications. Selective migration should be studied more, the concept itself needs a clearer definition, and so do the methods used to study the selectivity of migration.
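As an illustration of the two scoring methods just described, here is a minimal sketch. The municipality names and proportions are hypothetical placeholders; the actual study ranks all 34 municipalities across the four favourably selected migrant groups.

```python
# Sketch of the two selectivity scores: (1) sum of differences between
# inward and outward shares of favourably selected migrants, and
# (2) sum of ranks (1 = best) over inward and outward flows.
import pandas as pd

# Rows = municipalities, columns = migrant groups; values are made up.
inward = pd.DataFrame(
    {"age": [0.42, 0.35, 0.50], "education": [0.30, 0.28, 0.41],
     "income": [0.25, 0.22, 0.33], "employment": [0.70, 0.66, 0.75]},
    index=["Espoo", "Vantaa", "Kauniainen"])
outward = pd.DataFrame(
    {"age": [0.40, 0.37, 0.45], "education": [0.29, 0.30, 0.35],
     "income": [0.24, 0.25, 0.28], "employment": [0.68, 0.69, 0.71]},
    index=["Espoo", "Vantaa", "Kauniainen"])

# Method 1: positive totals indicate a favourable migration structure.
score_diff = (inward - outward).sum(axis=1)

# Method 2: rank 1 = largest inward share / smallest outward share;
# lower rank totals indicate more favourable selectivity.
score_rank = (inward.rank(ascending=False) + outward.rank(ascending=True)).sum(axis=1)

print(score_diff.sort_values(ascending=False))
print(score_rank.sort_values())
```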

Abstract:

Short elliptical chamber mufflers are often used in modern automotive exhaust systems. The acoustic analysis of such short chamber mufflers is facilitated by considering a transverse plane wave propagation model along the major axis up to the low-frequency limit. The one-dimensional differential equation governing transverse plane wave propagation in such short chambers has been solved using segmentation approaches, which are inherently numerical schemes, wherein the transfer matrix relating the upstream state variables to the downstream variables is obtained. An analytical solution of the transverse plane wave model used to analyze such short chambers has not been reported in the literature so far. The present work is thus an attempt to fill this lacuna, whereby a Frobenius solution of the differential equation governing transverse plane wave propagation is obtained. By taking a sufficient number of terms of the infinite series, the approximate analytical solution so obtained shows good convergence up to about 1300 Hz and also covers most of the range of muffler dimensions used in practice. The transmission loss (TL) performance of the muffler configurations computed by this analytical approach agrees excellently with that computed by the Matrizant approach used earlier by the authors, thereby offering a faster and more elegant alternative method to analyze short elliptical muffler configurations. (C) 2010 Elsevier Ltd. All rights reserved.
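The governing one-dimensional differential equation is not reproduced in the abstract, so only the generic form of the Frobenius method can be sketched here. A series solution about a regular singular point $x = 0$ is sought as

\[
y(x) = x^{c}\sum_{n=0}^{\infty} a_{n}\,x^{n}, \qquad a_{0} \neq 0;
\]

substituting the series into the differential equation and equating coefficients of like powers of $x$ yields the indicial equation for the exponent $c$ and a recurrence relation for the coefficients $a_{n}$. Truncating the series after sufficiently many terms gives the approximate analytical solution whose convergence up to about 1300 Hz is reported above.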

Abstract:

Indian logic has a long history. It somewhat covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science that ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism. In other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages, the ancient, the medieval and the modern, spanning almost thirty centuries. Advances in Computer Science, in particular in Artificial Intelligence, have got researchers in these areas interested in the basic problems of language, logic and cognition over the past three decades. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is knowledge acquisition from humans who are experts in a branch of learning (such as medicine or law) and the transfer of that knowledge to a computing system. The second important issue in such systems is the validation of the knowledge base, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts he is currently using and attempting to develop.

Abstract:

The unique features of a macromolecule and of water as a solvent make the issue of solvation unconventional, raising questions about the static versus dynamic nature of hydration and the physics of orientational and translational diffusion at the boundary. For proteins, the hydration shell that covers the surface is critical to the stability of their structure and function. Dynamically speaking, the residence time of water at the surface is a signature of its mobility and binding. With femtosecond time resolution it is possible to unravel the shortest residence times, which are key for the description of the hydration layer, static or dynamic. In this article we review these issues guided by experimental studies, from this laboratory, of polar hydration dynamics at the surfaces of two proteins (Subtilisin Carlsberg (SC) and Monellin). The natural probe, the tryptophan amino acid, was used for the interrogation of the dynamics, and for direct comparison we also studied the behavior in bulk water, where hydration is complete in 1 ps. We develop a theoretical description of solvation and relate the theory to the experimental observations. In this theoretical approach, we consider the dynamical equilibrium in the hydration shell, defining the rate processes for breaking and making the transient hydrogen bonds, and the effective friction in the layer, which is defined by the translational and orientational motions of water molecules. The relationship between the residence time of water molecules and the observed slow component in solvation dynamics is a direct one. For the two proteins studied, we observed a "bimodal decay" of the hydration correlation function, with two primary relaxation times: an ultrafast one, typically 1 ps or less, and a longer one, typically 15-40 ps; both are related to the residence time at the protein surface, depending on the binding energies. We end by making extensions to studies of the denatured state of the protein, random coils, and biomimetic micelles, and conclude with our thoughts on the relevance of the dynamics of native structures to their functions.
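The "bimodal decay" described above corresponds to a hydration correlation function of biexponential form, C(t) = a1·exp(-t/τ1) + a2·exp(-t/τ2). A minimal sketch of recovering the two relaxation times from such data follows; the data points and amplitudes below are synthetic placeholders, not the measurements from the article.

```python
# Fit a biexponential hydration correlation function to (synthetic) data,
# recovering an ultrafast (~1 ps) and a slower (~15-40 ps) component.
import numpy as np
from scipy.optimize import curve_fit

def c_hyd(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.1, 100, 200)            # time in ps
data = c_hyd(t, 0.6, 0.8, 0.4, 25.0)      # synthetic "measurement"
data += np.random.default_rng(0).normal(0, 0.01, t.size)  # add noise

popt, _ = curve_fit(c_hyd, t, data, p0=[0.5, 1.0, 0.5, 20.0])
print("tau1 = %.2f ps, tau2 = %.1f ps" % (popt[1], popt[3]))
```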

Abstract:

The soft clay of Ariake Bay, in western Kyushu, Japan, covers several hundred square kilometers. Ariake clay consists of the principal clay minerals, namely smectite, illite, kaolinite and vermiculite, and other minerals in lesser quantity. The percentage of each principal clay mineral can vary significantly, and the percent clay-size fraction and the salt concentration can also vary significantly. In view of the importance of undrained shear strength in geotechnical engineering practice, its behavior has been studied with respect to variation in salt concentration. Basically, two mechanisms control the undrained strength in clays: (a) cohesion or undrained strength is due to the net interparticle attractive forces, or (b) cohesion is due to the viscous nature of the double-layer water. Concept (a) operates primarily for kaolinitic soils, and concept (b) dominates primarily for montmorillonitic soils. In Ariake clay, different clay minerals with different exchangeable cations, varying ion concentration in the pore water and varying non-clay-size fraction are present. In view of this, while both concepts (a) and (b) can coexist and operate simultaneously, one of the mechanisms dominates. For Isahaya clay, which follows concept (a), factors responsible for an increased level of flocculation and attractive forces result in higher undrained strength: an increase in salt concentration increases the remolded undrained strength at any moisture content. For Kubota and Kawazoe clays, which follow concept (b), factors responsible for an expansion of the diffuse double-layer thickness, resulting in higher viscous resistance, increase the undrained shear strength: that is, as the concentration decreases, the undrained strength increases at any moisture content. The liquid limit of Isahaya clay increases with an increase in ion concentration, while a marginal decrease is seen for both Kubota and Kawazoe clays; this behavior has been explained satisfactorily.

Abstract:

Over the last few decades, there has been significant land cover (LC) change across the globe due to the increasing demand of a burgeoning population and urban sprawl. In order to take account of this change, there is a need for accurate and up-to-date LC maps. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping to generate 1:50,000 maps. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and have better temporal (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (due to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing would be useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, with end-members extracted through the Pixel Purity Index (PPI), scatter plots and N-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, wasteland/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications.
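As an illustration of the linear mixture model underlying the technique evaluated above, here is a minimal sketch of fully constrained (non-negative, sum-to-one) unmixing of a single pixel. The end-member spectra and band values are hypothetical placeholders; the actual study extracted end-members with PPI, scatter plots and N-dimensional visualisation rather than specifying them by hand.

```python
# Linear spectral unmixing sketch: model a pixel spectrum as a
# non-negative, sum-to-one mixture of end-member spectra, solving
# with non-negative least squares plus a weighted sum-to-one row.
import numpy as np
from scipy.optimize import nnls

# Rows = spectral bands, columns = end-members
# (e.g. agriculture, built-up, forest, water); values are made up.
E = np.array([[0.05, 0.20, 0.03, 0.02],
              [0.30, 0.25, 0.35, 0.04],
              [0.45, 0.30, 0.25, 0.03],
              [0.20, 0.28, 0.15, 0.02]])
pixel = np.array([0.08, 0.28, 0.37, 0.20])   # observed pixel spectrum

w = 100.0                                    # weight enforcing sum-to-one
A = np.vstack([E, w * np.ones(E.shape[1])])
b = np.append(pixel, w)
abundances, residual = nnls(A, b)
print("abundance fractions:", abundances.round(3),
      "sum:", abundances.sum().round(3))
```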