924 results for Relativistic many-body perturbation theory
Abstract:
During the past two decades, chiral capillary electrophoresis (CE) emerged as a promising, effective and economic approach for the enantioselective determination of drugs and their metabolites in body fluids, tissues and in vitro preparations. This review discusses the principles and important aspects of CE-based chiral bioassays, provides a survey of the assays developed during the past 10 years and presents an overview of the key achievements encountered in that time period. Applications discussed encompass the pharmacokinetics of drug enantiomers in vivo and in vitro, the elucidation of the stereoselectivity of drug metabolism in vivo and in vitro, and bioanalysis of drug enantiomers of toxicological, forensic and doping interest. Chiral CE was extensively employed for research purposes to investigate the stereoselectivity associated with hydroxylation, dealkylation, carboxylation, sulfoxidation, N-oxidation and ketoreduction of drugs and metabolites. Enantioselective CE played a pivotal role in many biomedical studies, thereby providing new insights into the stereoselective metabolism of drugs in different species which might eventually lead to new strategies for optimization of pharmacotherapy in clinical practice.
Abstract:
Introduction: Advances in biotechnology have shed light on many biological processes. In biological networks, nodes are used to represent the function of individual entities within a system and have historically been studied in isolation. Network structure adds edges that enable communication between nodes. An emerging field combines node function and network structure to yield network function. One of the most complex networks known in biology is the neural network within the brain. Modeling neural function will require an understanding of networks, dynamics, and neurophysiology. It is with this work that modeling techniques will be developed to work at this complex intersection. Methods: Spatial game theory was developed by Nowak in the context of modeling evolutionary dynamics, or the way in which species evolve over time. Spatial game theory offers a two-dimensional view of analyzing the state of neighbors and updating based on the surroundings. Our work builds upon this foundation by studying evolutionary game theory networks with respect to neural networks. The novel concept is that neurons may adopt a particular strategy that will allow propagation of information. The strategy may therefore act as the mechanism for gating. Furthermore, the strategy of a neuron, as in a real brain, is impacted by the strategy of its neighbors. The techniques of spatial game theory already established by Nowak are repeated to explain two basic cases and validate the implementation of code. Two novel modifications are introduced in Chapters 3 and 4 that build on this network and may reflect neural networks. Results: The introduction of two novel modifications, mutation and rewiring, in large parametric studies resulted in dynamics with an intermediate number of nodes firing at any given time. Further, even small mutation rates result in different dynamics more representative of the hypothesized ideal state.
Conclusions: In both modifications to Nowak's model, the results demonstrate that the network does not become locked into a particular global state of passing all information or blocking all information. It is hypothesized that normal brain function occurs within this intermediate range and that a number of diseases are the result of moving outside of this range.
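The Nowak-style spatial game update described in the Methods (strategies on a 2D lattice, payoffs accumulated from neighbors, imitation of the best-scoring neighbor) can be sketched as below. The payoff convention and the temptation parameter `b` follow the standard Nowak-May spatial prisoner's dilemma and are assumptions, not the dissertation's exact implementation.

```python
import numpy as np

def step(grid, b):
    """One synchronous update of a Nowak-May style spatial game.
    grid: 2D array of 0 (defect) / 1 (cooperate); b: temptation payoff.
    Payoffs per encounter: C vs C -> 1, D vs C -> b, otherwise 0."""
    payoff = np.zeros(grid.shape, dtype=float)
    # accumulate payoff against the 8 neighbours (periodic boundaries)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nb = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
            payoff += np.where(grid == 1, nb * 1.0, nb * b)
    # each site copies the strategy of its highest-scoring neighbour,
    # keeping its own strategy on ties
    best = grid.copy()
    best_pay = payoff.copy()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nb_pay = np.roll(np.roll(payoff, dx, axis=0), dy, axis=1)
            nb_str = np.roll(np.roll(grid, dx, axis=0), dy, axis=1)
            better = nb_pay > best_pay
            best = np.where(better, nb_str, best)
            best_pay = np.where(better, nb_pay, best_pay)
    return best
```

The mutation modification discussed in the Results would amount to flipping each site's strategy with some small probability after the imitation step; rewiring would replace the fixed 8-neighbour lattice with an editable adjacency structure.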
Abstract:
This thesis investigates the boundaries between body and object in J.K. Rowling’s Harry Potter series, seven children’s literature novels published between 1997 and 2007. Lord Voldemort, Rowling’s villain, creates Horcruxes—objects that contain fragments of his soul—in order to ensure his immortality. As vessels for a human soul, these objects rupture the boundaries between body and object and become “things.” Using contemporary thing theorists including John Plotz and materialists Jean Baudrillard and Walter Benjamin, I look at Voldemort’s Horcruxes as transgressive, liminal, unclassifiable entities in the first chapter. If objects can occupy the juncture between body and object, then bodies can as well. Dementors and Inferi, dark creatures that Rowling introduces throughout the series, live devoid of soul. Voldemort, too, becomes a thing as he splits his soul and creates Horcruxes. These soulless bodies are uncanny entities, provoking fear, revulsion, nausea, and the loss of language. In the second chapter, I use Sigmund Freud’s theorization of the uncanny as well as literary critic Kelly Hurley to investigate how Dementors, Inferi, and Voldemort exist as body-turned-object things at the juncture between life and death. As Voldemort increasingly invests his immaterial soul into material objects, he physically and spiritually degenerates, transforming from the young, handsome Tom Marvolo Riddle into the snake-like villain that murdered Harry’s parents and countless others. During his quest to find and destroy Voldemort’s Horcruxes, Harry encounters a different type of object, the Deathly Hallows. Although similarly accessing boundaries between body/object, life/death, and materiality/immateriality, the three Deathly Hallows do not transgress these boundaries. Through the Deathly Hallows, Rowling provides an alternative to thingification: objects that enable boundaries to fluctuate, but not break down.
In the third chapter, I return to thing theorists, Baudrillard, and Benjamin to study how the Deathly Hallows resist thingification by not transgressing the boundaries between body and object.
Abstract:
Mr. Pechersky set out to examine a specific feature of the employer-employee relationship in Russian business organisations. He wanted to study to what extent the so-called "moral hazard" is being solved (if it is being solved at all), whether there is a relationship between pay and performance, and whether there is a correlation between economic theory and Russian reality. Finally, he set out to construct a model of the Russian economy that better reflects the way it actually functions than do certain other well-known models (for example models of incentive compensation, the Shapiro-Stiglitz model etc.). His report was presented to the RSS in the form of a series of manuscripts in English and Russian, and on disc, with many tables and graphs. He begins by pointing out the different examples of randomness that exist in the relationship between employee and employer. Firstly, results are frequently affected by circumstances outside the employee's control that have nothing to do with how intelligently, honestly, and diligently the employee has worked. When rewards are based on results, uncontrollable randomness in the employee's output induces randomness in their incomes. A second source of randomness involves the outside events that are beyond the control of the employee that may affect his or her ability to perform as contracted. A third source of randomness arises when the performance itself (rather than the result) is measured, and the performance evaluation procedures include random or subjective elements. Mr. Pechersky's study shows that in Russia the third source of randomness plays an important role. Moreover, he points out that employer-employee relationships in Russia are sometimes opposite to those in the West. Drawing on game theory, he characterises the Western system as follows. The two players are the principal and the agent, who are usually representative individuals. 
The principal hires an agent to perform a task, and the agent acquires an information advantage concerning his actions or the outside world at some point in the game, i.e. it is assumed that the employee is better informed. In Russia, on the other hand, incentive contracts are typically negotiated in situations in which the employer has the information advantage concerning outcome. Mr. Pechersky schematises it thus. Compensation (the wage) is W and consists of a base amount, plus a portion that varies with the outcome, x. So W = a + bx, where b is used to measure the intensity of the incentives provided to the employee. This means that one contract will be said to provide stronger incentives than another if it specifies a higher value for b. This is the incentive contract as it operates in the West. The key feature distinguishing the Russian example is that x is observed by the employer but is not observed by the employee. So the employer promises to pay in accordance with an incentive scheme, but since the outcome is not observable by the employee the contract cannot be enforced, and the question arises: is there any incentive for the employer to fulfil his or her promises? Mr. Pechersky considers two simple models of employer-employee relationships displaying the above type of information asymmetry. In a static framework the obtained result is somewhat surprising: at the Nash equilibrium the employer pays nothing, even though his objective function contains a quadratic term reflecting negative consequences for the employer if the actual level of compensation deviates from the expectations of the employee. This can lead, for example, to labour turnover, or the expenses resulting from a bad reputation. In a dynamic framework, the conclusion can be formulated as follows: the higher the discount factor, the higher the incentive for the employer to be honest in his/her relationships with the employee.
If the discount factor is taken to be a parameter reflecting the degree of (un)certainty (the higher the degree of uncertainty is, the lower is the discount factor), we can conclude that the answer to the formulated question depends on the stability of the political, social and economic situation in a country. Mr. Pechersky believes that the strength of a market system with private property lies not just in its providing the information needed to compute an efficient allocation of resources in an efficient manner. At least equally important is the manner in which it accepts individually self-interested behaviour, but then channels this behaviour in desired directions. People do not have to be cajoled, artificially induced, or forced to do their parts in a well-functioning market system. Instead, they are simply left to pursue their own objectives as they see fit. Under the right circumstances, people are led by Adam Smith's "invisible hand" of impersonal market forces to take the actions needed to achieve an efficient, co-ordinated pattern of choices. The problem is that, as Mr. Pechersky sees it, there is no reason to believe that the circumstances in Russia are right, and the invisible hand is doing its work properly. Political instability, social tension and other circumstances prevent it from doing so. Mr. Pechersky believes that the discount factor plays a crucial role in employer-employee relationships. Such relationships can be considered satisfactory from a normative point of view, only in those cases where the discount factor is sufficiently large. Unfortunately, in modern Russia the evidence points to the typical discount factor being relatively small. This fact can be explained as a manifestation of aversion to risk of economic agents. Mr. Pechersky hopes that when political stabilisation occurs, the discount factors of economic agents will increase, and the agent's behaviour will be explicable in terms of more traditional models.
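The incentive contract W = a + bx and the repeated-game honesty condition described above can be sketched numerically. The grim-trigger-style inequality and all payoff numbers below are illustrative assumptions in the spirit of the report, not values from Mr. Pechersky's manuscripts.

```python
def wage(a, b, x):
    """Incentive contract: base pay a plus a share b of the observed outcome x."""
    return a + b * x

def employer_honest(delta, gain_from_cheating, per_period_loss):
    """Repeated-game condition: paying honestly is sustainable when the
    discounted stream of future losses from a ruined reputation outweighs
    the one-shot gain from withholding the bonus:
        delta / (1 - delta) * per_period_loss >= gain_from_cheating
    A higher discount factor delta (more stable environment) makes
    honesty easier to sustain, matching the dynamic-framework conclusion."""
    return delta / (1 - delta) * per_period_loss >= gain_from_cheating
```

For example, with a one-shot gain of 50 and a per-period reputation loss of 10, an employer with delta = 0.9 pays as promised, while one with delta = 0.5 does not; this is the sense in which a low typical discount factor in Russia undermines incentive contracts.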
Abstract:
Recently the issue of radiative corrections to leptogenesis has been raised. Considering the "strong washout" regime, in which OPE techniques permit a streamlined setup, we report the thermal self-energy matrix of heavy right-handed neutrinos at NLO (resummed 2-loop level) in Standard Model couplings. The renormalized expression describes flavour transitions and "inclusive" decays of chemically decoupled right-handed neutrinos. Although CP-violation is not addressed, the result may find use in existing leptogenesis frameworks.
Abstract:
In many applications the observed data can be viewed as a censored high dimensional full data random variable X. By the curse of dimensionality it is typically not possible to construct estimators that are asymptotically efficient at every probability distribution in a semiparametric censored data model of such a high dimensional censored data structure. We provide a general method for construction of one-step estimators that are efficient at a chosen submodel of the full-data model, are still well behaved off this submodel and can be chosen to always improve on a given initial estimator. These one-step estimators rely on good estimators of the censoring mechanism and thus will require a parametric or semiparametric model for the censoring mechanism. We present a general theorem that provides a template for proving the desired asymptotic results. We illustrate the general one-step estimation methods by constructing locally efficient one-step estimators of marginal distributions and regression parameters with right-censored data, current status data and bivariate right-censored data, in all models allowing the presence of time-dependent covariates. The conditions of the asymptotic theorem are rigorously verified in one of the examples and the key condition of the general theorem is verified for all examples.
Abstract:
A body sensor network solution for personal healthcare in an indoor environment is developed. The system is capable of logging the physiological signals of human beings, tracking the orientation of the human body, and monitoring environmental attributes, which covers all necessary information for personal healthcare in an indoor environment. The three major chapters of this dissertation each correspond to one of the three subsystems in this work: BioLogger, PAMS and CosNet. Each chapter covers the background and motivation of the subsystem, the related theory, the hardware/software design, and the evaluation of the prototype’s performance.
Abstract:
Students are now involved in a vastly different textual landscape than many English scholars, one that relies on the “reading” and interpretation of multiple channels of simultaneous information. As a response to these new kinds of literate practices, my dissertation adds to the growing body of research on multimodal literacies, narratology in new media, and rhetoric through an examination of the place of video games in English teaching and research. I describe in this dissertation a hybridized theoretical basis for incorporating video games in English classrooms. This framework for textual analysis includes elements from narrative theory in literary study, rhetorical theory, and literacy theory, and when combined to account for the multiple modalities and complexities of gaming, can provide new insights about those theories and practices across all kinds of media, whether in written texts, films, or video games. In creating this framework, I hope to encourage students to view texts from a meta-level perspective, encompassing textual construction, use, and interpretation. In order to foster meta-level learning in an English course, I use specific theoretical frameworks from the fields of literary studies, narratology, film theory, aural theory, reader-response criticism, game studies, and multiliteracies theory to analyze a particular video game: World of Goo. These theoretical frameworks inform pedagogical practices used in the classroom for textual analysis of multiple media. Examining a video game from these perspectives, I use analytical methods from each, including close reading, explication, textual analysis, and individual elements of multiliteracies theory and pedagogy. In undertaking an in-depth analysis of World of Goo, I demonstrate the possibilities for classroom instruction with a complex blend of theories and pedagogies in English courses. 
This blend of theories and practices is meant to foster literacy learning across media, helping students develop metaknowledge of their own literate practices in multiple modes. Finally, I outline a design for a multiliteracies course that would allow English scholars to use video games along with other texts to interrogate texts as systems of information. In doing so, students can hopefully view and transform systems in their own lives as audiences, citizens, and workers.
Abstract:
Nanoparticles are fascinating because their physical and optical properties are related to size. Highly controllable synthesis methods and nanoparticle assembly are essential [6] for highly innovative technological applications. Among nanoparticles, nonhomogeneous core-shell nanoparticles (CSnp) exhibit new properties that arise when varying the relative dimensions of the core and the shell. This CSnp structure enables various optical resonances and engineered energy barriers, in addition to a high charge-to-surface ratio. Assembly of homogeneous nanoparticles into functional structures has become ubiquitous in biosensors (i.e. optical labeling) [7, 8], nanocoatings [9-13], and electrical circuits [14, 15]. Nonhomogeneous nanoparticle assembly, however, has been explored only to a limited extent. Many conventional nanoparticle assembly methods exist, but this work explores dielectrophoresis (DEP) as a new method. DEP is particle polarization via non-uniform electric fields while suspended in conductive fluids. Most prior DEP efforts involve microscale particles. Prior work on core-shell nanoparticle assemblies and, separately, nanoparticle characterization with dielectrophoresis and electrorotation [2-5] did not systematically explore particle size, dielectric properties (permittivity and electrical conductivity), shell thickness, particle concentration, medium conductivity, and frequency. This work is the first, to the best of our knowledge, to systematically examine these dielectrophoretic properties for core-shell nanoparticles. Further, we conduct a parametric fitting to traditional core-shell models. These biocompatible core-shell nanoparticles were studied to fill a knowledge gap in the DEP field. Experimental results (chapter 5) first examine the medium conductivity, size and shell material dependencies of the dielectrophoretic behaviors of spherical CSnp in 2D and 3D particle assemblies.
Chitosan (amino sugar) and poly-L-lysine (amino acid, PLL) CSnp shell materials were custom synthesized around a hollow (gas) core by utilizing a phospholipid micelle around a volatile fluid as a template for the shell material; this approach proves to be novel and distinct from conventional core-shell models wherein a conductive core is coated with an insulative shell. Experiments were conducted within a 100 nl chamber housing 100 µm wide Ti/Au quadrupole electrodes spaced 25 µm apart. Frequencies from 100 kHz to 80 MHz at a fixed local field of 5 Vpp were tested with 10^-5 and 10^-3 S/m medium conductivities for 25 seconds. Dielectrophoretic responses of ~220 and ~340 (or ~400) nm chitosan or PLL CSnp were compiled as a function of medium conductivity, size and shell material.
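The "traditional core-shell models" used for the parametric fitting are commonly the single-shell model, in which the shelled particle is replaced by an equivalent homogeneous sphere before computing the Clausius-Mossotti factor whose real part sets the sign of the DEP force. A minimal sketch follows; all material parameter values in the usage line are illustrative assumptions, not the measured chitosan/PLL properties.

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def complex_perm(eps_r, sigma, omega):
    """Complex permittivity eps_r*eps0 - j*sigma/omega."""
    return eps_r * EPS0 - 1j * sigma / omega

def single_shell_cm(f, r_core, t_shell, eps_core, sig_core,
                    eps_shell, sig_shell, eps_med, sig_med):
    """Re[f_CM] for the classic single-shell (core-shell) model.
    Positive -> positive DEP (attraction to high-field regions),
    negative -> negative DEP."""
    omega = 2 * np.pi * f
    e1 = complex_perm(eps_core, sig_core, omega)    # core
    e2 = complex_perm(eps_shell, sig_shell, omega)  # shell
    em = complex_perm(eps_med, sig_med, omega)      # suspending medium
    g = ((r_core + t_shell) / r_core) ** 3          # (outer/inner radius)^3
    k12 = (e1 - e2) / (e1 + 2 * e2)
    # equivalent homogeneous-sphere permittivity of the shelled particle
    e_eff = e2 * (g + 2 * k12) / (g - k12)
    # Clausius-Mossotti factor of the equivalent sphere in the medium
    fcm = (e_eff - em) / (e_eff + 2 * em)
    return fcm.real
```

For passive materials Re[f_CM] is bounded between -0.5 and 1, which gives a quick sanity check when fitting measured crossover frequencies.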
Abstract:
Drought perturbation driven by the El Niño Southern Oscillation (ENSO) is a principal stochastic variable determining the dynamics of lowland rain forest in S.E. Asia. Mortality, recruitment and stem growth rates at Danum in Sabah (Malaysian Borneo) were recorded in two 4-ha plots (trees ≥ 10 cm gbh) for two periods, 1986–1996 and 1996–2001. Mortality and growth were also recorded in a sample of subplots for small trees (10 to <50 cm gbh) in two sub-periods, 1996–1999 and 1999–2001. Dynamics variables were employed to build indices of drought response for each of the 34 most abundant plot-level species (22 at the subplot level), these being interval-weighted percentage changes between periods and sub-periods. A significant yet complex effect of the strong 1997/1998 drought at the forest community level was shown by randomization procedures followed by multiple hypothesis testing. Despite a general resistance of the forest to drought, large and significant differences in short-term responses were apparent for several species. Using a diagrammatic form of stability analysis, different species showed immediate or lagged effects, high or low degrees of resilience or even oscillatory dynamics. In the context of the local topographic gradient, species’ responses define the newly termed perturbation response niche. The largest responses, particularly for recruitment and growth, were among the small trees, many of which are members of understorey taxa. The results bring with them a novel approach to understanding community dynamics: the kaleidoscopic complexity of idiosyncratic responses to stochastic perturbations suggests that plurality, rather than neutrality, of responses may be essential to understanding these tropical forests. The basis to the various responses lies with the mechanisms of tree-soil water relations which are physiologically predictable: the timing and intensity of the next drought, however, is not. 
To date, environmental stochasticity has been insufficiently incorporated into models of tropical forest dynamics, a step that might considerably improve the reality of theories about these globally important ecosystems.
Abstract:
Whole-body vibration exposure of locomotive engineers and the vibration attenuation of seats in 22 U.S. locomotives (built between 1959 and 2000) were studied during normal revenue service, following international measurement guidelines. Triaxial vibration measurements (mean duration 155 min, range 84-383 min) on the seat and on the floor were compared. In addition to the basic vibration evaluation (aw rms), the vector sum (av), the maximum transient vibration value (MTVV/aw), the vibration dose value (VDV/(aw·T^(1/4))), and the seat effective amplitude transmissibility factor (SEAT) were calculated. The power spectral densities are also reported. The mean basic vibration level (aw rms) was 0.18 m/sec^2 for the fore-aft axis (x), 0.28 m/sec^2 for the lateral axis (y), and 0.32 m/sec^2 for the vertical axis (z). The mean vector sum was 0.59 m/sec^2 (range 0.27 to 1.44). The crest factors were generally at or above 9 in the horizontal and vertical axes. The mean MTVV/aw was 5.3 (x), 5.1 (y), and 4.8 (z), and the VDV/(aw·T^(1/4)) values ranged from 1.32 to 2.3 (x-axis), 1.33 to 1.7 (y-axis), and 1.38 to 1.86 (z-axis), generally indicating high levels of shocks. The mean seat transmissibility factor (SEAT) was 1.4 (x), 1.2 (y) and 1.0 (z), demonstrating the general ineffectiveness of the seat suspension systems. In conclusion, these data indicate that locomotive rides are characterized by relatively high shock content (acceleration peaks) of the vibration signal in all directions. Locomotive vertical and lateral vibrations are similar, which appears to be characteristic of rail vehicles compared with many road/off-road vehicles. Tested locomotive cab seats currently in use (new or old) appear inadequate to reduce potentially harmful vibration and shocks transmitted to the seated operator, and older seats particularly lack basic ergonomic features regarding adjustability and postural support.
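The reported metrics follow standard whole-body-vibration definitions and can be sketched as below. The k-factors in the vector sum follow the usual ISO 2631-1 convention and the functions assume an already frequency-weighted acceleration signal; these are assumptions about convention, not details taken from this study.

```python
import numpy as np

def aw_rms(a):
    """Basic evaluation: r.m.s. of the frequency-weighted acceleration, m/s^2."""
    a = np.asarray(a, dtype=float)
    return np.sqrt(np.mean(a ** 2))

def vector_sum(awx, awy, awz, kx=1.4, ky=1.4, kz=1.0):
    """Vector sum a_v of the three axis r.m.s. values with multiplying factors k."""
    return np.sqrt((kx * awx) ** 2 + (ky * awy) ** 2 + (kz * awz) ** 2)

def vdv(a, fs):
    """Vibration dose value (fourth-power time integral), m/s^1.75.
    More sensitive to shocks/peaks than the r.m.s., which is why the
    VDV-based ratios flag high shock content."""
    a = np.asarray(a, dtype=float)
    return (np.sum(a ** 4) / fs) ** 0.25

def seat_factor(aw_seat, aw_floor):
    """SEAT: seat r.m.s. over floor r.m.s.; values >= 1 mean the seat
    transmits or amplifies vibration rather than attenuating it."""
    return aw_seat / aw_floor
```

A SEAT of 1.4 in the fore-aft axis, as reported for these locomotives, means the seat delivers 40% more weighted vibration to the operator than is present at the floor.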
Abstract:
The relation between theory and practice in social work has always been controversial. Recently, many have underlined how language is crucial in order to capture how knowledge is used in practice. This article introduces a language perspective to the issue, rooted in the ‘strong programme’ in the sociology of knowledge and in Wittgenstein’s late work. According to this perspective, the meaning of categories and concepts corresponds to the use that concrete actors make of them as a result of on-going negotiation processes in specific contexts. Meanings may vary dramatically across social groups moved by different interests and holding different cultures. Accordingly, we may reformulate the issue of theory and practice in terms of the connections between different language games and the power relationships between segments of the professional community. In this view, the point is, in any case, to look at how theoretical language relates to practitioners’ broader frames, and how it is transformed while providing words for making sense of experience.
Abstract:
The IDA model of cognition is a fully integrated artificial cognitive system reaching across the full spectrum of cognition, from low-level perception/action to high-level reasoning. Extensively based on empirical data, it accurately reflects the full range of cognitive processes found in natural cognitive systems. As a source of plausible explanations for very many cognitive processes, the IDA model provides an ideal tool to think with about how minds work. This online tutorial offers a reasonably full account of the IDA conceptual model, including background material. It also provides a high-level account of the underlying computational “mechanisms of mind” that constitute the IDA computational model.
Abstract:
A limited but accumulating body of research and theoretical commentary offers support for core claims of the “institutional-anomie theory” of crime (IAT) and points to areas needing further development. In this paper, which focuses on violent crime, we clarify the concept of social institutions, elaborate the cultural component of IAT, derive implications for individual behavior, summarize empirical applications, and propose directions for future research. Drawing on Talcott Parsons, we distinguish the “subjective” and “objective” dimensions of institutional dynamics and discuss their interrelationship. We elaborate on the theory’s cultural component with reference to Durkheim’s distinction between “moral” and “egoistic” individualism and propose that a version of the egoistic type characterizes societies in which the economy dominates the institutional structure, anomie is rampant, and levels of violent crime are high. We also offer a heuristic model of IAT that integrates macro- and individual levels of analysis. Finally, we discuss briefly issues for the further theoretical elaboration of this macro-social perspective on violent crime. Specifically, we call attention to the important tasks of explaining the emergence of economic dominance in the institutional balance of power and of formulating an institutional account for distinctive punishment practices, such as the advent of mass incarceration in the United States.