933 results for Set of Weak Stationary Dynamic Actions
Abstract:
In power electronic based microgrids, the computational requirements needed to implement an optimized online control strategy can be prohibitive. The work presented in this dissertation proposes a generalized method of deriving geometric manifolds in a dc microgrid that is based on the a priori computation of the optimal reactions and trajectories for classes of events in a dc microgrid. The proposed states are the stored energies in all the energy storage elements of the dc microgrid and the power flowing into them. It is anticipated that calculating a large enough set of dissimilar transient scenarios will also span many scenarios not specifically used to develop the surface. These geometric manifolds will then be used as reference surfaces in any type of controller, such as a sliding mode hysteretic controller. The presence of switched power converters in microgrids requires different control actions for different system events. The control of the switch states of the converters is essential for steady-state and transient operations. A digital memory look-up based controller that uses a hysteretic sliding mode control strategy is an effective technique to generate the proper switch states for the converters. An example dc microgrid with three dc-dc boost converters and resistive loads is considered for this work. The geometric manifolds are successfully generated for transient events, such as step changes in the loads and the sources. The surfaces corresponding to a specific case of step change in the loads are then used as reference surfaces in an EEPROM for experimentally validating the control strategy. The required switch states corresponding to this specific transient scenario are programmed into the EEPROM as a memory table. This controls the switching of the dc-dc boost converters and drives the system states to the reference manifold. In this work, it is shown that this strategy effectively controls the system under transient conditions such as step changes in the loads for the example case.
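The memory look-up idea can be sketched as follows: the measured states are quantized into a memory address, and the precomputed switch states stored at that address are applied to the converters. This is a minimal illustrative sketch in Python; the bit widths, state choices, and table contents are assumptions, not the dissertation's actual EEPROM layout.

```python
# Minimal sketch of a memory look-up switching controller (hypothetical
# names and resolutions; the actual EEPROM layout in the work differs).
import numpy as np

N_BITS = 6                      # quantization bits per state variable (assumed)
N_STATES = 2                    # e.g. stored energy and power of one converter

def quantize(x, lo, hi, bits=N_BITS):
    """Map a measured state into an integer address component."""
    idx = int((x - lo) / (hi - lo) * (2**bits - 1))
    return min(max(idx, 0), 2**bits - 1)

def build_address(states, bounds):
    """Concatenate quantized state variables into one memory address."""
    addr = 0
    for x, (lo, hi) in zip(states, bounds):
        addr = (addr << N_BITS) | quantize(x, lo, hi)
    return addr

# Memory table: address -> switch states (bit per converter, 0 = off, 1 = on).
# In practice this table is filled offline from the precomputed manifolds.
memory_table = np.zeros(2**(N_BITS * N_STATES), dtype=np.uint8)

def control_step(measured_states, bounds):
    """One hysteretic control step: look up the switch command."""
    return memory_table[build_address(measured_states, bounds)]
```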
Substrate binding tunes conformational flexibility and kinetic stability of an amino acid antiporter
Abstract:
We used single-molecule dynamic force spectroscopy to unfold individual serine/threonine antiporters SteT from Bacillus subtilis. The unfolding force patterns revealed interactions and energy barriers that stabilized structural segments of SteT. Substrate binding did not establish strong localized interactions but appeared to be facilitated by the formation of weak interactions with several structural segments. Upon substrate binding, all energy barriers of the antiporter changed, thereby describing the transition from the brittle mechanical properties of SteT in the unbound state to structurally flexible conformations in the substrate-bound state. The lifetime of the unbound state was much shorter than that of the substrate-bound state. This leads to the conclusion that the unbound state of SteT shows reduced conformational flexibility, to facilitate specific substrate binding, and reduced kinetic stability, to enable rapid switching to the bound state. In contrast, the bound state of SteT showed increased conformational flexibility and kinetic stability, as required to enable transport of substrate across the cell membrane. This result supports the working model of antiporters in which alternating substrate access from one membrane surface to the other occurs in the substrate-bound state.
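For orientation, energy barriers and lifetimes are commonly extracted from such unfolding force spectra via the Bell-Evans model; whether the SteT analysis used exactly this form is an assumption here. The most probable unfolding force $F^*$ at loading rate $r$ is

\[ F^{*} = \frac{k_{\mathrm B}T}{x_{u}} \ln\!\left( \frac{r\, x_{u}}{k_{\mathrm B}T\, k_{0}} \right), \]

where $x_u$ is the distance from the folded minimum to the transition state and $k_0$ is the intrinsic barrier-crossing rate, whose inverse is the lifetime of the state.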
Abstract:
We construct holomorphic families of proper holomorphic embeddings of $\mathbb{C}^k$ into $\mathbb{C}^n$ ($0 < k < n-1$), so that for any two different parameters in the family, no holomorphic automorphism of $\mathbb{C}^n$ can map the image of the corresponding two embeddings onto each other. As an application to the study of the group of holomorphic automorphisms of $\mathbb{C}^n$, we derive the existence of families of holomorphic $\mathbb{C}^*$-actions on $\mathbb{C}^n$ ($n \ge 5$) so that different actions in the family are not conjugate. This result is surprising in view of the long-standing holomorphic linearization problem, which, in particular, asked whether there would be more than one conjugacy class of $\mathbb{C}^*$-actions on $\mathbb{C}^n$ (with prescribed linear part at a fixed point).
Abstract:
Hydrodynamics can be consistently formulated on surfaces of arbitrary co-dimension in a background space-time, providing the effective theory describing long-wavelength perturbations of black branes. When the co-dimension is non-zero, the system acquires fluid-elastic properties and constitutes what is called a fluid brane. Applying an effective action approach, the most general form of the free energy quadratic in the extrinsic curvature and extrinsic twist potential of stationary fluid brane configurations is constructed to second order in a derivative expansion. This construction generalizes the Helfrich-Canham bending energy for fluid membranes studied in theoretical biology to the case in which the fluid is rotating. It is found that stationary fluid brane configurations are characterized by a set of 3 elastic response coefficients, 3 hydrodynamic response coefficients and 1 spin response coefficient for co-dimension greater than one. Moreover, the elastic degrees of freedom present in the system are coupled to the hydrodynamic degrees of freedom. For co-dimension-1 surfaces we find an 8-parameter family of stationary fluid branes. It is further shown that elastic and spin corrections to (non-)extremal brane effective actions can be accounted for by a multipole expansion of the stress-energy tensor, thereby establishing a relation between the different formalisms of Carter, Capovilla-Guven and Vasilic-Vojinovic and between gravity and the effective description of stationary fluid branes. Finally, it is shown that the Young modulus found in the literature for black branes falls into the class predicted by this approach - a relation which is then used to make a proposal for the second-order effective action of stationary blackfolds and to find the corrected horizon angular velocity of thin black rings.
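For reference, the Helfrich-Canham bending energy that this construction generalizes (this is the standard static form from membrane biophysics, not the paper's rotating generalization) reads

\[ F = \int \mathrm{d}A \left[ \frac{\kappa}{2} \left( 2H - c_{0} \right)^{2} + \bar{\kappa}\, K \right], \]

where $H$ and $K$ are the mean and Gaussian curvatures of the surface, $c_0$ is the spontaneous curvature, and $\kappa$, $\bar{\kappa}$ are bending rigidities; the free energy constructed in the paper adds hydrodynamic and spin response terms for rotating fluid branes.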
Abstract:
PURPOSE Segmentation of the proximal femur in digital antero-posterior (AP) pelvic radiographs is required to create a three-dimensional model of the hip joint for use in planning and treatment. However, manually extracting the femoral contour is tedious and prone to subjective bias, while automatic segmentation must accommodate poor image quality, anatomical structure overlap, and femur deformity. A new method was developed for femur segmentation in AP pelvic radiographs. METHODS Using manual annotations on 100 AP pelvic radiographs, a statistical shape model (SSM) and a statistical appearance model (SAM) of the femur contour were constructed. The SSM and SAM were used to segment new AP pelvic radiographs with a three-stage approach. At initialization, the mean SSM model is coarsely registered to the femur in the AP radiograph through a scaled rigid registration. The Mahalanobis distance defined on the SAM is employed as the search criterion for each suggested landmark location. Dynamic programming is used to eliminate ambiguities. After all landmarks are assigned, a regularized non-rigid registration method deforms the current mean shape of the SSM to produce a new segmentation of the proximal femur. The second and third stages are executed iteratively until convergence. RESULTS A set of 100 clinical AP pelvic radiographs (not used for training) was evaluated. The mean segmentation error was [Formula: see text], requiring [Formula: see text] s per case when implemented in Matlab. The influence of the initialization on segmentation results was tested by six clinicians, demonstrating no significant difference. CONCLUSIONS A fast, robust and accurate method for femur segmentation in digital AP pelvic radiographs was developed by combining SSM and SAM with dynamic programming. This method can be extended to the segmentation of other bony structures such as the pelvis.
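The landmark search step can be sketched as follows: each candidate location is scored by the Mahalanobis distance of its local appearance profile under the SAM statistics, and the best-scoring candidate is kept (dynamic programming then resolves ambiguities across neighboring landmarks). A minimal sketch, with the profile extraction and variable names assumed:

```python
# Sketch of Mahalanobis-distance landmark scoring against a statistical
# appearance model (illustrative only; names and profiles are assumptions).
import numpy as np

def mahalanobis(g, g_mean, S_inv):
    """Mahalanobis distance of appearance profile g to the SAM statistics."""
    d = g - g_mean
    return float(d @ S_inv @ d)

def best_candidate(candidates, g_mean, S_inv, extract_profile):
    """Pick the candidate location whose profile best matches the model."""
    scores = [mahalanobis(extract_profile(c), g_mean, S_inv)
              for c in candidates]
    return candidates[int(np.argmin(scores))]
```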
Abstract:
The development of northern high-latitude peatlands has played an important role in the carbon (C) balance of the land biosphere since the Last Glacial Maximum (LGM). At present, carbon storage in northern peatlands is substantial, estimated at 500 ± 100 Pg C (1 Pg C = 10¹⁵ g C). Here, we develop and apply a peatland module embedded in a dynamic global vegetation and land surface process model (LPX-Bern 1.0). The peatland module features a dynamic nitrogen cycle, a dynamic C transfer between the peatland acrotelm (upper oxic layer) and catotelm (deep anoxic layer), hydrology- and temperature-dependent respiration rates, and peatland-specific plant functional types. Nitrogen limitation down-regulates average modern net primary productivity over peatlands by about half. Decadal acrotelm-to-catotelm C fluxes vary between −20 and +50 g C m⁻² yr⁻¹ over the Holocene. Key model parameters are calibrated with reconstructed peat accumulation rates from peat-core data. The model reproduces the major features of the peat-core data and of the observation-based modern circumpolar soil carbon distribution. Results from a set of simulations for possible evolutions of northern peat development and areal extent show that soil C stocks in modern peatlands increased by 365–550 Pg C since the LGM, of which 175–272 Pg C accumulated between 11 and 5 kyr BP. Furthermore, our simulations suggest a persistent C sequestration rate of 35–50 Pg C per 1000 yr in present-day peatlands under current climate conditions, and that this C sink could either be sustained or turn into a source by 2100 AD, depending on the climate trajectories projected for the different representative greenhouse gas concentration pathways.
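The acrotelm-catotelm carbon transfer can be caricatured as a two-box model: net primary production enters the acrotelm, which respires quickly and slowly exports carbon to the catotelm, which respires very slowly. A minimal sketch with placeholder parameters (these are illustrative values, not LPX-Bern 1.0 parameters):

```python
# Two-box sketch of peat carbon dynamics (acrotelm/catotelm); rate
# constants and the transfer coefficient are placeholder assumptions.
def step(C_acro, C_cato, npp, k_acro=0.05, k_cato=1e-4, transfer=0.01, dt=1.0):
    """Advance acrotelm and catotelm carbon pools by one time step (yr)."""
    resp_acro = k_acro * C_acro          # oxic decomposition (fast)
    resp_cato = k_cato * C_cato          # anoxic decomposition (slow)
    flux_down = transfer * C_acro        # acrotelm-to-catotelm transfer
    C_acro += dt * (npp - resp_acro - flux_down)
    C_cato += dt * (flux_down - resp_cato)
    return C_acro, C_cato
```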
Abstract:
Most statistical analysis, theory and practice, is concerned with static models: models with a proposed set of parameters whose values are fixed across observational units. Static models implicitly assume that the quantified relationships remain the same across the design space of the data. While this is reasonable under many circumstances, it can be a dangerous assumption when dealing with sequentially ordered data. The mere passage of time always brings fresh considerations, and the interrelationships among parameters, or subsets of parameters, may need to be continually revised. When data are gathered sequentially, dynamic interim monitoring may be useful, as new subject-specific parameters are introduced with each new observational unit. Sequential imputation via dynamic hierarchical models is an efficient strategy for handling missing data and analyzing longitudinal studies. Dynamic conditional independence models offer a flexible framework that exploits the Bayesian updating scheme for capturing the evolution of both population and individual effects over time. While static models often describe aggregate information well, they often do not reflect conflicts in the information at the individual level. Dynamic models prove advantageous over static models in capturing both individual and aggregate trends. Computations for such models can be carried out via the Gibbs sampler. An application using a small sample of repeated-measures, normally distributed growth-curve data is presented.
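As a toy illustration of the Gibbs computations mentioned, the sketch below alternates conjugate draws of individual effects and a population effect in a hierarchical normal model; this is a static simplification of the dynamic hierarchical models discussed, with all values invented:

```python
# Toy Gibbs sampler for a hierarchical normal model: y_it ~ N(theta_i, sigma2),
# theta_i ~ N(mu, tau2). Variances are held fixed for simplicity (assumption).
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=[1.0, 2.0, 3.0], scale=0.5, size=(20, 3))  # units x subjects
n, m = y.shape
sigma2, tau2 = 0.25, 1.0
mu, theta = 0.0, np.zeros(m)

for it in range(2000):
    # theta_i | rest: conjugate normal update for each individual effect
    prec = n / sigma2 + 1.0 / tau2
    mean = (y.sum(axis=0) / sigma2 + mu / tau2) / prec
    theta = rng.normal(mean, np.sqrt(1.0 / prec))
    # mu | rest: conjugate normal update for the population effect (flat prior)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / m))

print("final draw of population mean:", mu)
```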
Abstract:
Identifying drivers of species diversity is a major challenge in understanding and predicting the dynamics of species-rich semi-natural grasslands. In temperate grasslands in particular, changes in land use and their consequences, i.e. increasing fragmentation, the ongoing loss of habitat and the declining importance of regional processes such as seed dispersal by livestock, are considered key drivers of the diversity loss witnessed within the last decades. It is a largely unresolved question to what degree current temperate grassland communities already reflect a decline of regional processes such as longer-distance seed dispersal. Answering this question is challenging, since it requires both a mechanistic approach to community dynamics and a data basis sufficient to identify general patterns. Here, we present results of a local individual- and trait-based community model that was initialized with plant functional types (PFTs) derived from an extensive empirical data set of species-rich grasslands within the 'Biodiversity Exploratories' in Germany. Driving model processes included above- and belowground competition, dynamic resource allocation to shoots and roots, clonal growth, grazing, and local seed dispersal. To test for the impact of regional processes, we also simulated seed input from a regional species pool. Model output, with and without regional seed input, was compared with empirical community response patterns along a grazing gradient. Simulated response patterns of changes in PFT richness, Shannon diversity, and biomass production matched observed grazing response patterns surprisingly well when only local processes were considered. Already low levels of additional regional seed input led to stronger deviations from the empirical community patterns. While these findings cannot rule out that regional processes other than those considered in the modeling study play a role in shaping the local grassland communities, our comparison indicates that European grasslands are largely isolated, i.e. local mechanisms explain observed community patterns to a large extent.
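The local-versus-regional comparison can be caricatured with a grid model in which grazing opens gaps that are recolonized either from a neighboring cell (local dispersal) or, with some probability, from a regional species pool. This is an illustrative caricature of that comparison, not the trait-based model used in the study:

```python
# Grid caricature of local vs. regional recruitment (all parameters invented).
import numpy as np

rng = np.random.default_rng(1)
SIZE, N_PFT, STEPS = 50, 20, 500
grid = rng.integers(0, N_PFT, size=(SIZE, SIZE))   # PFT identity per cell

def step(grid, grazing=0.1, regional_input=0.0):
    g = grid.copy()
    # grazing: random mortality opens gaps (-1 = empty)
    gaps = rng.random(g.shape) < grazing
    g[gaps] = -1
    # recruitment into gaps: from a random neighbour (local dispersal),
    # or occasionally from the regional species pool
    for i, j in zip(*np.where(g == -1)):
        if rng.random() < regional_input:
            g[i, j] = rng.integers(0, N_PFT)                   # regional seed
        else:
            di, dj = rng.integers(-1, 2, size=2)
            g[i, j] = grid[(i + di) % SIZE, (j + dj) % SIZE]   # local seed
    return g

for _ in range(STEPS):
    grid = step(grid, grazing=0.1, regional_input=0.0)  # local-only scenario
print("surviving PFT richness:", len(np.unique(grid)))
```

Setting regional_input above zero adds immigration from the pool, which in such a caricature slows the local loss of types; the study's point is that the empirical patterns were matched best without such input.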
Abstract:
A variety of lattice discretisations of continuum actions has been considered, usually requiring the correct classical continuum limit. Here we discuss “weird” lattice formulations without that property, namely lattice actions that are invariant under most continuous deformations of the field configuration, in one version even without any coupling constants. It turns out that universality is powerful enough to still provide the correct quantum continuum limit, despite the absence of a classical limit or a perturbative expansion. We demonstrate this for a set of O(N) models (or non-linear σ-models). Amazingly, such “weird” lattice actions are not only in the right universality class, but some of them even have practical benefits, in particular an excellent scaling behaviour.
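A concrete example of a lattice action without coupling constants in this spirit is the constraint action for O(N) spins $\vec e_x$ with $|\vec e_x| = 1$ (assuming this is the variant meant; the paper may use a different one):

\[ S[\vec e\,] = \begin{cases} 0 & \text{if } \vec e_x \cdot \vec e_y > \cos\delta \ \text{ for all nearest-neighbour pairs } \langle xy \rangle, \\ +\infty & \text{otherwise}, \end{cases} \]

which is invariant under any deformation of the field configuration that keeps all neighboring relative angles below the constraint angle $\delta$; no coupling constant multiplies the action, there is no classical limit, and no perturbative expansion exists.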
Abstract:
One significant challenge for the operationalization of water justice arises from the many dynamic scales involved. In this paper we explore the scalar dimension of justice in water governance through insights derived from empirical research on hydropower production in the Swiss Alps and the application of the geographical concept of politics of scale. More specifically, we investigate how different actors frame the justice problem, the scales they invoke, and which actors consequently get included or excluded in their justice assessments. This study shows that there is no ideal scale for justice evaluations; whichever scale is used, some actors and justice claims are included whereas others are excluded. This is particularly true when using Fraser's trivalent concept of justice, taking into account issues of distribution, recognition and participation, each of which calls for its own set of scales. Moreover, focusing on the politics of scale framing, our study reveals that the justice claim itself can become an element of power. Consequently, to achieve more just water governance, there is a need not only for debate and negotiation about the conceptions and meanings of justice in a specific context, but also for debate about the relevance and implications of the divergent scales involved in justice claims.
Abstract:
OBJECTIVE In contrast to conventional breast imaging techniques, one major diagnostic benefit of breast magnetic resonance imaging (MRI) is the simultaneous acquisition of morphologic and dynamic enhancement characteristics, which are based on angiogenesis and therefore provide insights into tumor pathophysiology. The aim of this investigation was to intraindividually compare 2 macrocyclic MRI contrast agents with low risk for nephrogenic systemic fibrosis in the morphologic and dynamic characterization of histologically verified mass breast lesions, analyzed by blinded human evaluation and a fully automatic computer-assisted diagnosis (CAD) technique. MATERIALS AND METHODS Institutional review board approval and patient informed consent were obtained. In this prospective, single-center study, 45 women with 51 histopathologically verified (41 malignant, 10 benign) mass lesions underwent 2 identical examinations at 1.5 T (mean time interval, 2.1 days) with 0.1 mmol/kg doses of gadoteric acid and gadobutrol. All magnetic resonance images were visually evaluated by 2 experienced, blinded breast radiologists in consensus and by an automatic CAD system; the morphologic and dynamic characterization as well as the final human classification of lesions were performed based on the categories of the Breast Imaging Reporting and Data System (BI-RADS) MRI atlas. Lesions were also classified by the CAD system, which defines their probability of malignancy (morpho-dynamic index; 0%-100%). Imaging results were correlated with histopathology as the gold standard. RESULTS The CAD system coded 49 of 51 lesions with both gadoteric acid and gadobutrol (detection rate, 96.1%); the initial signal increase was significantly higher for gadobutrol than for gadoteric acid for all lesions and for the malignant coded lesions (P < 0.05). Gadoteric acid resulted in more postinitial washout curves and fewer continuous increases for all and for the malignant lesions compared with gadobutrol (CAD hot spot regions, P < 0.05). Morphologically, the margins of the malignancies differed between the 2 agents, with gadobutrol demonstrating more spiculated and fewer smooth margins (P < 0.05). Lesion classifications by the human observers and by the morpho-dynamic index compared with the histopathologic results did not differ significantly between gadoteric acid and gadobutrol. CONCLUSIONS Macrocyclic contrast media can be reliably used for dynamic contrast-enhanced breast MRI. However, gadoteric acid and gadobutrol differed in some dynamic and morphologic characteristics of histologically verified breast lesions in an intraindividual comparison. Besides the standardization of technical parameters and imaging evaluation of breast MRI, standardization of the applied contrast medium appears important for optimally comparable MRI interpretations.
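The dynamic curve-type distinction drawn here (initial signal increase, then postinitial washout versus continuous increase) can be sketched as a simple classifier over a lesion's signal-time course. A minimal sketch in Python; the 10% thresholds are illustrative assumptions, not the CAD vendor's values:

```python
# Sketch of DCE-MRI enhancement curve-type classification
# (thresholds are illustrative, not the actual CAD system's).
import numpy as np

def curve_type(signal, baseline_idx=0, initial_idx=1):
    """Classify enhancement kinetics: washout / plateau / continuous rise."""
    s0, s_init, s_late = signal[baseline_idx], signal[initial_idx], signal[-1]
    initial_increase = (s_init - s0) / s0          # initial signal increase
    post_initial = (s_late - s_init) / s_init      # post-initial course
    if post_initial < -0.10:
        kind = "washout"          # more suspicious for malignancy
    elif post_initial > 0.10:
        kind = "continuous"       # more typical of benign lesions
    else:
        kind = "plateau"
    return initial_increase, kind

print(curve_type(np.array([100.0, 180.0, 150.0])))  # -> (0.8, 'washout')
```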
Abstract:
Temperature changes in Antarctica over the last millennium are investigated using proxy records, a set of simulations driven by natural and anthropogenic forcings, and one simulation with data assimilation. Over Antarctica, a long-term cooling trend in the annual mean is simulated during the period 1000–1850. The main contributor to this cooling trend is the volcanic forcing, with astronomical forcing playing a dominant role at seasonal timescales. Since 1850, all the models produce an Antarctic warming in response to the increase in greenhouse gas concentrations. We present a composite of Antarctic temperature, calculated by averaging seven temperature records derived from isotope measurements in ice cores. This simple approach is supported by the coherency displayed between model results at these data grid points and the Antarctic mean temperature. The composite shows a weak multi-centennial cooling trend during the pre-industrial period and a warming after 1850 that is broadly consistent with model results. In both data and simulations, large regional variations are superimposed on this common signal at decadal to centennial timescales. The model results appear spatially more consistent than the ice core records. We conclude that more records are needed to resolve the complex spatial distribution of Antarctic temperature variations during the last millennium.
Abstract:
Because land degradation is intrinsically complex and involves decisions by many agencies and individuals, land degradation mapping should be used as a learning tool through which managers, experts and stakeholders can re-examine their views within a wider semantic context. In this paper, we introduce an analytical framework for mapping land degradation, developed by the World Overview of Conservation Approaches and Technologies (WOCAT) programs, which aims to produce thematic maps that serve as a useful tool and include effective information on land degradation and conservation status. This methodology would consequently provide an important background for decision-making, in order to launch rehabilitation/remediation actions in high-priority intervention areas. As land degradation mapping is a problem-solving task that aims to provide clear information, this study entails the implementation of the WOCAT mapping tool, which integrates a set of indicators to appraise the severity of land degradation across a representative watershed. This work therefore focuses on the most relevant indicators for measuring the impacts of different degradation processes in the El Mkhachbiya catchment, situated in the northwest of Tunisia, and the actions taken to deal with them, based on an analysis of operating modes and issues of degradation in different land use systems. The study aims to provide a database for surveillance and monitoring of land degradation, in order to support stakeholders in making appropriate choices and judging guidelines and possible recommendations to remedy the situation and promote sustainable development. The approach is illustrated through a case study of an urban watershed in the northwest of Tunisia. Results showed that the main land degradation drivers in the study area were natural processes exacerbated by human activities. The output of this analytical framework thus enabled better communication of land degradation issues and concerns in a way relevant for policymakers.
Abstract:
Trabecular bone plays an important mechanical role in bone fractures and implant stability. Homogenized nonlinear finite element (FE) analysis of whole bones can deliver improved fracture risk and implant loosening assessment. Such simulations require knowledge of mechanical properties such as an appropriate yield behavior and criterion for trabecular bone. Identification of a complete yield surface is extremely difficult experimentally but can be achieved in silico by using micro-FE analysis on cubical trabecular volume elements. Nevertheless, the influence of the boundary conditions (BCs) applied to such volume elements on the obtained yield properties remains unknown. Therefore, this study compared homogenized yield properties along 17 load cases of 126 human femoral trabecular cubic specimens computed with classical kinematic uniform BCs (KUBCs) and a new set of mixed uniform BCs, namely periodicity-compatible mixed uniform BCs (PMUBCs). In stress space, PMUBCs lead to 7–72% lower yield stresses compared to KUBCs. The yield surfaces obtained with both KUBCs and PMUBCs demonstrate a pressure-sensitive ellipsoidal shape. A volume fraction and fabric-based quadric yield function successfully fitted the yield surfaces of both BCs with a correlation coefficient R² ≥ 0.93. As expected, yield strains show only a weak dependency on bone volume fraction and fabric. The role of the two BCs in homogenized FE analysis of whole bones will need to be investigated and validated against experimental results at the whole-bone level in future studies.
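A generic form of the volume fraction and fabric-based quadric yield criterion referred to (the study's exact parametrization is an assumption here) is

\[ Y(\boldsymbol{\sigma}) = \sqrt{\boldsymbol{\sigma} : \mathbb{F} : \boldsymbol{\sigma}} + \mathbf{F} : \boldsymbol{\sigma} - 1 = 0, \]

where the fourth-order tensor $\mathbb{F}$ sets the ellipsoidal shape of the surface, the second-order tensor $\mathbf{F}$ introduces the tension-compression asymmetry (pressure sensitivity), and the coefficients of both tensors are commonly modeled as power laws of the bone volume fraction and the fabric eigenvalues.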
Abstract:
Bargaining is the building block of many economic interactions, ranging from bilateral to multilateral encounters and from situations in which the actors are individuals to negotiations between firms or countries. In all these settings, economists have long been intrigued by the fact that some projects, trades or agreements are not realized even though they are mutually beneficial. On the one hand, this has been explained by incomplete information. A firm may not be willing to offer a wage that is acceptable to a qualified worker, because it knows that there are also unqualified workers and it cannot distinguish between the two types. This phenomenon is known as adverse selection. On the other hand, it has been argued that even with complete information, the presence of externalities may impede efficient outcomes. To see this, consider the example of climate change. If a subset of countries agrees to curb emissions, non-participant regions benefit from the signatories' efforts without incurring costs. These free-riding opportunities give rise to incentives to strategically improve one's bargaining power that work against the formation of a global agreement. This thesis is concerned with extending our understanding of both factors, adverse selection and externalities. The findings are based on empirical evidence from original laboratory experiments as well as game-theoretic modeling. On a very general note, it is demonstrated that the institutions through which agents interact matter to a large extent. Insights are provided about which institutions we should expect to perform better than others, at least in terms of aggregate welfare. Chapters 1 and 2 focus on the problem of adverse selection. Effective operation of markets and other institutions often depends on good information transmission properties. In terms of the example introduced above, a firm is only willing to offer high wages if it receives enough positive signals about the worker's quality during the application and wage bargaining process. In Chapter 1, it will be shown that repeated interaction coupled with time costs facilitates information transmission. By making the wage bargaining process costly for the worker, the firm is able to obtain more accurate information about the worker's type. The cost could be pure time cost from delaying agreement or the cost of effort arising from a multi-step interviewing process. In Chapter 2, I abstract from time cost and show that communication can play a similar role. The simple fact that a worker states that he is of high quality may be informative. In Chapter 3, the focus is on a different source of inefficiency. Agents strive for bargaining power and thus may be motivated by incentives that are at odds with the socially efficient outcome. I have already mentioned the example of climate change. Other examples are coalitions within committees that are formed to secure voting power to block outcomes, or groups that commit to different technological standards although a single standard would be optimal (e.g. the format war between HD DVD and Blu-ray). It will be shown that such inefficiencies are directly linked to the presence of externalities and a certain degree of irreversibility in actions. I now discuss the three articles in more detail. In Chapter 1, Olivier Bochet and I study a simple bilateral bargaining institution that eliminates trade failures arising from incomplete information. In this setting, a buyer makes offers to a seller in order to acquire a good.
Whenever an offer is rejected by the seller, the buyer may submit a further offer. Bargaining is costly, because both parties suffer a (small) time cost after any rejection. The difficulties arise because the good can be of low or high quality and the quality of the good is only known to the seller. Indeed, without the possibility of making repeated offers, it is too risky for the buyer to offer prices that allow for trade of high-quality goods. When allowing for repeated offers, however, at equilibrium both types of goods trade with probability one. We provide an experimental test of these predictions. Buyers gather information about sellers using specific price offers, and rates of trade are high, matching the model's qualitative predictions. We also observe a persistent over-delay before trade occurs, which reduces efficiency substantially. Possible channels for over-delay are identified in the form of two behavioral assumptions missing from the standard model, loss aversion (buyers) and haggling (sellers), which reconcile the data with the theoretical predictions. Chapter 2 also studies adverse selection, but interaction between buyers and sellers now takes place within a market rather than in isolated pairs. Remarkably, in a market it suffices to let agents communicate in a very simple manner to mitigate trade failures. The key insight is that better-informed agents (sellers) are willing to truthfully reveal their private information, because by doing so they are able to reduce search frictions and attract more buyers. Behavior observed in the experimental sessions closely follows the theoretical predictions. As a consequence, costless and non-binding communication (cheap talk) significantly raises rates of trade and welfare. Previous experiments have documented that cheap talk alleviates inefficiencies due to asymmetric information. These findings are explained by pro-social preferences and lie aversion. I use appropriate control treatments to show that such considerations play only a minor role in our market. Instead, the experiment highlights the ability to organize markets as a new channel through which communication can facilitate trade in the presence of private information. In Chapter 3, I theoretically explore coalition formation via multilateral bargaining under complete information. The environment studied is extremely rich in the sense that the model allows for all kinds of externalities. This is achieved by using so-called partition functions, which pin down a coalitional worth for each possible coalition in each possible coalition structure. It is found that although binding agreements can be written, efficiency is not guaranteed, because the negotiation process is inherently non-cooperative. The prospects of cooperation are shown to depend crucially on i) the degree to which players can renegotiate and gradually build up agreements and ii) the absence of a certain type of externalities that can loosely be described as incentives to free ride. Moreover, the willingness to concede bargaining power is identified as a novel reason for gradualism. Another key contribution of the study is that it identifies a strong connection between the Core, one of the most important concepts in cooperative game theory, and the set of environments for which efficiency is attained even without renegotiation.
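As a minimal numerical illustration of the Chapter 1 trade failure (all numbers invented for illustration): let the good be of low quality (worth $v_L = 10$ to the buyer, cost $c_L = 5$ to the seller) or high quality ($v_H = 100$, $c_H = 60$), each with probability $\tfrac{1}{2}$, with the quality known only to the seller. With a single take-it-or-leave-it offer, any price that both types accept requires $p \ge c_H = 60$, but

\[ \mathbb{E}[v] = \tfrac{1}{2}(10) + \tfrac{1}{2}(100) = 55 < 60, \]

so a pooling offer loses money in expectation. The buyer therefore offers $p = 5$ and only low-quality goods trade, even though $v_H > c_H$ means high-quality trade would also be mutually beneficial. With repeated offers and a per-rejection time cost, rejection of low offers credibly signals high quality, and the buyer can profitably raise subsequent offers, restoring trade of both types.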