134 results for problem complexity
at Université de Lausanne, Switzerland
Abstract:
This article builds on the recent policy diffusion literature and attempts to overcome one of its major problems, namely the lack of a coherent theoretical framework. The literature defines policy diffusion as a process where policy choices are interdependent, and identifies several diffusion mechanisms that specify the link between the policy choices of the various actors. As these mechanisms are grounded in different theories, theoretical accounts of diffusion currently have little internal coherence. In this article we put forward an expected-utility model of policy change that is able to subsume all the diffusion mechanisms. We argue that the expected utility of a policy depends on both its effectiveness and the payoffs it yields, and we show that the various diffusion mechanisms operate by altering these two parameters. Each mechanism affects one of the two parameters, and does so in distinct ways. To account for aggregate patterns of diffusion, we embed our model in a simple threshold model of diffusion. Given the high complexity of the process that results, strong analytical conclusions on aggregate patterns cannot be drawn without more extensive analysis which is beyond the scope of this article. However, preliminary considerations indicate that a wide range of diffusion processes may exist and that convergence is only one possible outcome.
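As a purely illustrative sketch of how an expected-utility rule can be embedded in a threshold model of diffusion (the parameter names and the single "emulation" boost below are assumptions, not the authors' specification), consider the following Python toy model:

```python
import random

def simulate_diffusion(n_actors=100, effectiveness=0.5, payoff=1.0,
                       emulation_boost=0.3, steps=50, seed=1):
    """Toy threshold model: an actor adopts when the expected utility of the
    policy (effectiveness * payoff, nudged upward by the share of prior
    adopters) exceeds its private threshold. All parameters are illustrative."""
    rng = random.Random(seed)
    thresholds = [rng.random() for _ in range(n_actors)]
    adopted = [False] * n_actors
    history = []
    for _ in range(steps):
        share = sum(adopted) / n_actors
        # The diffusion mechanisms are collapsed here into a single boost that
        # grows with the share of prior adopters (e.g. learning or emulation).
        expected_utility = effectiveness * payoff + emulation_boost * share
        for i in range(n_actors):
            if not adopted[i] and expected_utility > thresholds[i]:
                adopted[i] = True
        history.append(sum(adopted))
    return history

print(simulate_diffusion())
```

Depending on the chosen parameters, the adoption curve may saturate quickly, stall, or never take off, which echoes the point that convergence is only one possible aggregate outcome.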
Abstract:
Depth-averaged velocities and unit discharges within a 30 km reach of one of the world's largest rivers, the Rio Parana, Argentina, were simulated using three hydrodynamic models with different process representations: a reduced complexity (RC) model that neglects most of the physics governing fluid flow, a two-dimensional model based on the shallow water equations, and a three-dimensional model based on the Reynolds-averaged Navier-Stokes equations. Flow characteristics simulated using all three models were compared with data obtained by acoustic Doppler current profiler surveys at four cross sections within the study reach. This analysis demonstrates that, surprisingly, the performance of the RC model is generally equal to, and in some instances better than, that of the physics-based models in terms of the statistical agreement between simulated and measured flow properties. In addition, in contrast to previous applications of RC models, the present study demonstrates that the RC model can successfully predict measured flow velocities. The strong performance of the RC model reflects, in part, the simplicity of the depth-averaged mean flow patterns within the study reach and the dominant role of channel-scale topographic features in controlling the flow dynamics. Moreover, the very low water surface slopes that typify large sand-bed rivers enable flow depths to be estimated reliably in the RC model using a simple fixed-lid planar water surface approximation. This approach overcomes a major problem encountered in the application of RC models in environments characterised by shallow flows and steep bed gradients. The RC model is four orders of magnitude faster than the physics-based models when performing steady-state hydrodynamic calculations. However, the iterative nature of the RC model calculations implies a reduction in computational efficiency relative to some other RC models. A further implication of this is that, if used to simulate channel morphodynamics, the present RC model may offer only a marginal advantage in terms of computational efficiency over approaches based on the shallow water equations. These observations illustrate the trade-off between model realism and efficiency that is a key consideration in RC modelling. Moreover, this outcome highlights a need to rethink the use of RC morphodynamic models in fluvial geomorphology and to move away from existing grid-based approaches, such as the popular cellular automata (CA) models, that remain essentially reductionist in nature. In the case of the world's largest sand-bed rivers, this might be achieved by implementing the RC model outlined here as one element within a hierarchical modelling framework that would enable computationally efficient simulation of the morphodynamics of large rivers over millennial time scales.
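A minimal sketch of the fixed-lid planar water-surface idea follows: depths are obtained by subtracting bed elevations from a planar water-surface elevation, and discharge is split across a section using a depth-based conveyance weight. The routing rule and all values are illustrative assumptions, not the model used in the study.

```python
import numpy as np

def rc_route(bed, ws_elevation, q_in, exponent=5.0 / 3.0):
    """Toy reduced-complexity routing across one row of cells.

    bed          : bed elevations across the section (m)
    ws_elevation : fixed planar water-surface elevation (m), the 'fixed lid'
    q_in         : total discharge entering the section (m^3/s)
    exponent     : depth exponent in the conveyance weight (Manning-like)

    Depths follow directly from the fixed lid; discharge is then split in
    proportion to depth**exponent. Purely illustrative, not the study's model.
    """
    depth = np.clip(ws_elevation - np.asarray(bed, dtype=float), 0.0, None)
    weight = depth ** exponent
    if weight.sum() == 0.0:
        return np.zeros_like(depth)
    return q_in * weight / weight.sum()

print(rc_route(bed=[28.0, 26.5, 25.0, 26.0, 27.5], ws_elevation=29.0, q_in=18000.0))
```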
Abstract:
Neuroblastoma (NB) is a neural crest-derived childhood tumor characterized by a remarkable phenotypic diversity, ranging from spontaneous regression to fatal metastatic disease. Although the cancer stem cell (CSC) model provides a trail to characterize the cells responsible for tumor onset, the NB tumor-initiating cell (TIC) has not been identified. In this study, the relevance of the CSC model in NB was investigated by taking advantage of typical functional stem cell characteristics. A predictive association was established between self-renewal, as assessed by serial sphere formation, and clinical aggressiveness in primary tumors. Moreover, cell subsets gradually selected during serial sphere culture harbored increased in vivo tumorigenicity, which was only highlighted in an orthotopic microenvironment. A microarray time course analysis of serial sphere passages from metastatic cells allowed us to specifically "profile" the NB stem cell-like phenotype and to identify CD133, ABC transporter, and WNT and NOTCH genes as sphere markers. On the basis of combined sphere marker expression, at least two distinct tumorigenic cell subpopulations were identified, which were also shown to preexist in primary NB. However, sphere marker-mediated cell sorting of the parental tumor failed to recapitulate the TIC phenotype in the orthotopic model, highlighting the complexity of the CSC model. Our data support the NB stem-like cells as a dynamic and heterogeneous cell population strongly dependent on microenvironmental signals and add novel candidate genes as potential therapeutic targets in the control of high-risk NB.
Abstract:
OBJECTIVES: To document biopsychosocial profiles of patients with rheumatoid arthritis (RA) by means of the INTERMED and to correlate the results with conventional methods of disease assessment and health care utilization. METHODS: Patients with RA (n = 75) were evaluated with the INTERMED, an instrument for assessing case complexity and care needs. Based on their INTERMED scores, patients were compared with regard to severity of illness, functional status, and health care utilization. RESULTS: In cluster analysis, a 2-cluster solution emerged, with about half of the patients characterized as complex. Complex patients scoring especially high in the psychosocial domain of the INTERMED were disabled significantly more often and took more psychotropic drugs. Although the 2 patient groups did not differ in severity of illness and functional status, complex patients rated their illness as more severe on subjective measures and on most items of the Medical Outcomes Study Short Form 36. Complex patients showed increased health care utilization despite a similar biologic profile. CONCLUSIONS: The INTERMED identified complex patients with increased health care utilization, provided meaningful and comprehensive patient information, and proved to be easy to implement and advantageous compared with conventional methods of disease assessment. Intervention studies will have to demonstrate whether management strategies based on INTERMED profiles can improve treatment response and outcome of complex patients.
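A minimal sketch of the kind of two-cluster analysis mentioned above, applied to simulated multidomain scores (the scikit-learn call and all numbers are illustrative assumptions, not the study's actual procedure):

```python
import numpy as np
from sklearn.cluster import KMeans

# Simulated INTERMED-like scores (four domains) for a small patient sample;
# a two-cluster solution separates more complex from less complex profiles.
rng = np.random.default_rng(3)
scores = np.vstack([rng.normal(loc=4, scale=1.5, size=(40, 4)),   # less complex
                    rng.normal(loc=9, scale=1.5, size=(35, 4))])  # more complex
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print("patients per cluster:", np.bincount(labels))
```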
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee-based, large-margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
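As an illustration of a posterior probability-based heuristic, the sketch below ranks unlabeled samples by the gap between their two highest class posteriors (a breaking-ties style criterion); the scikit-learn classifier and the toy data are assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def breaking_ties_query(model, X_unlabeled, n_queries=10):
    """Posterior probability-based heuristic: the smaller the gap between the
    two most probable classes, the more uncertain the pixel, so it is ranked
    first for labeling. Illustrative sketch of the general idea only."""
    proba = model.predict_proba(X_unlabeled)
    top_two = np.sort(proba, axis=1)[:, -2:]
    margin = top_two[:, 1] - top_two[:, 0]
    return np.argsort(margin)[:n_queries]

# Toy usage with random "pixel" feature vectors standing in for image bands.
rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(60, 4))
y_labeled = rng.integers(0, 3, size=60)
X_pool = rng.normal(size=(500, 4))

clf = LogisticRegression(max_iter=500).fit(X_labeled, y_labeled)
print(breaking_ties_query(clf, X_pool))
```

The queried pixels would then be labeled by the user and appended to the training set before the classifier is retrained, closing the active learning loop.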
Abstract:
Eusociality is taxonomically rare, yet associated with great ecological success. Surprisingly, studies of environmental conditions favouring eusociality are often contradictory. Harsh conditions associated with increasing altitude and latitude seem to favour increased sociality in bumblebees and ants, but the reverse pattern is found in halictid bees and polistine wasps. Here, we compare the life histories and distributions of populations of 176 species of Hymenoptera from the Swiss Alps. We show that differences in altitudinal distributions and development times among social forms can explain these contrasting patterns: highly social taxa develop more quickly than intermediate social taxa, and are thus able to complete the reproductive cycle in shorter seasons at higher elevations. This dual impact of altitude and development time on sociality illustrates that ecological constraints can elicit dynamic shifts in behaviour, and helps explain the complex distribution of sociality across ecological gradients.
Abstract:
Improvement of nerve regeneration and functional recovery following nerve injury is a challenging problem in clinical research. We have already shown that following rat sciatic nerve transection, the local administration of triiodothyronine (T3) significantly increased the number and the myelination of regenerated axons. Functional recovery depends both on the number of regenerated axons and on the reinnervation of denervated peripheral targets. In the present study, we investigated whether the increased number of regenerated axons obtained by T3 treatment is linked to improved reinnervation of hind limb muscles. After transection of rat sciatic nerves, silicone or biodegradable nerve guides were implanted and filled with either T3 or phosphate buffer solution (PBS). Neuromuscular junctions (NMJs) were analyzed on gastrocnemius and plantar muscle sections stained with rhodamine alpha-bungarotoxin and neurofilament antibody. Four weeks after surgery, most end-plates (EPs) of operated limbs were still denervated and no effect of T3 on muscle reinnervation was detected at this stage of nerve repair. In contrast, after 14 weeks of nerve regeneration, T3 clearly enhanced the reinnervation of gastrocnemius and plantar EPs, demonstrated by a significantly higher recovery of the size and shape complexity of reinnervated EPs and also by an increased acetylcholine receptor (AChR) density on postsynaptic membranes compared to PBS-treated EPs. The stimulating effect of T3 on EP reinnervation is confirmed by a higher index of compound muscle action potentials recorded in gastrocnemius muscles. In conclusion, our results provide for the first time strong evidence that T3 enhances the restoration of NMJ structure and improves synaptic transmission.
Abstract:
In this paper we propose a stabilized conforming finite volume element method for the Stokes equations. After stating the convergence of the method, optimal a priori error estimates in different norms are obtained by establishing an adequate connection between the finite volume and stabilized finite element formulations. A superconvergence result is also derived by using a postprocessing projection method. In particular, the stabilization of the continuous lowest equal-order pair finite volume element discretization is achieved by enriching the velocity space with local functions that do not necessarily vanish on the element boundaries. Finally, some numerical experiments that confirm the predicted behavior of the method are provided.
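For reference, the stationary Stokes problem targeted by such discretizations can be written in its standard form (notation assumed, not taken from the paper):

```latex
\begin{aligned}
  -\nu \,\Delta \mathbf{u} + \nabla p &= \mathbf{f} && \text{in } \Omega,\\
  \nabla \cdot \mathbf{u} &= 0 && \text{in } \Omega,\\
  \mathbf{u} &= \mathbf{0} && \text{on } \partial\Omega,
\end{aligned}
```

where \(\mathbf{u}\) is the velocity, \(p\) the pressure, \(\nu\) the viscosity and \(\mathbf{f}\) a body force.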
Abstract:
Human perception of bitterness displays pronounced interindividual variation. This phenotypic variation is mirrored by equally pronounced genetic variation in the family of bitter taste receptor genes. To better understand the effects of common genetic variations on human bitter taste perception, we conducted a genome-wide association study on a discovery panel of 504 subjects and a validation panel of 104 subjects from the general population of São Paulo in Brazil. Correction for general taste sensitivity allowed us to identify a SNP in the cluster of bitter taste receptors on chr12 (10.88-11.24 Mb, build 36.1) significantly associated (best SNP: rs2708377, P = 5.31 × 10⁻¹³, r² = 8.9%, β = -0.12, s.e. = 0.016) with the perceived bitterness of caffeine. This association overlaps with, but is statistically distinct from, the previously identified SNP rs10772420 influencing the perception of quinine bitterness, which falls in the same bitter taste cluster. We replicated this association with quinine perception (P = 4.97 × 10⁻³⁷, r² = 23.2%, β = 0.25, s.e. = 0.020) and additionally found the effect of this genetic locus to be concentration specific, with a strong impact on the perception of low, but no impact on the perception of high, concentrations of quinine. Our study thus furthers our understanding of the complex genetic architecture of bitter taste perception.
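A minimal sketch of the per-SNP additive association test underlying figures such as the quoted β, s.e., r² and P values (simulated data and SciPy's linear regression stand in for the study's actual pipeline):

```python
import numpy as np
from scipy import stats

def snp_association(dosage, phenotype):
    """Single-SNP additive test: regress the phenotype on allele dosage (0/1/2).
    Returns beta, its standard error, variance explained and the P value,
    analogous to the statistics quoted in the abstract. Toy data only."""
    res = stats.linregress(dosage, phenotype)
    return {"beta": res.slope, "se": res.stderr,
            "r2": res.rvalue ** 2, "p": res.pvalue}

rng = np.random.default_rng(42)
dosage = rng.integers(0, 3, size=500)                            # simulated genotypes
phenotype = -0.12 * dosage + rng.normal(scale=0.4, size=500)     # simulated ratings
print(snp_association(dosage, phenotype))
```

In a genome-wide setting this test is repeated for every SNP, with covariates such as general taste sensitivity regressed out of the phenotype beforehand.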
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on the study objective, the attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting against overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinion that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.
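The sketch below illustrates one pragmatic way to probe whether additional complexity is informative: fit occurrence-environment models of increasing flexibility and compare their cross-validated performance. The simulated gradient and the scikit-learn pipeline are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

# Simulated occurrence-environment data: presence probability peaks at an
# intermediate value of a single environmental gradient.
rng = np.random.default_rng(7)
env = rng.uniform(-2, 2, size=(400, 1))
p_presence = np.exp(-(env[:, 0] ** 2))
occurrence = rng.binomial(1, p_presence)

# Compare increasingly flexible response shapes; the cross-validated score
# indicates where extra complexity stops being informative.
for degree in (1, 2, 4, 8):
    model = make_pipeline(PolynomialFeatures(degree, include_bias=False),
                          StandardScaler(),
                          LogisticRegression(max_iter=1000))
    score = cross_val_score(model, env, occurrence, cv=5).mean()
    print(f"degree {degree}: mean CV accuracy {score:.3f}")
```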
Abstract:
We present a novel spatiotemporal-adaptive Multiscale Finite Volume (MsFV) method, which is based on the natural idea that the global coarse-scale problem has a longer characteristic time than the local fine-scale problems. As a consequence, the global problem can be solved with larger time steps than the local problems. In contrast to the pressure-transport splitting usually employed in the standard MsFV approach, we propose to start directly with a local-global splitting that allows the original degree of coupling to be retained locally. This is crucial for highly non-linear systems or in the presence of physical instabilities. To obtain an accurate and efficient algorithm, we devise new adaptive criteria for the global update that are based on changes of coarse-scale quantities rather than on fine-scale quantities, as is routinely done in the adaptive MsFV method. By means of a complexity analysis we show that the adaptive approach gives a noticeable speed-up with respect to the standard MsFV algorithm. In particular, it is efficient in the case of large upscaling factors, which is important for multiphysics problems. Based on the observation that local time stepping acts as a smoother, we devise a self-correcting algorithm which incorporates the information from previous times to improve the quality of the multiscale approximation. We present results of multiphase flow simulations both for Darcy-scale and multiphysics (hybrid) problems, in which a local pore-scale description is combined with a global Darcy-like description. The novel spatiotemporal-adaptive multiscale method based on the local-global splitting is not limited to porous media flow problems, but can be extended to any system described by a set of conservation equations.
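A minimal skeleton of the adaptivity idea described above (trigger a global coarse-scale update only when coarse-scale quantities change appreciably); function and variable names are illustrative assumptions, not taken from the MsFV implementation:

```python
import numpy as np

def needs_global_update(coarse_old, coarse_new, tol=1e-2):
    """Decide whether the global coarse-scale problem must be re-solved.

    The criterion monitors the relative change of coarse-scale quantities
    (e.g. coarse-cell averages) instead of fine-scale fields, mirroring the
    adaptivity idea described above. Illustrative skeleton, not the MsFV code."""
    change = np.linalg.norm(coarse_new - coarse_old)
    change /= max(np.linalg.norm(coarse_old), 1e-12)
    return change > tol

# Toy usage: coarse-cell averages at two consecutive local time steps.
old = np.array([1.00, 0.80, 0.55, 0.30])
new = np.array([1.00, 0.79, 0.56, 0.31])
print("recompute global problem:", needs_global_update(old, new))
```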