Support of hepatic regeneration by trophic factors from liver-derived mesenchymal stromal/stem cells
Abstract:
Mesenchymal stromal/stem cells (MSCs) have multilineage differentiation potential and as such are known to promote regeneration in response to tissue injury. However, accumulating evidence indicates that the regenerative capacity of MSCs is not mediated by transdifferentiation but by their production of trophic and other factors that promote endogenous regeneration pathways of the tissue cells. In this chapter, we provide a detailed description of how to obtain trophic factors secreted by cultured MSCs and how they can be used in small animal models. More specifically, we describe in vivo models to study the paracrine effects of MSCs on regeneration of the liver after surgical resection and/or ischemia and reperfusion injury.
Abstract:
In this paper we introduce a class of descriptors for regular languages arising from an application of the Stone duality between finite Boolean algebras and finite sets. These descriptors, called classical fortresses, are objects specified in classical propositional logic and capable of accepting exactly the regular languages. To prove this, we show that the languages accepted by classical fortresses and by deterministic finite automata coincide. Classical fortresses, besides being propositional descriptors for regular languages, also turn out to be an efficient tool for providing alternative and intuitive proofs of the closure properties of regular languages.
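As a point of reference for the equivalence claimed above, the sketch below shows the machine model on the automaton side of the proof: a deterministic finite automaton accepting a regular language. The example language (words over {a, b} with an even number of a's) and the helper `make_dfa` are illustrative choices, not constructions from the paper, and classical fortresses themselves are not reproduced here.

```python
# Minimal DFA sketch: the model that classical fortresses are proved equivalent to.
def make_dfa(delta, start, accepting):
    def accepts(word):
        q = start
        for symbol in word:
            q = delta[(q, symbol)]
        return q in accepting
    return accepts

# DFA for the regular language { w in {a,b}* : w has an even number of 'a' }
even_as = make_dfa(
    delta={("even", "a"): "odd", ("odd", "a"): "even",
           ("even", "b"): "even", ("odd", "b"): "odd"},
    start="even",
    accepting={"even"},
)

assert even_as("abba") and not even_as("ab")
```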
Abstract:
The most influential theoretical account in time psychophysics assumes the existence of a unitary internal clock based on neural counting. The distinct timing hypothesis, on the other hand, suggests an automatic timing mechanism for processing of durations in the sub-second range and a cognitively controlled timing mechanism for processing of durations in the range of seconds. Although several psychophysical approaches can be applied to identify the internal structure of interval timing in the second and sub-second range, the existing data provide a puzzling picture of rather inconsistent results. In the present chapter, we introduce confirmatory factor analysis (CFA) to further elucidate the internal structure of interval timing performance in the sub-second and second range. More specifically, we investigated whether CFA would support the notion of a unitary timing mechanism or that of distinct timing mechanisms underlying interval timing in the sub-second and second range, respectively. The assumption of two distinct timing mechanisms that are completely independent of each other was not supported by our data. The model assuming a unitary timing mechanism underlying interval timing in both the sub-second and second range fitted the empirical data much better. Finally, we also tested a third model assuming two distinct, but functionally related, mechanisms. The correlation between the two latent variables representing the hypothesized timing mechanisms was rather high, and a comparison of fit indices indicated that the assumption of two associated timing mechanisms described the observed data better than a single latent variable. The models are discussed in the light of the existing psychophysical and neurophysiological data.
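To make the model comparison concrete, the sketch below computes the model-implied covariance Sigma = Lambda * Phi * Lambda' + Theta for a one-factor (unitary clock) and a correlated two-factor model, which is the quantity that fit indices compare against the observed covariance matrix. All loadings, the factor correlation, the residual variances, and the number of tasks are hypothetical placeholders, not estimates from the chapter.

```python
import numpy as np

def implied_cov(loadings, factor_cov, residuals):
    # standard CFA model-implied covariance: Lambda Phi Lambda' + Theta
    return loadings @ factor_cov @ loadings.T + np.diag(residuals)

# Six hypothetical interval-timing tasks: three sub-second, three second-range.
lam_two = np.array([[0.80, 0.00], [0.70, 0.00], [0.75, 0.00],   # sub-second factor
                    [0.00, 0.80], [0.00, 0.70], [0.00, 0.75]])  # second-range factor
phi_two = np.array([[1.00, 0.85],    # high latent correlation (illustrative value)
                    [0.85, 1.00]])
lam_one = lam_two.sum(axis=1, keepdims=True)   # unitary clock: one factor loads on all tasks
phi_one = np.array([[1.0]])
resid = np.full(6, 0.4)

sigma_two = implied_cov(lam_two, phi_two, resid)
sigma_one = implied_cov(lam_one, phi_one, resid)
# Fit indices would quantify how well each Sigma reproduces the observed covariances.
```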
Abstract:
Results on the effectiveness of psychosocial treatments for patients with comorbid psychiatric and substance use disorders (dual disorders) will be discussed based on relevant meta-analyses and comprehensive reviews. Findings pertaining to severe (e.g., schizophrenia) and mild to moderate (e.g., anxiety disorders) dual disorders will be presented. The heterogeneity in patient characteristics, treatments, settings, and measured outcomes within the studies hinders the extraction of simple conclusions regarding how to effectively integrate psychiatric and addiction-oriented services into one psychosocial treatment. However, promising treatment strategies and interventions include integrative programs that comprise motivational interviewing; disorder-specific cognitive-behavioral interventions; substance use reduction interventions such as relapse prevention or contingency management; and/or family interventions. Such programs are generally superior to control groups (e.g., waiting list, treatment as usual) and are sometimes superior to other active treatments (e.g., skills training) in outcomes of substance use, psychiatric disorders, and social functioning.
Abstract:
Although research and clinical interventions for patients with dual disorders have been described since as early as the 1980s, the day-to-day treatment of these patients remains problematic and challenging in many countries. Throughout this book, many approaches and possible pathways have been outlined. Based upon these experiences, some key points can be extracted to guide future developments. (1) New diagnostic approaches are warranted when dealing with patients who have multiple problems, given the limitations of the current categorical systems. (2) Greater emphasis should be placed on secondary prevention and early intervention for children and adolescents at an increased risk of later-life dual disorders. (3) Mental, addiction, and somatic care systems can be integrated, adopting a patient-focused approach to care delivery. (4) Recovery should be taken into consideration when defining treatment interventions and outcome goals. (5) It is important to reduce societal risk factors, such as poverty and early childhood adversity. (6) More resources are needed to provide adequate mental health care in the various countries. The development of European guidance initiatives would provide benefits in many of these areas, making it possible to ensure a more harmonized standard of care for patients with dual disorders.
Abstract:
Behavioral addictions are highly prevalent and have a major individual and societal impact. Moreover, given the growing availability of potentially addictive activities (e.g., the internet, gaming, online pornography), an increase in these types of behavioral disorders is very likely. Gambling Disorder is the best studied of the non-chemical addictions. However, effective treatment interventions need to be further developed, in particular for Internet Addiction. Most of the available evidence supports behavioral interventions as first-line treatment. Specifically for Gambling Disorder, pharmacotherapy can be a useful augmentation. Psychiatric comorbidities are frequent in patients with behavioral addictions and negatively affect the course of non-substance-related disorders. Concurrent treatment of these comorbid disorders is advised, although there is a clear need for studies evaluating the effectiveness of integrated treatment approaches.
Abstract:
Femoroacetabular impingement (FAI) before or after periacetabular osteotomy (PAO) is surprisingly frequent, and surgeons need to be aware of the risk preoperatively and be able to avoid it intraoperatively. In this paper we present a novel computer-assisted planning and navigation system for PAO with impingement analysis and range of motion (ROM) optimization. Our system starts with a fully automatic detection of the acetabular rim, which allows for quantifying the acetabular morphology with parameters such as acetabular version, inclination, and femoral head coverage ratio for computer-assisted diagnosis and planning. The planned situation was optimized with impingement simulation by balancing acetabular coverage with ROM. Intra-operatively, navigation was conducted until the optimized planning situation was achieved. Our experimental results demonstrated that: 1) the fully automated acetabular rim detection was validated with an accuracy of 1.1 ± 0.7 mm; 2) the optimized PAO planning improved ROM significantly compared to planning without ROM optimization; 3) comparing the pre-operatively planned situation and the intra-operatively achieved situation, sub-degree accuracy was achieved for all directions.
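The sketch below illustrates the planning trade-off described above: a candidate acetabular reorientation is scored by balancing femoral head coverage against simulated ROM, with impingement detected as proximity between femoral neck and rim point sets. The geometry helpers, the single flexion axis, the angle grid, the distance threshold, and the weighting are all hypothetical simplifications; the paper's actual impingement simulation operates on the detected rim and full bone surface models.

```python
import numpy as np

def rotation_x(theta):
    # rotation about a single (flexion) axis -- a simplification for illustration
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def impinges(neck_points, rim_points, min_dist=2.0):
    # impingement if any femoral neck point comes too close to the acetabular rim
    d = np.linalg.norm(neck_points[:, None, :] - rim_points[None, :, :], axis=-1)
    return d.min() < min_dist

def rom_score(neck_points, rim_points, flexion_grid_deg):
    # fraction of sampled flexion angles that are free of impingement
    free = 0
    for deg in flexion_grid_deg:
        rotated = neck_points @ rotation_x(np.radians(deg)).T
        if not impinges(rotated, rim_points):
            free += 1
    return free / len(flexion_grid_deg)

def plan_score(coverage_ratio, neck_points, rim_points, w=0.5):
    # balance coverage against ROM; the weight w is an illustrative choice
    rom = rom_score(neck_points, rim_points, np.arange(0, 121, 5))
    return w * coverage_ratio + (1 - w) * rom
```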
Abstract:
Software architecture is the result of a design effort aimed at ensuring a certain set of quality attributes. As we show, quality requirements are commonly specified in practice but are rarely validated using automated techniques. In this paper we analyze and classify commonly specified quality requirements after interviewing professionals and running a survey. We report on the tools used to validate those requirements and comment on the obstacles encountered by practitioners when performing such validation (e.g., insufficient tool support; poor understanding of users' needs). Finally, we discuss opportunities for increasing the adoption of automated tools based on the information we collected during our study (e.g., using a business-readable notation for expressing quality requirements; increasing awareness by monitoring non-functional aspects of a system).
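As a rough illustration of the direction suggested above, the sketch below pairs a quality requirement written in a business-readable form with an automated check against monitored data. The notation, metric name, threshold, and monitored value are hypothetical; they are not a tool or format proposed in the paper.

```python
# Hypothetical business-readable quality requirement plus an automated check.
requirement = {
    "attribute": "performance",
    "statement": "95th percentile response time of the checkout service stays below 300 ms",
    "metric": "checkout.response_time_ms.p95",
    "threshold_ms": 300,
}

# Placeholder for a value pulled from a monitoring system.
monitored = {"checkout.response_time_ms.p95": 412}

def validate(req, metrics):
    value = metrics[req["metric"]]
    ok = value < req["threshold_ms"]
    print(f"{req['attribute']}: {req['statement']} -> {'PASS' if ok else 'FAIL'} ({value} ms)")
    return ok

validate(requirement, monitored)
```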
Abstract:
Debuggers are crucial tools for developing object-oriented software systems as they give developers direct access to the running systems. Nevertheless, traditional debuggers rely on generic mechanisms to explore and exhibit the execution stack and system state, while developers reason about and formulate domain-specific questions using concepts and abstractions from their application domains. This creates an abstraction gap between the debugging needs and the debugging support, leading to an inefficient and error-prone debugging effort. To reduce this gap, we propose a framework for developing domain-specific debuggers called the Moldable Debugger. The Moldable Debugger is adapted to a domain by creating and combining domain-specific debugging operations with domain-specific debugging views, and adapts itself to a domain by selecting, at run time, appropriate debugging operations and views. We motivate the need for domain-specific debugging, identify a set of key requirements, and show how our approach improves debugging by adapting the debugger to several domains.
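The sketch below conveys the core idea in a hedged form: debugger extensions bundle domain-specific operations and views with an activation predicate, and the framework selects the matching extensions at run time from the debugging context. It is written in Python purely for illustration (the Moldable Debugger itself is implemented in Pharo/Smalltalk), and the class names, predicates, and example domains are hypothetical.

```python
class DebuggerExtension:
    def __init__(self, name, activates_on, operations, views):
        self.name = name
        self.activates_on = activates_on   # predicate over the debugging context
        self.operations = operations       # domain-specific debugging operations
        self.views = views                 # domain-specific debugging views

class MoldableDebuggerSketch:
    def __init__(self, extensions):
        self.extensions = extensions

    def extensions_for(self, context):
        # select, at run time, the extensions whose predicate matches the context
        return [ext for ext in self.extensions if ext.activates_on(context)]

parser_ext = DebuggerExtension(
    name="parser debugging",
    activates_on=lambda ctx: ctx.get("domain") == "parsing",
    operations=["step to next production"],
    views=["input stream position", "partial parse tree"],
)
generic_ext = DebuggerExtension(
    name="generic",
    activates_on=lambda ctx: True,
    operations=["step into", "step over"],
    views=["stack", "variables"],
)

debugger = MoldableDebuggerSketch([parser_ext, generic_ext])
active = debugger.extensions_for({"domain": "parsing"})
print([ext.name for ext in active])
```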
Abstract:
Imprecise manipulation of source code (semi-parsing) is useful for tasks such as robust parsing, error recovery, lexical analysis, and rapid development of parsers for data extraction. An island grammar precisely defines only a subset of a language's syntax (islands), while the rest of the syntax (water) is defined imprecisely. Usually, water is defined as the negation of islands. Albeit simple, such a definition of water is naive and impedes the composition of islands. When developing an island grammar, sooner or later a programmer has to create water tailored to each individual island. Such an approach is fragile, however, because the water can change with any change of the grammar. It is time-consuming, because water is defined manually by the programmer rather than automatically. Finally, an island surrounded by water cannot be reused, because water has to be defined for every grammar individually. In this paper we propose a new technique for island parsing: bounded seas. Bounded seas are composable, robust, reusable, and easy to use because island-specific water is created automatically. We integrated bounded seas into a parser combinator framework as a demonstration of their composability and reusability.
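The sketch below gives a much-simplified flavor of automatically created water: a "sea" combinator skips input until its island can start, so the grammar author never writes water rules by hand. It is not the bounded-seas algorithm from the paper (which also bounds the water at the boundary of the enclosing context and integrates with a full parser combinator framework); the helpers `island` and `sea` and the example pattern are illustrative.

```python
import re

def island(pattern):
    regex = re.compile(pattern)
    def parse(text, pos):
        m = regex.match(text, pos)
        return (m.group(), m.end()) if m else None
    return parse

def sea(island_parser):
    def parse(text, pos):
        # water: advance through uninteresting input until the island succeeds
        for i in range(pos, len(text) + 1):
            result = island_parser(text, i)
            if result:
                return result
        return None
    return parse

# Extract a class declaration from arbitrary surrounding "water".
class_island = sea(island(r"class\s+\w+"))
print(class_island("garbage ... class Foo { int x; } more garbage", 0))
```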
Abstract:
Patients with amnestic mild cognitive impairment (MCI) are at high risk for developing Alzheimer's disease. Besides episodic memory dysfunction, they show deficits in accessing contextual knowledge that further specifies a general spatial navigation task or an executive function (EF) task such as virtual action planning. Virtual reality (VR) environments have already been used successfully in cognitive rehabilitation and show increased potential for use in neuropsychological evaluation, allowing for greater ecological validity while being more engaging and user friendly. In our study we employed our in-house virtual action planning museum (VAP-M) platform and a sample of 25 MCI patients and 25 controls in order to investigate deficits in spatial navigation, prospective memory, and executive function. In addition, we used the morphology of late components in event-related potential (ERP) responses as a marker for cognitive dysfunction. The related measurements were fed to a common classification scheme, facilitating a direct comparison of both approaches. Our results indicate that both the VAP-M and the ERP averages were able to differentiate between healthy elders and patients with amnestic mild cognitive impairment, in agreement with the findings of the virtual action planning supermarket (VAP-S). The sensitivity (specificity) was 100% (98%) for the VAP-M data and 87% (90%) for the ERP responses. Considering that ERPs have been shown to advance the early detection and diagnosis of "presymptomatic AD," the suggested VAP-M platform appears to be an appealing alternative.
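A "common classification scheme" of the kind mentioned above can be sketched as applying the same classifier and cross-validation to both feature sets and reading sensitivity and specificity off the confusion matrix. The classifier choice, the random placeholder features, and the feature dimensions below are assumptions for illustration only; they are not the study's actual pipeline or data.

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y = np.array([0] * 25 + [1] * 25)            # 25 controls, 25 MCI patients
vap_features = rng.normal(size=(50, 6))      # placeholder VAP-M behavioural measures
erp_features = rng.normal(size=(50, 8))      # placeholder ERP late-component measures

def sensitivity_specificity(X, y):
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
    y_pred = cross_val_predict(clf, X, y, cv=5)
    tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
    return tp / (tp + fn), tn / (tn + fp)

for name, X in [("VAP-M", vap_features), ("ERP", erp_features)]:
    sens, spec = sensitivity_specificity(X, y)
    print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```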
Abstract:
Real cameras have a limited depth of field. The resulting defocus blur is a valuable cue for estimating the depth structure of a scene. Using coded apertures, depth can be estimated from a single frame. For optical flow estimation between frames, however, the depth-dependent degradation can introduce errors. These errors are most prominent when objects move relative to the focal plane of the camera. We incorporate coded aperture defocus blur into optical flow estimation and allow for piecewise smooth 3D motion of objects. With coded aperture flow, we can establish dense correspondences between pixels in succeeding coded aperture frames. We compare several approaches to computing accurate correspondences for coded aperture images showing objects with arbitrary 3D motion.
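For orientation, the sketch below shows the baseline correspondence problem only: dense matching between two frames via local block matching. Coded aperture flow additionally models the depth-dependent defocus blur and piecewise smooth 3D motion, which this simplified illustration omits; patch and search sizes are arbitrary placeholders.

```python
import numpy as np

def block_match(frame0, frame1, patch=7, search=5):
    # brute-force dense correspondences: for each pixel, find the displacement
    # minimizing the sum of squared differences over a small patch
    h, w = frame0.shape
    r = patch // 2
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = frame0[y - r:y + r + 1, x - r:x + r + 1]
            best, best_cost = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        cand = frame1[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        cost = np.sum((ref - cand) ** 2)
                        if cost < best_cost:
                            best, best_cost = (dy, dx), cost
            flow[y, x] = best
    return flow
```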
Abstract:
We present a novel approach to the reconstruction of depth from light field data. Our method uses dictionary representations and group sparsity constraints to derive a convex formulation. Although our solution results in an increase of the problem dimensionality, we keep the numerical complexity at bay by restricting the space of solutions and by exploiting an efficient primal-dual formulation. Comparisons with state-of-the-art techniques, on both synthetic and real data, show promising performance.
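A key building block of group-sparse formulations like the one described above is the proximal operator of a sum of (unsquared) l2 norms over coefficient groups, i.e. block soft-thresholding; a primal-dual solver applies it (or the corresponding dual projection) at every iteration. The sketch below shows only this operator; the group layout and the regularization weight are illustrative, and the paper's full depth-reconstruction model is not reproduced.

```python
import numpy as np

def prox_group_l2(coeffs, groups, lam):
    # prox of lam * sum_g ||x_g||_2: shrink each group toward zero,
    # zeroing it entirely when its norm is below lam
    out = coeffs.copy()
    for g in groups:
        norm = np.linalg.norm(coeffs[g])
        out[g] = 0.0 if norm == 0 else max(0.0, 1.0 - lam / norm) * coeffs[g]
    return out

x = np.array([0.1, -0.2, 3.0, 2.5, 0.05, 0.0])
groups = [slice(0, 2), slice(2, 4), slice(4, 6)]
print(prox_group_l2(x, groups, lam=0.5))   # groups with small norm are zeroed out entirely
```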
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem with only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution with a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, this prior is non-convex. Therefore, solution methods that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via the primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
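To fix ideas, the sketch below evaluates a two-term energy of the kind described above, E(u, k) = ||k * u - f||^2 + lam * sum(log(eps + |grad u|)), for an image u, blur kernel k, and observation f. The discretization of the gradient and the values of lam and eps are illustrative assumptions, and the paper's primal-dual and majorization-minimization solvers are only hinted at in the closing comment.

```python
import numpy as np
from scipy.signal import convolve2d

def gradient_magnitude(u):
    # simple forward differences with replicated boundary
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.sqrt(gx ** 2 + gy ** 2)

def energy(u, k, f, lam=2e-3, eps=1e-3):
    # least-squares data fidelity + lower-bounded logarithmic gradient prior
    residual = convolve2d(u, k, mode="same") - f
    data_term = np.sum(residual ** 2)
    prior = np.sum(np.log(eps + gradient_magnitude(u)))
    return data_term + lam * prior

# A majorization-minimization step would replace the log term by the weighted
# quadratic w * |grad u|^2 with w = 1 / (eps + |grad u_prev|), so that each
# subproblem in u becomes convex.
```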
Abstract:
Extraction of both pelvic and femoral surface models of the hip joint from CT data for computer-assisted pre-operative planning of hip arthroscopy is addressed. We present a method for fully automatic image segmentation of the hip joint. Our method works by combining fast random forest (RF) regression-based landmark detection and atlas-based segmentation with articulated statistical shape model (aSSM)-based hip joint reconstruction. The two fundamental contributions of our method are: (1) an improved fast Gaussian transform (IFGT) is used within the RF regression framework for fast and accurate landmark detection, which then allows for a fully automatic initialization of the atlas-based segmentation; and (2) aSSM-based fitting is used to preserve the hip joint structure and to avoid penetration between the pelvic and femoral models. Validation on 30 hip CT images shows that our method achieves high performance in segmenting the pelvis, left proximal femur, and right proximal femur surfaces with an average accuracy of 0.59 mm, 0.62 mm, and 0.58 mm, respectively.
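The sketch below illustrates only the first stage of such a pipeline: random forest regression that lets sampled voxels vote for a landmark position from local appearance features, which can then initialize an atlas-based segmentation. The feature extraction, the improved fast Gaussian transform speed-up, and the aSSM fitting are not shown, and the training data, feature dimensions, and forest settings are synthetic placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n_samples, n_features = 500, 32
features = rng.normal(size=(n_samples, n_features))     # placeholder patch descriptors
offsets = rng.normal(scale=5.0, size=(n_samples, 3))    # voxel-to-landmark offsets (mm)

# Train a forest to regress the 3D offset from a voxel to the landmark.
forest = RandomForestRegressor(n_estimators=50, random_state=0)
forest.fit(features, offsets)

# At test time each sampled voxel casts a vote: its position plus the predicted offset.
test_positions = rng.uniform(0, 100, size=(200, 3))
test_features = rng.normal(size=(200, n_features))
votes = test_positions + forest.predict(test_features)
landmark_estimate = votes.mean(axis=0)   # or a mode estimate, e.g. via mean shift
print(landmark_estimate)
```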