52 results for embedded computing


Relevance: 20.00%

Abstract:

Quantitative computed tomography (QCT)-based finite element (FE) models of the vertebral body provide better predictions of vertebral strength than dual-energy X-ray absorptiometry. However, most models were validated against compression of vertebral bodies with endplates embedded in polymethylmethacrylate (PMMA). Yet, since the loading mode is as important as bone density, the absence of the intervertebral disc (IVD) affects the measured strength. Accordingly, the aim was to assess the strength predictions of the classical FE models (vertebral body embedded) against the in vitro and in silico strengths of vertebral bodies loaded via IVDs. High-resolution peripheral QCT (HR-pQCT) scans were performed on 13 segments (T11/T12/L1). T11 and L1 were augmented with PMMA and the samples were tested under a 4° wedge compression until failure of T12. A specimen-specific model was generated for each T12 from the HR-pQCT data. Two FE sets were created: FE-PMMA refers to the classical vertebral-body-embedded model under axial compression; FE-IVD to loading via a hyperelastic IVD model under the wedge compression conducted experimentally. Results showed that FE-PMMA models overestimated the experimental strength, yet their strength prediction was satisfactory considering the different experimental set-up. On the other hand, the FE-IVD models did not prove significantly better (Exp/FE-PMMA: R²=0.68; Exp/FE-IVD: R²=0.71, p=0.84). In conclusion, FE-PMMA models correlate well with the in vitro strength of human vertebral bodies loaded via real IVDs, and FE-IVD models with hyperelastic IVDs do not significantly improve this correlation. Therefore, it does not seem worthwhile to add IVDs to vertebral body models until fully validated patient-specific IVD models become available.
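
The comparison above rests on correlating predicted against measured failure loads. A minimal sketch of that validation step, with invented strength values standing in for the study's data:

```python
# Hedged illustration: correlate experimental vertebral strengths with
# FE-predicted ones, as in the Exp/FE-PMMA and Exp/FE-IVD comparisons.
# The numbers below are invented placeholders, not the study's data.
import numpy as np
from scipy import stats

exp_strength = np.array([3.1, 4.2, 2.8, 5.0, 3.9, 4.6])  # kN, measured
fe_pmma_pred = np.array([3.6, 4.9, 3.1, 5.8, 4.3, 5.2])  # kN, FE-PMMA
fe_ivd_pred  = np.array([3.0, 4.4, 2.6, 5.3, 4.0, 4.8])  # kN, FE-IVD

for name, pred in [("FE-PMMA", fe_pmma_pred), ("FE-IVD", fe_ivd_pred)]:
    res = stats.linregress(exp_strength, pred)
    print(f"{name}: R^2 = {res.rvalue**2:.2f}, slope = {res.slope:.2f}")
```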

Relevance: 20.00%

Abstract:

Unlike previously explored relationships between the properties of hot Jovian atmospheres, the geometric albedo and the incident stellar flux do not exhibit a clear correlation, as revealed by our re-analysis of Q0-Q14 Kepler data. If the albedo is primarily associated with the presence of clouds in these irradiated atmospheres, a holistic modeling approach needs to relate the following properties: the strength of stellar irradiation (and hence the strength and depth of atmospheric circulation), the geometric albedo (which controls both the fraction of starlight absorbed and the pressure level at which it is predominantly absorbed), and the properties of the embedded cloud particles (which determine the albedo). The anticipated diversity in cloud properties renders any correlation between the geometric albedo and the stellar flux weak and characterized by considerable scatter. In the limit of vertically uniform populations of scatterers and absorbers, we use an analytical model and scaling relations to relate the temperature-pressure profile of an irradiated atmosphere and the photon deposition layer and to estimate whether a cloud particle will be lofted by atmospheric circulation. We derive an analytical formula for computing the albedo spectrum in terms of the cloud properties, which we compare to the measured albedo spectrum of HD 189733b by Evans et al. Furthermore, we show that whether an optical phase curve is flat or sinusoidal depends on whether the particles are small or large as defined by the Knudsen number. This may be an explanation for why Kepler-7b exhibits evidence for the longitudinal variation in abundance of condensates, while Kepler-12b shows no evidence for the presence of condensates despite the incident stellar flux being similar for both exoplanets. We include an "observer's cookbook" for deciphering various scenarios associated with the optical phase curve, the peak offset of the infrared phase curve, and the geometric albedo.
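
Whether a phase curve reads as flat or sinusoidal is tied, per the abstract, to particle size via the Knudsen number. A standard definition for orientation (the paper's exact criterion is not reproduced in the abstract):

```latex
% Knudsen number of a cloud particle of radius a in gas with mean free path \lambda:
\mathrm{Kn} \equiv \frac{\lambda}{a},
\qquad
\mathrm{Kn} \gg 1 \;\;\text{(small particles, free-molecular regime)},
\qquad
\mathrm{Kn} \ll 1 \;\;\text{(large particles, continuum regime)}.
```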

Relevance: 20.00%

Abstract:

We construct the theory of dissipative hydrodynamics of uncharged fluids living on embedded space-time surfaces, to first order in a derivative expansion for codimension-1 surfaces (including fluid membranes), and the theory of non-dissipative hydrodynamics, to second order in a derivative expansion, for codimension higher than one, under the assumption of no angular momenta in the directions transverse to the surface. This construction includes the elastic degrees of freedom, and hence the corresponding transport coefficients, that account for transverse fluctuations of the geometry in which the fluid lives. Requiring the second law of thermodynamics to be satisfied leads us to conclude that for codimension-1 surfaces the stress-energy tensor is characterized by 2 hydrodynamic and 1 elastic independent transport coefficients to first order in the expansion, while for codimension higher than one, and for non-dissipative flows, it is characterized by 7 hydrodynamic and 3 elastic independent transport coefficients to second order in the expansion. Furthermore, the constraints imposed between the stress-energy tensor, the bending moment, and the entropy current of the fluid by these extra non-dissipative contributions are fully captured by equilibrium partition functions. This analysis constrains the Young modulus, which can be measured from gravity by elastically perturbing black branes.
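
For orientation, the purely hydrodynamic part of a first-order stress-energy tensor on such a surface takes the familiar viscous form below. This is a generic sketch assuming the two first-order hydrodynamic coefficients are the usual shear and bulk viscosities; the paper's elastic (bending) contribution is omitted.

```latex
% Generic first-order viscous stress-energy tensor on a surface with
% induced metric \gamma^{ab}; the elastic terms of the paper are omitted.
T^{ab} = \varepsilon\, u^{a} u^{b} + P\, \Delta^{ab}
       - 2\eta\, \sigma^{ab} - \zeta\, \theta\, \Delta^{ab},
\qquad
\Delta^{ab} = \gamma^{ab} + u^{a} u^{b}
```

Here \sigma^{ab} is the transverse traceless shear tensor, \theta is the expansion, and \eta, \zeta are the shear and bulk viscosities.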

Relevance: 20.00%

Abstract:

One of the current challenges in evolutionary ecology is understanding the long-term persistence of contemporary-evolving predator–prey interactions across space and time. To address this, we developed an extension of a multi-locus, multi-trait eco-evolutionary individual-based model that incorporates several interacting species in explicit landscapes. We simulated the eco-evolutionary dynamics of multi-species food webs with different degrees of connectance across soil-moisture islands. A broad set of parameter combinations led to the local extinction of species, but some species persisted, and this was associated with (1) high connectance and omnivory and (2) ongoing evolution, due to the multi-trait genetic variability of the embedded species. Furthermore, persistence was highest at intermediate island distances, likely because of a balance between predation-induced extinction (strongest at short island distances) and the coupling of island diversity by top predators, which, by travelling among islands, exert global top-down control of biodiversity. In the simulations with high genetic variation, we also found widespread evolutionary changes in traits, indicative of eco-evolutionary dynamics. We discuss how ever-increasing computing power and the availability of high-resolution data will soon allow researchers to start bridging the in vivo–in silico gap.
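
As a flavor of what such an individual-based update loop involves, here is a heavily simplified sketch: one quantitative trait per individual, trait-dependent predation, mutation, dispersal among islands, and a carrying capacity. All names and numbers are illustrative placeholders; the published model is multi-locus, multi-trait, and food-web structured.

```python
# Toy individual-based step: trait-dependent predation, heritable
# variation (mutation), dispersal between islands, carrying capacity.
import random

N_ISLANDS, K, MUT_SD, DISPERSAL_P = 5, 100, 0.05, 0.1

def predation_risk(trait):
    # Mortality falls as the (illustrative) defence trait increases.
    return 0.5 / (1.0 + max(trait, 0.0))

def step(islands):
    """islands: list of lists of trait values (one trait per individual)."""
    new = [[] for _ in islands]
    for i, pop in enumerate(islands):
        for trait in pop:
            if random.random() < predation_risk(trait):
                continue                                # killed by predation
            child = trait + random.gauss(0.0, MUT_SD)   # mutated offspring
            for t in (trait, child):                    # parent survives too
                j = i
                if random.random() < DISPERSAL_P:       # island coupling
                    j = random.choice([max(i - 1, 0), min(i + 1, N_ISLANDS - 1)])
                new[j].append(t)
    # Enforce a per-island carrying capacity.
    return [random.sample(p, K) if len(p) > K else p for p in new]

islands = [[random.gauss(1.0, 0.2) for _ in range(50)] for _ in range(N_ISLANDS)]
for _ in range(100):
    islands = step(islands)
print([len(p) for p in islands])  # per-island abundances after 100 steps
```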

Relevance: 20.00%

Abstract:

Cloud Computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications may lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system (CMS), increasing the efficiency of resource allocation as well as of infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method for discovering relations between the performance indicators of services belonging to distributed applications, and for using these relations to build scaling rules that a CMS can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All of the presented research was implemented and tested using enterprise distributed applications.
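
A hedged sketch of the kind of SLA-driven scaling rule the dissertation describes, in simple threshold form; the function, field names, and thresholds are invented for illustration, and the actual algorithms are multi-objective and more involved:

```python
# Illustrative SLA-driven horizontal scaling rule: add or remove VMs so a
# response-time SLA holds. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class ScalingDecision:
    delta_vms: int          # >0 scale out, <0 scale in, 0 hold
    reason: str

def scale(rt_p95_ms: float, sla_rt_ms: float, vms: int,
          min_vms: int = 1, headroom: float = 0.7) -> ScalingDecision:
    if rt_p95_ms > sla_rt_ms:                 # SLA violated: scale out
        return ScalingDecision(+1, "p95 response time above SLA")
    if rt_p95_ms < headroom * sla_rt_ms and vms > min_vms:
        return ScalingDecision(-1, "ample headroom: scale in")
    return ScalingDecision(0, "within SLA band")

print(scale(rt_p95_ms=480.0, sla_rt_ms=400.0, vms=3))
```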

Relevance: 20.00%

Abstract:

Autophagy assures cellular homeostasis and is gaining importance in cancer, where it impacts carcinogenesis, propagation of the malignant phenotype, and the development of resistance. To date, its tissue-based analysis by immunohistochemistry remains poorly standardized. Here we show the feasibility of specifically and reliably assessing the autophagy markers LC3B and p62 (SQSTM1) in formalin-fixed and paraffin-embedded human tissue by immunohistochemistry. Preceding functional experiments consisted of depleting LC3B and p62 in H1299 lung cancer cells with subsequent induction of autophagy. Western blot and immunofluorescence validated antibody specificity, knockdown efficiency, and autophagy induction prior to fixation in formalin and embedding in paraffin. LC3B and p62 antibodies were validated on formalin-fixed and paraffin-embedded cell pellets of treated and control cells and finally applied on a tissue microarray with 80 human malignant and non-neoplastic lung and stomach formalin-fixed and paraffin-embedded tissue samples. Dot-like staining of various degrees was observed in the cell pellets and in 18/40 (LC3B) and 22/40 (p62) tumors, respectively. Seventeen tumors were double positive for LC3B and p62. p62 additionally displayed pronounced cytoplasmic and nuclear staining of unknown significance. Interobserver agreement for the grading of staining intensities and patterns was substantial to excellent (kappa values 0.60–0.83). In summary, we present a specific and reliable IHC staining protocol for LC3B and p62 on formalin-fixed and paraffin-embedded human tissue. The presented protocol is designed to aid reliable investigation of dysregulated autophagy in solid tumors and may be used on large tissue collectives.
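
The interobserver agreement quoted above is a kappa statistic. A minimal sketch of computing it for two raters' staining grades, with invented grades and scikit-learn doing the work:

```python
# Illustrative computation of interobserver agreement (Cohen's kappa)
# for two observers grading dot-like staining 0-3. Data are invented.
from sklearn.metrics import cohen_kappa_score

rater_a = [0, 1, 2, 3, 2, 1, 0, 3, 2, 2]
rater_b = [0, 1, 2, 3, 1, 1, 0, 3, 2, 3]
print(f"kappa = {cohen_kappa_score(rater_a, rater_b):.2f}")
```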

Relevance: 20.00%

Abstract:

This article provides an overview of the development and interrelations of the individual elements of fuzzy logic, of which fuzzy set theory forms the foundation. The basic problem lies in handling linguistic information, which is often characterized by imprecision. The various technical applications of fuzzy logic offer a way to build more intelligent computer systems that can cope with imprecise information. Such systems are signs of the emergence of a new era of cognitive computing, which this article also addresses. For better understanding, the article is accompanied by an example from meteorology (i.e., snow in Adelboden).
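
To make the fuzzy-set idea concrete, here is a minimal sketch of a membership function for a linguistic term such as "heavy snowfall"; the breakpoints are invented for illustration and are not taken from the article:

```python
# Triangular fuzzy membership for the linguistic term "heavy snowfall",
# mapping snow depth (cm) to a degree of membership in [0, 1].
# Breakpoints (10, 30, 50 cm) are illustrative placeholders.
def mu_heavy_snow(depth_cm: float, a: float = 10, b: float = 30, c: float = 50) -> float:
    if depth_cm <= a or depth_cm >= c:
        return 0.0
    if depth_cm <= b:
        return (depth_cm - a) / (b - a)   # rising edge
    return (c - depth_cm) / (c - b)       # falling edge

for d in (5, 20, 30, 45):
    print(f"{d} cm -> membership {mu_heavy_snow(d):.2f}")
```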

Relevance: 20.00%

Abstract:

The article proposes granular computing as a theoretical, formal and methodological basis for the newly emerging research field of human–data interaction (HDI). We argue that the ability to represent and reason with information granules is a prerequisite for data legibility. As such, it allows for extending the research agenda of HDI to encompass the topic of collective intelligence amplification, which is seen as an opportunity of today’s increasingly pervasive computing environments. As an example of collective intelligence amplification in HDI, we introduce a collaborative urban planning use case in a cognitive city environment and show how an iterative process of user input and human-oriented automated data processing can support collective decision making. As a basis for automated human-oriented data processing, we use the spatial granular calculus of granular geometry.
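
As a toy illustration of representing and reasoning with information granules, consider interval granules with containment and intersection; the article's spatial granular calculus of granular geometry is considerably richer than this sketch, and all values below are invented:

```python
# Minimal interval granule with containment and intersection, illustrating
# reasoning at a coarser level of detail than raw data points.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Granule:
    lo: float
    hi: float

    def contains(self, other: "Granule") -> bool:
        return self.lo <= other.lo and other.hi <= self.hi

    def meet(self, other: "Granule") -> Optional["Granule"]:
        lo, hi = max(self.lo, other.lo), min(self.hi, other.hi)
        return Granule(lo, hi) if lo <= hi else None

walking_distance = Granule(0, 800)      # "near", in metres (illustrative)
proposed_site    = Granule(300, 600)    # candidate location band
print(walking_distance.contains(proposed_site))   # True
print(walking_distance.meet(Granule(700, 1200)))  # Granule(lo=700, hi=800)
```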

Relevance: 20.00%

Abstract:

In this paper we present BitWorker, a platform for community distributed computing based on BitTorrent. Any splittable task can be easily specified by a user in a meta-information task file, such that it can be downloaded and performed by other volunteers. Peers find each other using Distributed Hash Tables, download existing results, and compute missing ones. Unlike existing distributed computing schemes that rely on centralized coordination points, our scheme is fully distributed and therefore highly robust. We evaluate the performance of BitWorker using mathematical models and real tests, showing processing and robustness gains. BitWorker is available for download and use by the community.
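
A hedged sketch of what specifying a splittable task might look like; the actual BitWorker meta-information file format is not given in the abstract, so the field names below are invented:

```python
# Illustrative splitting of a task into independent chunks that volunteers
# could claim and compute. Field names are hypothetical, not BitWorker's format.
import hashlib, json

def make_task_meta(command: str, n_items: int, chunk: int) -> str:
    parts = [{"id": i, "range": [lo, min(lo + chunk, n_items)]}
             for i, lo in enumerate(range(0, n_items, chunk))]
    meta = {"command": command, "parts": parts}
    # Identify the task by a hash of its description, BitTorrent-style.
    meta["info_hash"] = hashlib.sha1(
        json.dumps(meta, sort_keys=True).encode()).hexdigest()
    return json.dumps(meta, indent=2)

print(make_task_meta("render_frames", n_items=1000, chunk=250))
```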