895 results for Model Driven Software Development, Arduino, Meta-Modeling, Domain Specific Languages, Software Factory


Relevance: 100.00%

Abstract:

Each plasma physics laboratory has its own proprietary control and data acquisition system; the scheme usually differs from one laboratory to another, so each laboratory has its own way of controlling the experiment and retrieving data from the database. Fusion research relies to a great extent on international collaboration, and these proprietary systems make it difficult to follow the work remotely. The TCABR data analysis and acquisition system has been upgraded to support a joint research programme using remote-participation technologies. MDSplus was chosen because it is widely used: scientists from different institutions can use the same system in different experiments on different tokamaks without needing to know how each machine handles its data acquisition and analysis. Another important point is that MDSplus provides a library system that allows communication between different programming languages (Java, Fortran, C, C++, Python) and programs such as MATLAB, IDL and OCTAVE. In the case of the tokamak TCABR, interfaces (the object of this paper) between the system already in use and MDSplus were developed, instead of using MDSplus at all stages, from control and data acquisition to data analysis. This was done to preserve a complex system already in operation, which would otherwise have taken a long time to migrate. The implementation also allows new components to be added that use MDSplus fully at all stages.

Relevance: 100.00%

Abstract:

Background: This paper addresses the prediction of the free energy of binding of a drug candidate with the enzyme InhA, associated with Mycobacterium tuberculosis. The problem arises in rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that can be validated by a domain specialist. Decision-tree induction algorithms have been used successfully in drug-design applications, especially because decision trees are simple to understand, interpret, and validate. Several general-purpose decision-tree induction algorithms are available, but each has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision-tree accuracy, comprehensibility, and biological relevance. Results: The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide a biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this bioinformatics application. Conclusions: We conclude that automatically designing a decision-tree induction algorithm tailored to molecular docking data is a promising approach for predicting the free energy of binding of a drug candidate to a flexible receptor.
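Decision-tree induction as discussed above rests on impurity-based split selection. As a rough illustration only (not the paper's automatic algorithm-design method), a minimal C4.5-style information-gain computation might look like the following, with the toy docking-style data entirely made up:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, feature, threshold):
    """Gain from splitting numeric `feature` at `threshold` (C4.5-style)."""
    left = [l for r, l in zip(rows, labels) if r[feature] <= threshold]
    right = [l for r, l in zip(rows, labels) if r[feature] > threshold]
    n = len(labels)
    remainder = sum(len(part) / n * entropy(part) for part in (left, right) if part)
    return entropy(labels) - remainder

# Hypothetical docking-like data: (feature dict, binding class)
rows = [{"energy": -9.1}, {"energy": -8.7}, {"energy": -5.2}, {"energy": -4.8}]
labels = ["good", "good", "poor", "poor"]
gain = information_gain(rows, labels, "energy", -7.0)  # perfect split here
```

A tailored induction algorithm, as the paper proposes, would vary components such as this split criterion to suit the data distribution.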

Relevance: 100.00%

Abstract:

We studied the locomotor activity rhythms of C57BL/6 mice under a chronic jet lag (CJL) protocol, ChrA(6/2), consisting of 6-hour phase advances of the light-dark (LD) schedule every 2 days. Through periodogram analysis, we found 2 components of the activity rhythm: a short-period component (21.01 +/- 0.04 h) that was entrained by the LD schedule and a long-period component (24.68 +/- 0.26 h). We developed a mathematical model comprising 2 coupled circadian oscillators that was tested experimentally with different CJL schedules. Our simulations suggested that under CJL the system behaves as if it were under a zeitgeber with a period determined by (24 - [phase shift size / days between shifts]). Desynchronization within the system arises according to whether this effective zeitgeber is inside or outside the range of entrainment of the oscillators. In this sense, ChrA(6/2) is interpreted as a (24 - 6/2 = 21 h) zeitgeber, and the simulations predicted the behavior of mice under other CJL schedules with an effective 21-hour zeitgeber. Animals studied under an asymmetric T = 21 h zeitgeber (achieved by shortening every dark phase by 3 hours) showed 2 activity components, as observed under ChrA(6/2): an entrained short-period component (21.01 +/- 0.03 h) and a long-period component (23.93 +/- 0.31 h). Internal desynchronization was lost when mice were subjected to 9-hour advances every 3 days, a possibility also contemplated by the simulations. The simulations further predicted that desynchronization should be less prevalent under delaying than under advancing CJL. Indeed, most mice subjected to 6-hour delay shifts every 2 days (an effective 27-hour zeitgeber) displayed a single entrained activity component (26.92 +/- 0.11 h). Our results demonstrate that the disruption provoked by CJL schedules does not depend on the phase-shift magnitude or on the frequency of the shifts separately, but on the combination of both through their ratio and, additionally, on their absolute values.

In this study, we present a novel model of forced desynchronization in mice under a specific CJL schedule; in addition, our model provides theoretical tools for evaluating circadian disruption under the CJL conditions currently used in circadian research.
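The effective-zeitgeber rule quoted above, 24 - [phase shift size / days between shifts], can be transcribed directly. The helper below is an illustrative sketch (the function name is my own, not the paper's):

```python
def effective_zeitgeber_period(shift_hours, days_between, advance=True):
    """Effective period (h) of a chronic jet lag schedule:
    24 - (shift/days) for phase advances, 24 + (shift/days) for delays."""
    step = shift_hours / days_between
    return 24.0 - step if advance else 24.0 + step

p_chra = effective_zeitgeber_period(6, 2)                  # ChrA(6/2): 21-h zeitgeber
p_93   = effective_zeitgeber_period(9, 3)                  # 9-h advances / 3 days: also 21 h
p_del  = effective_zeitgeber_period(6, 2, advance=False)   # 6-h delays / 2 days: 27 h
```

This makes explicit why the 9-hour/3-day schedule and ChrA(6/2) are predicted to behave alike: both reduce to the same 21-hour effective zeitgeber.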

Relevance: 100.00%

Abstract:

A large percentage of industrial SMEs have an organizational structure for product development that is far from the practices and models elaborated by renowned authors in the field of product development. At the same time, these authors state that SMEs obtain considerable advantages by adopting a model of product development process (PDP) management. Health is one of the most innovative sectors in the world, and countries like Brazil and Colombia are transitioning from a system that cares for contagious infectious diseases, where the drug product is the main form of treatment, to a system that cares for chronic degenerative conditions, where equipment, including hospital furniture, is more relevant to treatment. This change offers better opportunities in specialized markets to the hospital-furniture SMEs that adopt an adequate PDP model. The present study proposes a first outline of a model of PDP management for industrial metal-mechanical SMEs that develop and manufacture hospital furniture, based on a review of models proposed for the broader mechanical sector.

Relevance: 100.00%

Abstract:

The objective of this work was to apply the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) to the Londrina region and to provide a preliminary evaluation of its performance, comparing the results with measurements from meteorological stations. The model was configured to run with three domains at 27, 9, and 3 km grid resolution using the ndown program; a simulation was also carried out with the model configured to run with a single domain, using a land-use file based on a classified MODIS image of the region. The emission files supplying the chemistry run were generated based on Martins et al. (2012). The RADM2 chemical mechanism and the MADE/SORGAM modal aerosol model were used in the simulations. The results show that the model was able to represent coherently the formation and dispersion of pollution in the Metropolitan Region of Londrina, and also the importance of using an appropriate land-use file for the region.

Relevance: 100.00%

Abstract:

The serious territorial problems in the Canary Islands have motivated continuous regional legislative development since the 1980s, culminating in the approval in 2000 of a Consolidated Text (Texto Refundido) of the land-planning laws, which attempted to structure territorial planning, urban planning, and the protection of natural areas hierarchically. In addition to providing its own planning instruments, the present decade has been characterized by the legal definition of a sustainable development model through the formulation of planning Guidelines (Directrices de ordenación). In this article we analyze the characteristics of territorial planning in the Canary Islands and of the development model enacted, in order to show its limited capacity for intervention, while also offering a reflection on the recently approved anti-cyclical measures, which may bring about the collapse of part of the development model built over the present decade.

Relevance: 100.00%

Abstract:

The La Aldea valley, in western Gran Canaria, is devoted to intensive agriculture in a semi-arid climate. Irrigation water comes from surface water and groundwater. The aquifer is isolated from the rest of the island by the impermeable boundary of the Tejeda Caldera. The main alluvial body of La Aldea behaves as a water storage reservoir that fills and empties, with a mean renewal time of approximately 2 years. The groundwater shows high salinity, of natural origin (evapo-concentration of atmospheric deposition and water-rock interaction) and of anthropogenic origin (irrigation return flows, which produce nitrate contents that can reach 700 mg/L). A conceptual model of aquifer behaviour has been established, and the terms of the water balance have been quantified. The current use of the aquifer conflicts with the requirements of the Water Framework Directive (WFD). However, given that its use is key to the economic development of the La Aldea valley in particular, the specific legal exemptions provided for in the WFD may be invoked.

Relevance: 100.00%

Abstract:

Nowadays it is clear that creating a sustainable future for the next generations requires re-thinking the industrial application of chemistry. It is also evident that more sustainable chemical processes may be economically convenient compared with conventional ones: fewer by-products mean lower costs for raw materials, separation, and disposal treatments, and also imply higher productivity and, as a consequence, smaller reactors. In addition, an indirect gain may derive from the better public image of a company marketing sustainable products or processes. In this context, oxidation reactions play a major role, being the tool for producing huge quantities of chemical intermediates and specialties. The impact of these productions on the environment could have been much worse than it is, had continuous effort not been spent on improving the technologies employed. Substantial technological innovations have driven the development of new catalytic systems and the improvement of reaction and process technologies, moving the chemical industry towards a more sustainable and ecological approach. The roadmap for applying these concepts includes new synthetic strategies, alternative reactants, catalyst heterogenisation, and innovative reactor configurations and process design. In order to turn these ideas into real projects, the development of more efficient reactions is a primary target. Yield, selectivity, and space-time yield are the right metrics for evaluating reaction efficiency. In catalytic selective oxidation, the control of selectivity has always been the principal issue, because the formation of total oxidation products (carbon oxides) is thermodynamically more favoured than the formation of the desired, partially oxidised compound.

In fact, total or near-total conversion is achieved in only a few oxidation reactions, and selectivity is usually limited by the formation of by-products or co-products, which often implies unfavourable process economics; moreover, the cost of the oxidant sometimes further penalises the process. During my PhD work I investigated four reactions that are emblematic of the new approaches used in the chemical industry. Part A of the thesis describes a new process aimed at a more sustainable production of menadione (vitamin K3). The greener approach uses hydrogen peroxide in place of chromate (moving from a stoichiometric to a catalytic oxidation) and avoids the production of dangerous waste. I also studied the possibility of using a heterogeneous catalytic system able to activate hydrogen peroxide efficiently. The overall process would be carried out in two steps: first the methylation of 1-naphthol with methanol to yield 2-methyl-1-naphthol, and second the oxidation of the latter compound to menadione. The catalyst for this second step, the reaction I investigated, is Nb2O5-SiO2 prepared by the sol-gel technique. The catalytic tests were first carried out under conditions simulating the in-situ generation of hydrogen peroxide, that is, using a low concentration of the oxidant; experiments were then carried out with higher hydrogen peroxide concentrations. The study of the reaction mechanism was fundamental to identifying the best operating conditions and improving the selectivity to menadione. In Part B, I explored the direct oxidation of benzene to phenol with hydrogen peroxide. The current industrial process for phenol is the oxidation of cumene with oxygen, which also co-produces acetone.

This can be considered a case in which economics drives the sustainability issue: the new process yields phenol directly, avoiding the co-production of acetone (a burden, because the market requirements for the two products are quite different), and might be economically convenient with respect to the conventional process if a high selectivity to phenol were obtained. Titanium silicalite-1 (TS-1) is the catalyst chosen for this reaction. By comparing the reactivity of TS-1 samples with different chemical-physical properties and analysing in detail the effect of the most important reaction parameters, we could formulate hypotheses concerning the reaction network and mechanism. Part C of the thesis deals with the hydroxylation of phenol to hydroquinone and catechol. This reaction is already applied industrially but, for economic reasons, an increase in selectivity to the para-dihydroxylated compound and a decrease in selectivity to the ortho isomer would be desirable. Here too the catalyst used was TS-1. The aim of my research was to find a method to control the selectivity ratio between the two isomers and thus make the industrial process more flexible, so that its performance can be adapted to fluctuations in market requirements. The reaction was carried out both in a batch stirred reactor and in a recirculating fixed-bed reactor. In the first system, the effect of various reaction parameters on catalytic behaviour was investigated: the type of solvent or co-solvent, and the particle size. With the second reactor type, I investigated the possibility of using a continuous system with the catalyst shaped as extrudates (instead of powder), in order to avoid the catalyst filtration step. Finally, Part D deals with a new process for the valorisation of glycerol by transformation into valuable chemicals.

Glycerol is nowadays produced in large amounts as a co-product of biodiesel synthesis, and is therefore considered a raw material from renewable resources (a bio-platform molecule). Initially, we tested the liquid-phase oxidation of glycerol with hydrogen peroxide and TS-1, but the results were not satisfactory. We then investigated the gas-phase transformation of glycerol into acrylic acid via the intermediate formation of acrolein: acrolein can be obtained by dehydration of glycerol and then oxidised to acrylic acid. Since the oxidation step from acrolein to acrylic acid is already optimised at industrial level, we decided to investigate the first step of the process in depth. I studied the reactivity of heterogeneous acid catalysts based on sulphated zirconia. Tests were carried out under both aerobic and anaerobic conditions, in order to investigate the effect of oxygen on the catalyst deactivation rate (one of the main problems met in glycerol dehydration). Finally, I studied the reactivity of bifunctional systems made of Keggin-type polyoxometalates, either alone or supported on sulphated zirconia, thereby combining the acid functionality (necessary for the dehydration step) with the redox functionality (necessary for the oxidation step). In conclusion, during my PhD work I investigated reactions that apply green chemistry rules and strategies: new greener approaches to the synthesis of chemicals (Parts A and B), the optimisation of reaction parameters to make an oxidation process more flexible (Part C), and the use of a bio-platform molecule for the synthesis of a chemical intermediate (Part D).
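The reaction-efficiency metrics named above (conversion, selectivity, and the resulting yield) follow from simple mole balances. A minimal sketch, with illustrative numbers and an assumed 1:1 stoichiometry (not data from the thesis):

```python
def conversion(n_in, n_out):
    """Fractional conversion of the limiting reactant."""
    return (n_in - n_out) / n_in

def selectivity(n_product, n_reacted, nu=1.0):
    """Moles of desired product per mole of reactant converted
    (nu = stoichiometric ratio, assumed 1:1 here)."""
    return n_product / (nu * n_reacted)

def space_time_yield(n_product_mol, reactor_volume_l, time_h):
    """Product formed per reactor volume per unit time (mol L^-1 h^-1)."""
    return n_product_mol / (reactor_volume_l * time_h)

# Illustrative case: 100 mol fed, 40 mol unreacted, 45 mol of desired product
X = conversion(100, 40)          # conversion
S = selectivity(45, 100 - 40)    # selectivity
Y = X * S                        # overall yield = conversion x selectivity
```

The trade-off described in the text is visible here: pushing conversion X higher in a selective oxidation usually lowers S as carbon oxides form, so the yield Y does not improve proportionally.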

Relevance: 100.00%

Abstract:

Investigations on the formation and specification of neural precursor cells in the central nervous system of the Drosophila melanogaster embryo. Specification of a unique cell fate during the development of a multicellular organism is often a function of a cell's position. The Drosophila central nervous system (CNS) provides an ideal system for dissecting the signalling events during development that lead to cell-specific patterns. The different cell types of the CNS are formed from relatively few precursor cells, the neuroblasts (NBs), which delaminate from the neurogenic region of the ectoderm. Delamination occurs in five waves, S1-S5, finally leading to a subepidermal layer of about 30 NBs per hemisegment, each with a unique identity, arranged in a stereotyped spatial pattern. This identity depends on several factors, such as the concentrations of various morphogens, cell-cell interactions, and long-range signals present at the position and time of a neuroblast's birth. The early NBs, delaminating during S1 and S2, form an orthogonal array of four rows (2/3, 4, 5, 6/7) and three columns (medial, intermediate, and lateral). However, this four-row, three-column arrangement is only transitory during early neurogenesis and is obscured by the late-emerging (S3-S5) neuroblasts (Doe and Goodman, 1985; Goodman and Doe, 1993). The aim of my study has therefore been to identify novel genes that play a role in the formation or specification of late-delaminating NBs. In this study, the gene anterior open (yan) was picked up in a genetic screen designed to identify novel, as yet unidentified genes involved in late neuroblast formation and specification. I have shown that yan is responsible for maintaining the cells of the neuroectoderm in an undifferentiated state by interfering with the Notch signalling mechanism.

Secondly, I have studied the function and interactions of segment polarity genes within a particular neuroectodermal region, namely the engrailed (en) expressing domain, with regard to the fate specification of a set of late neuroblasts, NB 6-4 and NB 7-3. I have dissected the regulatory interactions of the segment polarity genes wingless (wg), hedgehog (hh) and engrailed (en), which maintain each other's expression, to show that En is a prerequisite for neurogenesis, and I show that the interplay of the segmentation genes naked (nkd) and gooseberry (gsb), both of which are targets of wingless (wg) activity, leads to differential commitment to the NB 7-3 and NB 6-4 cell fates. I have shown that in the absence of either nkd or gsb one NB fate is replaced by the other; however, the temporal sequence of delamination is maintained, suggesting that the formation and specification of these two NBs are under independent control.

Relevance: 100.00%

Abstract:

A 3-dimensional global model of the lower atmosphere was used to study ozone chemistry, the chemistry of the hydroxyl radical (OH), and important precursors such as reactive nitrogen compounds and hydrocarbons. For this purpose, the treatment of non-methane hydrocarbons (NMHCs) was added, which also required developing a simplified description of their chemistry and accounting for deposition processes and emissions. A fast Rosenbrock method was used to solve the stiff ordinary differential equations of the chemistry; it was implemented so that the model chemistry can easily be modified for future studies. To evaluate the model, a comprehensive comparison of model results with ground-based measurements as well as aircraft, sonde, and satellite data was carried out. The model reproduces many aspects of the observations of the various species realistically. However, some discrepancies were found, which point to erroneous emission fields, to the model dynamics, and also to missing model chemistry. To further investigate the influence of different groups of species, three runs with chemistry of differing complexity were analysed. Including the NMHCs significantly affects the distribution of several important species, producing, for example, an increase in global ozone. It was shown that the biogenic compound isoprene accounts for about half of the total NMHC effect (more in the tropics, less elsewhere). In a sensitivity study, the uncertainties in modelling isoprene were examined further. It was shown that the uncertainty in the physical aspects (deposition and heterogeneous processes) can be as large as that originating from the gas-phase chemical mechanism, which led to globally significant deviations; locally, even larger deviations can occur.

In summary, the numerical studies of this work provided new insights into important aspects of the photochemistry of the troposphere and led to proposals for further studies that could reduce the most important uncertainties found.
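The stiffness that motivates a Rosenbrock-type solver can be seen on a scalar test problem. The sketch below contrasts explicit Euler with the linearly implicit Euler step, the one-stage building block of Rosenbrock methods (a deliberate simplification of the solver actually used in the model):

```python
def explicit_euler(lam, y0, h, steps):
    """Explicit Euler for y' = -lam*y; unstable when h*lam > 2."""
    y = y0
    for _ in range(steps):
        y = y + h * (-lam * y)
    return y

def implicit_euler(lam, y0, h, steps):
    """Linearly implicit step: solve y_new = y + h*(-lam*y_new).
    This is the 1-stage building block of Rosenbrock-type methods."""
    y = y0
    for _ in range(steps):
        y = y / (1 + h * lam)
    return y

# Stiff test problem y' = -1000*y, y(0) = 1, step h = 0.01 (so h*lam = 10)
exp_val = explicit_euler(1000.0, 1.0, 0.01, 100)   # oscillates and blows up
imp_val = implicit_euler(1000.0, 1.0, 0.01, 100)   # decays toward 0, as it should
```

For a full chemistry mechanism, lam becomes a Jacobian matrix and each step solves a linear system, but the stability argument is the same: the implicit variant tolerates step sizes far beyond the explicit stability limit.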

Relevance: 100.00%

Abstract:

The carbonate outcrops of the Monte Conero anticline (Italy) were studied in order to characterize the geometry of the fractures and to establish their influence on petrophysical properties (hydraulic conductivity) and on vulnerability to pollution. The outcrops form an analogue for a fractured aquifer and belong to the Maiolica Fm. and the Scaglia Rossa Fm. The geometrical properties of the fractures, such as orientation, length, spacing, and aperture, were collected and statistically analyzed. Five types of mechanical fractures were observed: veins, joints, stylolites, breccias, and faults. The fracture types are arranged in different sets and geometric assemblages which form fracture networks. The fractures were also analyzed at the microscale using thin sections. The fracture age relationships proved similar to those observed at the outcrop scale, indicating that at least three geological episodes have occurred at Monte Conero. A conceptual model for fault development was based on the observations of veins and stylolites. The fracture sets were modelled with the code FracSim3D to generate fracture network models. The permeability of a breccia zone was estimated at the microscale by point counting and binary image methods, and at the outcrop scale with Oda's method. Microstructural analysis revealed that only faults and breccias are potential pathways for fluid flow, since all the veins observed are filled with calcite. Accordingly, three scenarios were designed to assess the vulnerability to pollution of the analogue aquifer: the first considers Monte Conero without fractures, the second includes all observed systematic fractures, and the third includes open veins, joints, and faults/breccias. The fractures influence the carbonate aquifer by increasing its porosity and hydraulic conductivity. The vulnerability to pollution also depends on the presence of karst zones, detrital zones, and the material of the vadose zone.
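Oda's method, used above at the outcrop scale, builds a permeability tensor from the fracture statistics. As a much simpler illustration of how aperture and spacing control equivalent permeability, the standard parallel-plate "cubic law" for a single fracture set can be sketched as follows (values and function names are illustrative, not from the study):

```python
def fracture_set_permeability(aperture_m, spacing_m):
    """Equivalent Darcy permeability (m^2) of a set of parallel,
    smooth fractures: cubic law, k = a^3 / (12 s)."""
    return aperture_m**3 / (12.0 * spacing_m)

def hydraulic_conductivity(k_m2, rho=1000.0, g=9.81, mu=1.0e-3):
    """K = k * rho * g / mu, in m/s (properties of water at ~20 C assumed)."""
    return k_m2 * rho * g / mu

# Illustrative set: 0.1 mm aperture, 0.5 m spacing
k = fracture_set_permeability(1e-4, 0.5)
K = hydraulic_conductivity(k)
```

The cubic dependence on aperture is why calcite-filled veins (effective aperture zero) contribute nothing, while a few open faults or breccia zones can dominate the aquifer's conductivity.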

Relevance: 100.00%

Abstract:

The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM), thanks to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of an entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family of methods and by Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis develops three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state; the effect of prestress, residual stress, and applied loads on the dispersion properties of guided waves is shown. Next, a two-and-a-half-dimensional Boundary Element Method (2.5D BEM) is proposed for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section; the attenuation dispersion spectrum due to material damping and geometrical spreading of cavities of arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media; dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.

Relevance: 100.00%

Abstract:

The need to effectively manage the documentation covering the entire production process, from the concept phase right through to market release, is a key issue in the creation of a successful and highly competitive product. For almost forty years the most commonly used strategies to achieve this have followed Product Lifecycle Management (PLM) guidelines. Translated into information management systems at the end of the 1990s, this methodology is now widely used by companies operating all over the world in many different sectors. PLM systems and editor programs are the two principal types of software application used by companies for their process automation. Editor programs allow information related to the production chain to be stored in documents, while the PLM system stores and shares this information so that it can be used within the company and made available to partners. Various software tools that capture and store documents and information automatically in a PLM system have been developed in recent years. One of them is the DirectPLM application, developed by the Italian company Focus PLM and designed to ensure interoperability between many editors and the Aras Innovator PLM system. In this dissertation we present DirectPLM2, a new version of the DirectPLM application, designed and developed as a prototype during an internship at Focus PLM. Its new implementation separates the abstract business logic from the implementation of the concrete commands, which was previously strongly dependent on Aras Innovator. Thanks to this new design, Focus PLM can easily develop different versions of DirectPLM2, each devised for a specific PLM system: the company can focus its development effort on the specific set of software components providing the specialised functions that interact with that particular PLM system. This allows a shorter time-to-market and gives the company a significant competitive advantage.
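The separation DirectPLM2 introduces, abstract business logic versus PLM-specific command implementations, is essentially an adapter pattern. A minimal sketch with entirely hypothetical class and method names (the real DirectPLM2 interfaces are not given in this text):

```python
from abc import ABC, abstractmethod

class PLMConnector(ABC):
    """Abstract interface the business logic depends on; each PLM system
    gets its own concrete implementation (hypothetical design)."""
    @abstractmethod
    def store_document(self, name: str, payload: bytes) -> str:
        ...

class ArasInnovatorConnector(PLMConnector):
    """One concrete backend; only this layer would know Aras-specific commands.
    Here the 'vault' is just an in-memory dict for illustration."""
    def __init__(self):
        self.vault = {}

    def store_document(self, name, payload):
        doc_id = f"aras-{len(self.vault) + 1}"
        self.vault[doc_id] = (name, payload)
        return doc_id

def publish(connector: PLMConnector, name, payload):
    """Business logic written only against the abstract interface."""
    return connector.store_document(name, payload)

doc_id = publish(ArasInnovatorConnector(), "spec.pdf", b"...")
```

Porting to another PLM system then means writing one new `PLMConnector` subclass, leaving `publish` and the rest of the business logic untouched, which is the time-to-market argument made above.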

Relevance: 100.00%

Abstract:

Since the era of Cloud Computing began, much has changed: it is now possible to obtain a server in real time and to use automated tools to install applications on it. This thesis describes MODDE (Model-Driven Deployment Engine), a tool for automatic deployment starting from the ABS language. ABS is an object-oriented language that allows classes to be described in an abstract manner; each component declared in this language has values and dependencies. The specification language DDLang, in which all the constraints and final configurations are expressed, is then described. Next, the architecture of MODDE is explained: it uses scripts that integrate the tools Zephyrus and Metis and, from the three input files, creates an ABS main, which is used to allocate the machines in a cloud. The two sub-tools used by MODDE are also introduced: Zephyrus and Metis. The former selects which services to install, taking all their dependencies into account and trying to optimise the result; the latter manages the order in which to install them, taking into account their internal states and dependencies. Together, these components yield a rather effective automatic installation. Finally, after explaining how MODDE works, the thesis shows how to integrate it into a web service to make it available to users; it is installed on an Apache HTTP server inside a Docker container.
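Metis-style dependency-ordered installation is, at its core, a topological sort of the service dependency graph. A minimal sketch using Python's standard library, with a hypothetical service graph (an illustration of the ordering problem, not Metis's actual algorithm, which also tracks the components' internal states):

```python
from graphlib import TopologicalSorter

# Hypothetical service graph: each service maps to the services
# it must be installed after.
deps = {
    "webapp":    {"appserver", "database"},
    "appserver": {"runtime"},
    "database":  set(),
    "runtime":   set(),
}

# static_order() yields an installation order in which every
# service appears after all of its dependencies.
order = list(TopologicalSorter(deps).static_order())
```

Zephyrus would decide *which* nodes appear in such a graph (optimising the component selection); an ordering pass like this then decides *when* each one is installed.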

Relevance: 100.00%

Abstract:

In biostatistical applications, interest often focuses on estimating the distribution of a time-until-event variable T. If one observes whether or not T exceeds an observed monitoring time at a random number of monitoring times, the data structure is called interval-censored data. We extend this data structure by allowing the presence of a possibly time-dependent covariate process that is observed until the end of follow-up. If one only assumes that the censoring mechanism satisfies coarsening at random, then, by the curse of dimensionality, typically no regular estimators will exist. To fight the curse of dimensionality we follow the approach of Robins and Rotnitzky (1992) and model parameters of the censoring mechanism. We model the right-censoring mechanism by modeling the hazard of the follow-up time, conditional on T and the covariate process. For the monitoring mechanism we avoid modeling the joint distribution of the monitoring times by modeling only a univariate hazard of the pooled monitoring times, conditional on the follow-up time T and the covariate process; this hazard can be estimated by treating the pooled sample of monitoring times as i.i.d. In particular, it is assumed that the monitoring times and the right-censoring times depend on T only through the observed covariate process. We introduce an inverse probability of censoring weighted (IPCW) estimator of the distribution of T, and of smooth functionals thereof, which is guaranteed to be consistent and asymptotically normal if correctly specified semiparametric models are available for the two hazards of the censoring process.

Furthermore, given such correctly specified models for these hazards of the censoring process, we propose a one-step estimator which improves on the IPCW estimator if we correctly specify a lower-dimensional working model for the conditional distribution of T given the covariate process, and which remains consistent and asymptotically normal even if this working model is misspecified. It is shown that the one-step estimator is efficient if each subject is monitored at most once and the working model contains the truth. In general, the one-step estimator optimally uses the surrogate information if the working model contains the truth. It is not optimal in using the interval information provided by the current-status indicators at the monitoring times, but simulations in Peterson and van der Laan (1997) show that the efficiency loss is small.
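The basic IPCW idea, weighting each uncensored observation by the inverse of the censoring survival probability at its event time, can be illustrated on a toy right-censored sample with a *known* censoring distribution (a deliberate simplification: the paper estimates the censoring hazards semiparametrically rather than assuming them known):

```python
import math
import random

def ipcw_mean(times, events, censor_surv):
    """IPCW estimate of E[T]: average of Delta_i * T_i / S_C(T_i),
    where Delta_i = 1 if the event was observed, S_C = censoring survival."""
    n = len(times)
    return sum(d * t / censor_surv(t) for t, d in zip(times, events)) / n

random.seed(0)
n = 20000
# Simulated data: T ~ Exp(1), independent censoring C ~ Exp(0.5);
# we observe only min(T, C) and the event indicator.
data = [(random.expovariate(1.0), random.expovariate(0.5)) for _ in range(n)]
times  = [min(t, c) for t, c in data]
events = [1 if t <= c else 0 for t, c in data]

# With the true S_C(t) = exp(-0.5 t), the weighted mean is unbiased for E[T] = 1,
# even though roughly a third of the sample is censored.
est = ipcw_mean(times, events, censor_surv=lambda t: math.exp(-0.5 * t))
```

Replacing the known `censor_surv` with one fitted from a (semiparametric) model of the censoring hazard gives the estimator family discussed above; the one-step estimator then adds an augmentation term to recover efficiency.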