901 results for Formal Methods. Component-Based Development. Competition. Model Checking


Relevance:

100.00%

Publisher:

Abstract:

This thesis studies survival analysis techniques that deal with censoring to produce tools for predicting the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring means that some patients do not complete follow-up, so their outcome class is unknown. Existing methods for handling censoring have drawbacks and cannot cope with the high censoring of the two EVAR datasets collected. This thesis therefore presents a new solution to high censoring by modifying an approach that was previously unable to differentiate between risk groups of aortic complications. Feature selection (FS) becomes complicated under censoring: most survival FS methods depend on Cox's model, whereas machine learning classifiers (MLCs) are often preferred. The few methods that adopt MLCs for survival FS cannot be used with high censoring. This thesis proposes two FS methods that use MLCs to evaluate features and rely on the new solution to deal with censoring. Both combine factor analysis with a greedy stepwise FS search that allows eliminated features to re-enter the FS process. The first method searches for the best neural network configuration and subset of features. The second combines support vector machines, neural networks, and K-nearest-neighbor classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers, and introduces a hybrid FS process that uses the MCS as a wrapper method merged with an iterated feature-ranking filter method to further reduce the feature set. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator (LASSO), in terms of log-rank test p-values, sensitivity, and concordance. This shows that the proposed techniques are more powerful in correctly predicting the risk of re-intervention and thus enable doctors to set an appropriate future surveillance plan for each patient.
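
As a rough illustration of the wrapper idea described above, the following sketch builds a majority-voting multiple classifier system from an SVM, a neural network, and a k-NN classifier and uses it inside a plain greedy forward feature selection. The thesis's treatment of censoring, the factor analysis step, and the re-entry of eliminated features are not reproduced; the data and all settings are synthetic.

```python
# A hedged sketch, using scikit-learn, of an MCS (SVM + neural network + k-NN with
# majority voting) used as a wrapper to evaluate candidate feature subsets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def make_mcs():
    """Majority-voting multiple classifier system (MCS)."""
    return VotingClassifier(
        estimators=[("svm", SVC()),
                    ("nn", MLPClassifier(max_iter=500)),
                    ("knn", KNeighborsClassifier())],
        voting="hard",
    )

def greedy_forward_selection(X, y, max_features=5):
    """Add, one at a time, the feature that most improves cross-validated accuracy."""
    selected, remaining, best_score = [], list(range(X.shape[1])), 0.0
    while remaining and len(selected) < max_features:
        scores = {f: cross_val_score(make_mcs(), X[:, selected + [f]], y, cv=3).mean()
                  for f in remaining}
        f_best, s_best = max(scores.items(), key=lambda kv: kv[1])
        if s_best <= best_score:
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_score = s_best
    return selected, best_score

if __name__ == "__main__":
    X, y = make_classification(n_samples=200, n_features=10, n_informative=4, random_state=0)
    print(greedy_forward_selection(X, y))
```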

Relevance:

100.00%

Publisher:

Abstract:

Using the Duncan–Hoffman model, the paper estimates the returns to educational mismatch using comparable micro data (representative cross-sectional samples reflecting mid-2000s conditions) for 25 European countries. The aim is to gauge the extent to which the main empirical regularities reported in the literature on the returns to mismatch are confirmed by this data base. Based on tests proposed by Hartog and Oosterbeek, the author also considers whether the observed empirical patterns accord with the basic Mincerian human-capital model and Thurow's job-competition model. Estimates based on Heckman's sample-selection estimator are largely consistent with the returns found in the literature; however, the statistical tests reject both the job-competition model and the Mincerian human-capital model for most countries.
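
For readers unfamiliar with the specification, the sketch below shows the canonical Duncan–Hoffman (over-/required-/under-education) wage equation fitted on synthetic data; the Heckman selection correction and the Hartog–Oosterbeek tests used in the paper are not reproduced here, and all variable names are invented.

```python
# A hedged sketch of the Duncan-Hoffman (ORU) wage equation: log wages regressed on
# years of required, surplus (over-) and deficit (under-) education.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
required = rng.integers(8, 18, n)                 # years of schooling the job requires
attained = required + rng.integers(-3, 4, n)      # years the worker actually has
df = pd.DataFrame({
    "required": required,
    "over": np.clip(attained - required, 0, None),
    "under": np.clip(required - attained, 0, None),
    "experience": rng.integers(0, 40, n),
})
# Synthetic wages built to mimic the usual pattern in the literature.
df["log_wage"] = (0.08 * df.required + 0.05 * df.over - 0.03 * df.under
                  + 0.02 * df.experience + rng.normal(0, 0.3, n))

oru = smf.ols("log_wage ~ required + over + under + experience", data=df).fit()
print(oru.params)
# Typical literature pattern: 0 < coef(over) < coef(required) and coef(under) < 0.
```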

Relevance:

100.00%

Publisher:

Abstract:

Today, the development of domain-specific communication applications is both time-consuming and error-prone because the low-level communication services provided by existing systems and networks are primitive and often heterogeneous. Multimedia communication applications are typically built on top of low-level network abstractions such as TCP/UDP sockets and the SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol) APIs. The User-centric Communication Middleware (UCM) is proposed to encapsulate the networking complexity and heterogeneity of basic multimedia and multi-party communication for upper-layer communication applications. UCM provides a unified user-centric communication service to diverse communication applications, ranging from simple phone calls and video conferencing to specialized applications such as disaster management and telemedicine, which simplifies the development of domain-specific communication applications. The UCM abstraction and API are proposed to achieve these goals. The dissertation also integrates formal methods into the UCM development process. A formal model of UCM is created using the SAM methodology; several design errors were found during model creation because formal modeling forces a precise description of UCM. Using the SAM tool, the formal UCM model is translated into a Promela model. A number of system properties are defined as temporal logic formulas, which are manually translated into Promela, integrated one at a time with the Promela model of UCM, and verified using the SPIN model checker. This formal analysis helps verify system properties (for example, of the multi-party multimedia protocol) and uncover system bugs.
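
To give a flavor of the verification step described above, the following toy sketch checks two simple properties of a hypothetical call-setup state machine by explicit-state exploration. The real work is done on the Promela model of UCM with SPIN; neither that model nor its properties are reproduced here.

```python
# Illustrative only: a toy explicit-state check in the spirit of what SPIN does for the
# (much larger) Promela model of UCM. The session model and properties are hypothetical.

# Hypothetical states and transitions for a two-party call-setup session.
TRANSITIONS = {
    "idle": {"inviting"},
    "inviting": {"connected", "closed"},   # an invite may succeed or be cancelled
    "connected": {"closed"},
    "closed": set(),
}

def reachable(start="idle"):
    """Enumerate every state reachable from the start state."""
    seen, stack = {start}, [start]
    while stack:
        for nxt in TRANSITIONS[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def invalid_end_states(valid_ends=frozenset({"closed"})):
    """Deadlock check, akin to SPIN's invalid end-state detection: every reachable
    state must either have a successor or be a valid terminal state."""
    return {s for s in reachable() if not TRANSITIONS[s] and s not in valid_ends}

def bad_states_reached(bad=frozenset({"error"})):
    """Safety check: no reachable state belongs to the 'bad' set."""
    return reachable() & bad

if __name__ == "__main__":
    print("invalid end states:", invalid_end_states())   # empty set -> property holds
    print("bad states reached:", bad_states_reached())    # empty set -> property holds
```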

Relevance:

100.00%

Publisher:

Abstract:

Since the Morris worm was released in 1988, Internet worms have remained one of the top security threats. For example, the Conficker worm infected 9 to 15 million machines in early 2009, shut down services on some critical government and medical networks, and constructed a massive peer-to-peer (P2P) botnet. Botnets are zombie networks controlled by attackers to launch coordinated attacks, and in recent years they have become the number one threat to the Internet. The objective of this research is to characterize the spatial-temporal infection structures of Internet worms and to apply the observations to study P2P-based botnets formed by worm infection. First, we infer temporal characteristics of the Internet worm infection structure, i.e., the host infection time and the worm infection sequence, and thus pinpoint patient zero or the initially infected hosts. Specifically, we apply statistical estimation techniques to Darknet observations and show analytically and empirically that our proposed estimators can significantly improve inference accuracy. Second, we reveal two key spatial characteristics of the worm infection structure, i.e., the number of children and the generation of the underlying tree topology formed by worm infection. Applying probabilistic modeling methods and a sequential growth model, we show analytically and empirically that the number of children asymptotically follows a geometric distribution with parameter 0.5 and that the generation closely follows a Poisson distribution. Finally, we evaluate bot detection strategies and the effects of user defenses in P2P-based botnets formed by worm infection. Applying the observations on the number of children, we demonstrate analytically and empirically that targeted detection, which focuses on the nodes with the largest number of children, is an efficient way to expose bots. However, we also point out that future botnets may self-stop scanning to weaken targeted detection without greatly slowing down the speed of worm infection. We then extend the worm spatial infection structure and show empirically that user defenses, e.g., patching or cleaning, can significantly reduce the robustness and effectiveness of P2P-based botnets. As a counterattack, we evaluate a simple measure by which future botnets could enhance topology robustness through worm re-infection.
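
One way to read the sequential growth model mentioned above is as uniform attachment: each newly infected host attaches to an already-infected host chosen uniformly at random. Under that assumption (a simplification, not necessarily the dissertation's exact model), a short simulation reproduces the Geometric(0.5) child-count prediction.

```python
# A minimal sketch (not the dissertation's code) of a uniform-attachment infection tree,
# comparing the resulting child-count distribution with the asymptotic Geometric(0.5)
# prediction, P(k children) = 2**-(k + 1).

import random
from collections import Counter

def simulate_infection_tree(n_hosts=100_000, seed=1):
    """Uniform-attachment (random recursive) tree as a stand-in for worm infection."""
    rng = random.Random(seed)
    children = [0] * n_hosts          # children[i] = number of hosts infected by host i
    for new_host in range(1, n_hosts):
        parent = rng.randrange(new_host)   # any earlier-infected host is equally likely
        children[parent] += 1
    return children

if __name__ == "__main__":
    counts = Counter(simulate_infection_tree())
    n = sum(counts.values())
    for k in range(6):
        empirical = counts.get(k, 0) / n
        predicted = 2.0 ** -(k + 1)        # Geometric(0.5) on {0, 1, 2, ...}
        print(f"{k} children: empirical {empirical:.4f}  vs  geometric(0.5) {predicted:.4f}")
```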

Relevance:

100.00%

Publisher:

Abstract:

This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean squared error as the standard error function. The system proposed in this dissertation uses the mean quartic error function, whose third and fourth derivatives are non-zero; consequently, several improvements to the training methods were achieved. The training results are carefully assessed before and after each update. To evaluate the performance of a training system, three essential factors are considered, listed from highest to lowest priority: (1) the error rate on the testing set, (2) the processing time needed to recognize a segmented character, and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed, one for each character case. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from conventional adaptive segmentation methods. Dictionary-based correction is used to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, 10,000 for each case; recognizing the 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
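
As a small illustration of the loss function described above, the sketch below implements the mean quartic error and its gradient for a generic prediction vector; the dissertation's network architecture and specialized training methods are not reproduced.

```python
# A minimal sketch of the mean quartic error (MQE) used in place of mean squared error,
# together with its gradient with respect to the predictions. Illustrative only.

import numpy as np

def mean_quartic_error(y_pred, y_true):
    """MQE = mean((y_pred - y_true)**4); unlike MSE, its 3rd/4th derivatives are non-zero."""
    return np.mean((y_pred - y_true) ** 4)

def mean_quartic_error_grad(y_pred, y_true):
    """d(MQE)/d(y_pred) = 4 * (y_pred - y_true)**3 / N."""
    diff = y_pred - y_true
    return 4.0 * diff ** 3 / diff.size

if __name__ == "__main__":
    y_true = np.array([0.0, 1.0, 0.0])
    y_pred = np.array([0.2, 0.7, 0.1])
    print("MQE :", mean_quartic_error(y_pred, y_true))
    print("grad:", mean_quartic_error_grad(y_pred, y_true))
```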

Relevance:

100.00%

Publisher:

Abstract:

Traffic from major hurricane evacuations is known to cause severe gridlock on evacuation routes. Better prediction of the expected amount of evacuation traffic is needed to improve decision-making on the required evacuation routes and the possible deployment of special traffic operations, such as contraflow. The objective of this dissertation is to develop models to predict the number of daily trips and the evacuation distance during a hurricane evacuation. Two data sets from surveys of evacuees from Hurricanes Katrina and Ivan were used in the models' development. The data sets included detailed information on the evacuees, including their evacuation days, evacuation distance, distance to the hurricane location, and associated socioeconomic characteristics, including gender, age, race, household size, rental status, income, and education level. Three prediction models were developed. The evacuation trip and rate models were developed using logistic regression; together, they predict the number of daily trips generated before hurricane landfall. These daily predictions allow for more detailed planning than the traditional models, which predict only the total number of trips generated over an entire evacuation. A third model attempted to predict the evacuation distance using Geographically Weighted Regression (GWR), which accounts for spatial variation in the impacts of the model predictors across evacuation areas. All three models were developed using the survey data set from Hurricane Katrina and then evaluated using the survey data set from Hurricane Ivan. All of the models provided logical results. The logistic models showed that larger households with children under age six were more likely to evacuate than smaller households. The GWR-based evacuation distance model showed that the presence of children under age six, income, and the household's proximity to the hurricane path all affected evacuation distance. While the models provided logical results, they were calibrated and evaluated with relatively limited survey data; they can be refined with additional data from future hurricane surveys, including additional variables such as the time of day of the evacuation.
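
As an illustration of the kind of logistic model described above, the sketch below fits a daily evacuation-trip model on synthetic data. The predictor names are hypothetical stand-ins for the survey variables; this is not the dissertation's calibrated model.

```python
# A hedged sketch of a logit model for whether a household generates an evacuation trip
# on a given day before landfall. Variables and data are synthetic, not the survey fields.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(1, 7, n),        # household_size
    rng.integers(0, 2, n),        # child_under_six (0/1)
    rng.uniform(0, 300, n),       # distance_to_hurricane_km
    rng.integers(1, 4, n),        # days_before_landfall
])
# Synthetic outcome: larger households with young children evacuate more often.
logit_true = -1.0 + 0.3 * X[:, 0] + 0.8 * X[:, 1] - 0.005 * X[:, 2] - 0.2 * X[:, 3]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary(xname=["const", "household_size", "child_under_six",
                           "distance_km", "days_before_landfall"]))
```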

Relevance:

100.00%

Publisher:

Abstract:

The accurate and reliable estimation of travel time from point detector data is needed to support Intelligent Transportation System (ITS) applications. The quality of travel time estimation has been found to depend on the estimation method used and to vary across traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line versions, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status identified automatically by clustering analysis. During incident conditions with rapidly changing queue lengths, refinements based on shock wave analysis are applied on-line to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the developed models were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion, but their performance varied as congestion increased. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as incidents. The impacts of major influential factors on travel time estimation performance, including data preprocessing procedures, detector errors, detector spacing, the frequency of travel time updates to traveler information devices, travel time link length, and the posted travel time range, were also investigated. The results show that these factors have a more significant impact on estimation accuracy and reliability under congested conditions than under uncongested conditions. For incident conditions, estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
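
For reference, the sketch below shows one common formulation of the speed-based Mid-Point method mentioned above, in which each detector's spot speed is assumed to apply over half of the instrumented segment; the dissertation's exact implementation may differ, and the parameter names are illustrative.

```python
# A minimal sketch of the Mid-Point travel time estimate from two point detectors:
# half of the segment is traversed at the upstream speed, half at the downstream speed.

def midpoint_travel_time(segment_length_mi, v_upstream_mph, v_downstream_mph):
    """Estimate segment travel time in hours from two detector spot speeds."""
    half = segment_length_mi / 2.0
    return half / v_upstream_mph + half / v_downstream_mph

if __name__ == "__main__":
    # 1.2-mile segment, 55 mph at the upstream detector, 30 mph at the downstream one.
    tt_hours = midpoint_travel_time(1.2, 55.0, 30.0)
    print(f"estimated travel time: {tt_hours * 3600:.0f} s")
```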

Relevance:

100.00%

Publisher:

Abstract:

The National System for the Integral Development of the Family (DIF) in Mexico assists children in orphanages. This paper provides an overview of its current practices, and advocates a holistic educational/social model for “alternative orphanages,” integrating Maslow’s Hierarchy of Needs and the rights-based approach. The model complements DIF’s social efforts.

Relevance:

100.00%

Publisher:

Abstract:

The dissertation takes a multivariate approach to answer the question of how applicant age, after controlling for other variables, affects employment success in a public organization. In addition to applicant age, five other categories of variables are examined: organization/applicant variables describing the relationship of the applicant to the organization; organization/position variables describing the target position as it relates to the organization; episodic variables such as applicant age relative to the ages of competing applicants; economic variables relating to the salary needs of older applicants; and cognitive variables that may affect the decision maker's evaluation of the applicant. An exploratory phase of the research employs archival data from approximately 500 decisions made over the past three years to hire or promote applicants for positions in one public health administration organization. A logit regression model is used to examine how these variables modify the effect of applicant age on the probability of employment success. A confirmatory phase is a controlled experiment in which hiring decision makers from the same public organization perform a simulated hiring exercise to evaluate hypothetical applicants of similar qualifications but different ages. The decision makers' responses to a series of bipolar adjective scales add support to the cognitive component of the theoretical model of the hiring decision. A final section contains information gathered from interviews with key informants. Applicant age tended to have a curvilinear relationship with employment success. For some positions, the mean age of the applicants most likely to succeed varies with the values of the five groups of moderating variables. The research contributes not only to the practice of public personnel administration, but is also useful for examining larger public policy issues associated with an aging workforce.
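
The sketch below, with invented data and hypothetical variable names, illustrates one common way to specify a logit model that can capture a curvilinear age effect: age enters both linearly and as a squared term. It is not the dissertation's actual specification.

```python
# A hedged illustration of a curvilinear age effect in a logit model: age and age**2,
# alongside hypothetical moderating variables. Data are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "age": rng.integers(20, 66, n),
    "internal_applicant": rng.integers(0, 2, n),     # organization/applicant variable
    "competitor_mean_age": rng.normal(40, 8, n),      # episodic variable
})
# Synthetic outcome peaking at mid-career ages, for illustration only.
lin = -8 + 0.4 * df.age - 0.005 * df.age ** 2 + 0.5 * df.internal_applicant
df["hired"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

model = smf.logit("hired ~ age + I(age ** 2) + internal_applicant + competitor_mean_age",
                  data=df).fit(disp=False)
print(model.params)   # a negative I(age ** 2) coefficient indicates the curvilinear shape
```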

Relevance:

100.00%

Publisher:

Abstract:

The presence of heavy metals, organic contaminants, and natural toxins in natural water bodies poses a serious threat to the environment and to the health of living organisms. There is therefore a critical need to identify sustainable and environmentally friendly water treatment processes. In this dissertation, I focus on fundamental studies of advanced oxidation processes and magnetic nano-materials as promising new technologies for water treatment. Advanced oxidation processes employ reactive oxygen species (ROS), which can lead to the mineralization of a number of pollutants and toxins. The rates of formation, steady-state concentrations, and kinetic parameters of hydroxyl radical and singlet oxygen produced by various TiO2 photocatalysts under UV or visible irradiation were measured using selective chemical probes. Hydroxyl radical is the dominant ROS, and its generation depends on the experimental conditions. The optimal conditions for hydroxyl radical generation by TiO2-coated glass microspheres were determined by response surface methodology and applied to the degradation of dimethyl phthalate. Singlet oxygen (1O2) also plays an important role in advanced oxidation processes, so the degradation of microcystin-LR (MC-LR) by rose bengal, a 1O2 sensitizer, was studied. The measured bimolecular reaction rate constant between MC-LR and 1O2 is ~10^6 M^-1 s^-1, based on competition kinetics with furfuryl alcohol. Typical adsorbents require separation after treatment, whereas magnetic iron oxides can be easily removed with a magnetic field. Maghemite and humic acid-coated magnetite (HA-Fe3O4) were synthesized, characterized, and applied to chromium(VI) removal. The adsorption of chromium(VI) by maghemite and HA-Fe3O4 follows pseudo-second-order kinetics. The adsorption of chromium(VI) by maghemite is accurately described by adsorption isotherms, and both solution pH and the presence of humic acid influence adsorption. Humic acid-coated magnetite can adsorb and reduce chromium(VI) to non-toxic chromium(III), and the reaction is not strongly dependent on solution pH. The functional groups associated with humic acid act as ligands, leading to the Cr(III) complex via a coupled reduction-complexation mechanism. Extended X-ray absorption fine structure spectroscopy demonstrates that the Cr(III) in the Cr-loaded HA-Fe3O4 materials has six neighboring oxygen atoms in an octahedral geometry with average bond lengths of 1.98 Å.
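
The competition-kinetics estimate mentioned above can be illustrated with a short calculation. In the sketch below the decay data are invented, and the furfuryl alcohol rate constant (about 1.2e8 M^-1 s^-1) is a commonly cited literature value rather than a figure from this dissertation.

```python
# A hedged sketch of competition kinetics: with furfuryl alcohol (FFA) as the reference
# probe, the slope of ln(C/C0) of MC-LR versus ln(C/C0) of FFA equals k(MC-LR)/k(FFA).

import numpy as np

K_FFA = 1.2e8   # assumed bimolecular rate constant of FFA with singlet oxygen, M^-1 s^-1

# Hypothetical normalized concentrations sampled at the same times during irradiation.
mclr_over_c0 = np.array([1.00, 0.97, 0.94, 0.91, 0.88])
ffa_over_c0  = np.array([1.00, 0.30, 0.09, 0.027, 0.008])

slope = np.polyfit(np.log(ffa_over_c0), np.log(mclr_over_c0), 1)[0]
k_mclr = slope * K_FFA
print(f"estimated k(MC-LR + 1O2) ~ {k_mclr:.1e} M^-1 s^-1")   # on the order of 10^6
```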

Relevance:

100.00%

Publisher:

Abstract:

Component-based Software Engineering (CBSE) and Service-Oriented Architecture (SOA) have become popular ways to develop software in recent years. During the life-cycle of a software system, several components and services may be developed, evolved, and replaced. In production environments, the replacement of core components, such as databases, is often a risky and delicate operation in which several factors and stakeholders must be considered. According to the ITILv3 official glossary, a Service Level Agreement (SLA) is "an agreement between an IT service provider and a customer"; it consists of a set of measurable constraints that the service provider must guarantee to its customers. In practical terms, an SLA is a document that a service provider delivers to its consumers with minimum quality of service (QoS) metrics. This work assesses and improves the use of SLAs to guide the transitioning of databases in production environments. In particular, we propose SLA-based guidelines and a process to support migrations from a relational database management system (RDBMS) to a NoSQL one. Our study is validated by case studies.
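
As a rough illustration of how SLA constraints could guide such a transition, the sketch below encodes a few hypothetical QoS metrics and checks measurements taken against a candidate NoSQL deployment before cut-over; it is not the thesis's actual guideline or process, and all metric names and thresholds are invented.

```python
# A hedged sketch of SLA-driven validation during an RDBMS-to-NoSQL migration:
# each QoS metric carries a threshold, and measured values are checked against it.

from dataclasses import dataclass

@dataclass
class SlaConstraint:
    metric: str
    threshold: float
    higher_is_better: bool

    def satisfied_by(self, measured: float) -> bool:
        return measured >= self.threshold if self.higher_is_better else measured <= self.threshold

SLA = [
    SlaConstraint("availability_pct", 99.9, higher_is_better=True),
    SlaConstraint("p95_read_latency_ms", 20.0, higher_is_better=False),
    SlaConstraint("p95_write_latency_ms", 50.0, higher_is_better=False),
]

def evaluate_migration(measurements: dict) -> list:
    """Return the SLA constraints the candidate NoSQL deployment violates."""
    return [c for c in SLA if c.metric in measurements
            and not c.satisfied_by(measurements[c.metric])]

if __name__ == "__main__":
    observed = {"availability_pct": 99.95, "p95_read_latency_ms": 35.0,
                "p95_write_latency_ms": 12.0}
    print("violated:", [c.metric for c in evaluate_migration(observed)])
```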

Relevance:

100.00%

Publisher:

Abstract:

We argue that considering transitions at the same level as states, as first-class citizens, is advantageous in many cases. Namely, the use of atomic propositions on transitions, as well as on states, allows temporal formulas and strategies to be more powerful, general, and meaningful. We define egalitarian structures and logics, and show how they generalize well-known state-based, event-based, and mixed ones. We present translations from egalitarian to non-egalitarian settings that, in particular, allow the model checking of LTLR formulas using Maude’s LTL model checker. We have implemented these translations as a prototype in Maude itself.
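
One standard way to reduce an "egalitarian" structure to an ordinary state-based one is to split every labeled transition through a fresh intermediate state that carries the transition's propositions. The sketch below illustrates that general idea only; it is not necessarily the translation the authors implement in Maude.

```python
# A hedged sketch of making transitions first-class: each transition s --props--> t
# becomes s -> mid -> t, where the fresh state mid carries the transition's propositions.

def split_transitions(states, transitions):
    """states: {name: set_of_props}; transitions: [(src, set_of_props, dst)]."""
    new_states = dict(states)
    new_edges = []
    for i, (src, props, dst) in enumerate(transitions):
        mid = f"t{i}"                 # fresh state standing for the transition itself
        new_states[mid] = set(props)  # the transition's propositions now label a state
        new_edges.append((src, mid))
        new_edges.append((mid, dst))
    return new_states, new_edges

if __name__ == "__main__":
    states = {"s0": {"idle"}, "s1": {"busy"}}
    transitions = [("s0", {"start"}, "s1"), ("s1", {"finish"}, "s0")]
    print(split_transitions(states, transitions))
```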

Relevance:

100.00%

Publisher:

Abstract:

This thesis aims to assess the degree of implementation and use of performance measurement systems (PMS) by decision-makers in rehabilitation organizations and to understand the contextual factors that influenced their implementation. To this end, a multiple case study was carried out using two data sources: individual interviews with senior managers of Quebec rehabilitation organizations and organizational documents. The Consolidated Framework for Implementation Research was used to guide data collection and analysis, and both within-case and cross-case analyses were performed. Our results show that organizational readiness for implementing a PMS was high and that the PMSs were successfully implemented and used in several ways. Organizations used them passively (as an information tool), in a targeted way (to try to improve under-performing areas), and politically (as a negotiation tool with government authorities). This diversified use of PMSs arises from the complex interaction of factors stemming from each organization's internal context, the characteristics of the PMS, the implementation process followed, and the external context in which these organizations operate. With regard to the internal context, the sustained commitment and leadership of senior management were decisive in the implementation of the PMS through their influence on the identification of the need for a PMS, the engagement of the intended users in the project, the organizational priority given to the PMS, the resources allocated to its implementation, the quality of communications, and the organizational learning climate. Although some of these factors, such as the resources allocated to implementation, the organizational priority of the PMS, and the learning climate, proved to be barriers to implementation, these barriers were ultimately not significant enough to impede the use of the PMS. The study also confirmed the importance of the characteristics of the PMS itself, particularly the perceived quality and usefulness of the information; on their own, however, these characteristics are insufficient to ensure implementation success. The implementation analysis further revealed that, even though the implementation process did not follow formal steps, a PMS development plan, the participation and commitment of decision-makers, and the designation of a project lead all facilitated implementation, whereas the absence of evaluation and of collective reflection on the implementation process limited the potential for organizational learning, a prerequisite for performance improvement. As for the external context, the support of an external body proved to be an indispensable facilitator for the implementation of PMSs by rehabilitation organizations, despite the absence of government policies and incentives to that effect. This study adds to knowledge of contextual factors and their interactions in the use of innovations such as PMSs, and confirms the importance of approaching implementation analysis from a systems perspective.

Relevance:

100.00%

Publisher:

Abstract:

Background: Organophosphate (OP) pesticides are well-known developmental neurotoxicants that have been linked to abnormal cognitive and behavioral endpoints through both epidemiological studies and animal models of behavioral teratology, and are implicated in the dysfunction of multiple neurotransmitters, including dopamine. Chemical similarities between OP pesticides and organophosphate flame retardants (OPFRs), a class of compounds growing in use and environmental relevance, have produced concern regarding whether developmental exposures to OPFRs and OP pesticides may share behavioral outcomes, impacts on dopaminergic systems, or both. Methods: Using the zebrafish animal model, we exposed developing fish to two OPFRs, TDCIPP and TPHP, as well as the OP pesticide chlorpyrifos, during the first 5 days following fertilization. From there, the exposed fish were assayed for behavioral abnormalities and effects on monoamine neurochemistry as both larvae and adults. An experiment conducted in parallel examined how antagonism of the dopamine system during an identical window of development could alter later life behavior in the same assays. Finally, we investigated the interaction between developmental exposure to an OPFR and acute dopamine antagonism in larval behavior. Results: Developmental exposure to all three OP compounds altered zebrafish behavior, with effects persisting into adulthood. Additionally, exposure to an OPFR decreased the behavioral response to acute D2 receptor antagonism in larvae. However, the pattern of behavioral effects diverged substantially from those seen following developmental dopamine antagonism, and the investigations into dopamine neurochemistry were too variable to be conclusive. Thus, although the results support the hypothesis that OPFRs, as with OP pesticides such as chlorpyrifos, may present a risk to normal behavioral development, we were unable to directly link these effects to any dopaminergic dysfunction.

Relevance:

100.00%

Publisher:

Abstract:

Background: It is well documented that children with Specific Language Impairment (SLI) experience significant grammatical deficits. While much of the focus in the past has been on their morphosyntactic difficulties, less is known about their acquisition of complex syntactic structures such as relative clauses. The role of memory in language performance has also become increasingly prominent in the literature. Aims: This study investigates the control of an important complex syntactic structure, the relative clause, by school-age children with SLI in Ireland, using a newly devised sentence recall task. It also explores the role of verbal short-term and working memory in the performance of children with SLI on the sentence recall task, using a standardized battery of tests based on Baddeley's model of working memory. Methods and Procedures: Thirty-two children with SLI, thirty-two age-matched typically developing children (AM-TD) between the ages of 6;0 and 7;11, and twenty younger typically developing (YTD) children between 4;7 and 5;0 completed the task. The sentence recall (SR) task included 52 complex sentences and 17 fillers. It included relative clauses that are used in natural discourse and that reflect a developmental hierarchy. The relative clauses were controlled for length and varied in syntactic complexity, representing the full range of syntactic roles. There were seven different relative clause types, attached either to the predicate nominal of a copular clause (Pn) or to the direct object of a transitive clause (Do). Responses were recorded, transcribed, and entered into a database for analysis. The Working Memory Test Battery for Children (WMTB-C; Pickering & Gathercole, 2001) was administered in order to explore the role of short-term and working memory in the children's performance on the SR task. Outcomes and Results: The children with SLI showed significantly greater difficulty than both the AM-TD group and the YTD group. With the exception of the genitive subject clauses, the children with SLI scored significantly higher on all sentences containing a Pn main clause than on those containing a transitive main clause. Analysis of error types revealed the frequent production of a different type of relative clause than that presented in the task, with a strong word-order preference in the NVN direction for the children with SLI. SR performance for the children with SLI was most highly correlated with expressive language skills and digit recall. Conclusions and Implications: Children with SLI have significantly greater difficulty with relative clauses than YTD children who are on average two years younger: relative clauses are a delay within a delay. Unlike the YTD children, they show a tendency to simplify relative clauses in the noun-verb-noun (NVN) direction. They show a developmental hierarchy in their production of relative clause constructions and are highly influenced by the frequency distribution of relative clauses in the ambient language.