926 results for Unicode Common Locale Data Repository


Relevance: 30.00%

Abstract:

Nigerian scam, also known as advance-fee fraud or 419 scam, is a prevalent form of online fraud that causes financial loss to individuals and businesses. It has evolved from simple, non-targeted email messages to more sophisticated scams targeted at users of classifieds, dating, and other websites. Even though such scams are frequently observed and reported by users, the community's understanding of them is limited because the scammers operate "underground". To better understand the underground Nigerian scam ecosystem and to seek effective methods of deterring Nigerian scam and cybercrime in general, we conduct a series of active and passive measurement studies. Relying on the analysis and insight gained from these studies, we make four contributions: (1) we analyze the taxonomy of Nigerian scam and derive long-term trends in scams; (2) we provide insight into the Nigerian scam and cybercrime ecosystems and their underground operation; (3) we propose a payment intervention as a potential deterrent to cybercrime operations in general and evaluate its effectiveness; and (4) we offer active and passive measurement tools and techniques that enable in-depth analysis of cybercrime ecosystems and deterrence against them. We first create and analyze a repository of more than two hundred thousand user-reported scam emails, spanning 2006 to 2014, from four major scam-reporting websites. We select the ten most commonly observed scam categories and tag 2,000 scam emails randomly drawn from the repository. Based on this manually tagged dataset, we train a machine learning classifier and cluster all scam emails in the repository. From the clustering results, we find a strong and sustained upward trend for targeted scams and a downward trend for non-targeted scams. We then focus on two types of targeted scams: sales scams and rental scams targeting users on Craigslist.
We built an automated scam data collection system and gathered sales scam emails at large scale. Using this system, we posted honeypot ads on Craigslist and conversed automatically with the scammers. Through the email conversations, the system obtained additional confirmation of likely scam activity and collected further information such as IP addresses and shipping addresses. Our analysis revealed that around 10 groups were responsible for nearly half of the over 13,000 scam attempts we received. These groups used IP addresses and shipping addresses in both Nigeria and the U.S. We also crawled rental ads on Craigslist, identified rental scam ads among the large number of benign ads, and conversed with the potential scammers. Through in-depth analysis of the rental scams, we found seven major scam campaigns employing various operational and monetization methods. We also found that, unlike sales scammers, most rental scammers were in the U.S. The large-scale scam data and in-depth analysis provide useful insights into how to design effective deterrence techniques against cybercrime in general. Finally, we study underground DDoS-for-hire services, also known as booters, and measure the effectiveness of undermining the payment systems of DDoS services. Our analysis shows that payment intervention can have the desired effect of limiting cybercriminals' ability to accept payments and increasing the risk of doing so.
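As a rough illustration of the classification step described above, the sketch below trains a minimal bag-of-words centroid classifier on a tiny labeled sample and assigns a new email to the nearest scam category. The categories, texts, and method here are illustrative assumptions, not the classifier actually used in the study:

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    # Lowercase and keep purely alphabetic tokens.
    return [t for t in text.lower().split() if t.isalpha()]

def train_centroids(labeled_emails):
    """Build one term-frequency centroid per scam category."""
    centroids = defaultdict(Counter)
    for text, label in labeled_emails:
        centroids[label].update(tokenize(text))
    return centroids

def cosine(c1, c2):
    dot = sum(c1[t] * c2[t] for t in c1)
    n1 = math.sqrt(sum(v * v for v in c1.values()))
    n2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

def classify(text, centroids):
    """Assign an email to the category with the closest centroid."""
    counts = Counter(tokenize(text))
    return max(centroids, key=lambda label: cosine(counts, centroids[label]))

# Illustrative labeled sample (category names are hypothetical).
labeled = [
    ("inheritance funds transfer bank account", "advance-fee"),
    ("claim your lottery prize winnings now", "lottery"),
]
centroids = train_centroids(labeled)
print(classify("you have won a lottery prize", centroids))
```

A production pipeline would of course use richer features and a trained model, but the structure (tag a sample, fit, then label the whole repository) is the same.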

Relevance: 30.00%

Abstract:

This dissertation presents work on the design, modeling, and fabrication of magnetically actuated microrobot legs. Novel fabrication processes for manufacturing multi-material compliant mechanisms have been used to fabricate effective legged robots at both the meso and micro scales, where the meso scale refers to the transition between the macro and micro scales. This work discusses the development of a novel mesoscale manufacturing process, Laser Cut Elastomer Refill (LaCER), for prototyping millimeter-scale multi-material compliant mechanisms with elastomer hinges. Also discussed is an extension of previous work on a microscale manufacturing process for fabricating micrometer-scale multi-material compliant mechanisms with elastomer hinges, with the added contribution of a method for incorporating magnetic materials so that mechanisms can be actuated by externally applied fields. Because both fabrication processes make significant use of highly compliant elastomer hinges, a fast, accurate modeling method for these hinges was desired for mechanism characterization and design. An analytical model was developed for this purpose, making use of the pseudo-rigid-body (PRB) model and extending its utility to hinges with a significant stretch component, such as those fabricated from elastomers. The model includes three springs with stiffnesses related to the material stiffness and hinge geometry, plus correction factors for aspects particular to common multi-material hinge geometries. It has been verified against a finite element analysis (FEA) model, which in turn was matched to experimental data on mesoscale hinges manufactured using LaCER. These modeling methods have additionally been verified against experimental data from microscale hinges manufactured using the Si/elastomer/magnetics MEMS process.
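For context, the pseudo-rigid-body idea replaces a flexural hinge with a rigid link pivoting about a torsional spring. In the standard single-spring textbook form (not the three-spring variant with correction factors developed in this work), the equivalent torsional stiffness of a flexible segment is

```latex
K = \gamma \, K_{\Theta} \, \frac{EI}{l}
```

where E is the elastic modulus, I the second moment of area of the hinge cross-section, l the hinge length, and gamma and K-Theta are the tabulated characteristic-radius and stiffness coefficients of the PRB model.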
The development of several mechanisms is also discussed, including: a mesoscale LaCER-fabricated hexapedal millirobot capable of walking at 2.4 body lengths per second; prototyped mesoscale LaCER-fabricated underactuated legs with asymmetric features for improved performance; 1 cm³ LaCER-fabricated magnetically actuated hexapods that use the best-performing underactuated leg design to locomote at up to 10.6 body lengths per second; five microfabricated magnetically actuated single-hinge mechanisms; a 14-hinge, 11-link microfabricated gripper mechanism; a microfabricated robot leg mechanism demonstrated clearing a step height of 100 micrometers; and a 4 mm x 4 mm x 5 mm, 25 mg microfabricated magnetically actuated hexapod demonstrated walking at up to 2.25 body lengths per second.

Relevance: 30.00%

Abstract:

Common building energy modeling approaches do not account for the influence of the surrounding neighborhood on energy consumption patterns. This thesis develops a framework to quantify the neighborhood's impact on a building's energy consumption based on the local wind flow. The airflow in the neighborhood is predicted using Computational Fluid Dynamics (CFD) for eight principal wind directions. The framework uses wind multipliers to adjust the wind velocity incident on the target building, and the adjusted wind velocities are passed to the building energy model through the input weather data. In a case study, the CFD method is validated against on-site temperature measurements, and the building energy model is calibrated using utility data. A comparison between the adjusted and original weather data shows that annual building energy consumption and air-system heat gain decreased by 5% and 37%, respectively, while the cooling gain increased by 4%.
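The wind-multiplier adjustment described above can be sketched as follows. The multiplier values and the 8-sector nearest-direction lookup are illustrative assumptions, not the CFD-derived factors from the thesis:

```python
# Directional wind multipliers: one CFD-derived factor per principal
# compass direction (values here are illustrative placeholders).
MULTIPLIERS = {0: 0.9, 45: 0.7, 90: 1.1, 135: 0.8,
               180: 0.95, 225: 0.6, 270: 1.2, 315: 0.85}

def adjust_wind_speed(speed, direction_deg):
    """Scale a weather-file wind speed by the multiplier of the
    nearest of the eight principal directions."""
    sector = round(direction_deg / 45.0) % 8 * 45
    return speed * MULTIPLIERS[sector]

# A 5.0 m/s wind from 92 degrees is treated as easterly (90 degrees).
print(adjust_wind_speed(5.0, 92.0))
```

In the actual framework, each hour of the weather file would be rewritten this way before being fed to the building energy model.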

Relevance: 30.00%

Abstract:

JULIET is a service provided by SHERPA. Its mission is to provide a brief summary of each funding agency's policy on the self-archiving of the published research it has funded. Each entry covers the requirements and details:
- whether archiving is mandatory or encouraged
- what should be deposited
- within what time frame the deposit should take place
- where articles should be deposited
- any conditions attached to the deposit
JULIET interacts with other services such as RoMEO, which lists publisher policies on self-archiving. JULIET is being developed to include funding agencies' policies on open access to data.

Relevance: 30.00%

Abstract:

Background: Understanding transcriptional regulation through genome-wide microarray studies can help unravel complex relationships between genes. Attempts to standardize the annotation of microarray data include the Minimum Information About a Microarray Experiment (MIAME) recommendations, the MAGE-ML format for data interchange, and the use of controlled vocabularies or ontologies. Existing software systems for microarray data analysis implement these standards only partially and are often hard to use and extend. Integration of genomic annotation data and other sources of external knowledge using open standards is therefore a key requirement for future integrated analysis systems. Results: The EMMA 2 software has been designed to resolve these shortcomings, offering full MAGE-ML and ontology support and making use of modern data integration techniques. We present a software system that features comprehensive data analysis functions for spotted arrays and for the most common synthesized oligo arrays, such as Agilent, Affymetrix, and NimbleGen. The system is based on the full MAGE object model. Analysis functionality is based on R and Bioconductor packages and can make use of a compute cluster for distributed services. Conclusion: Our model-driven approach to automatically implementing a full MAGE object model provides high flexibility and compatibility. Data integration via SOAP-based web services is advantageous in a distributed client-server environment, as the collaborative analysis of microarray data is gaining more and more relevance in international research consortia. The adequacy of the EMMA 2 design and implementation has been proven by its application in many distributed functional genomics projects. Its scalability makes the current architecture suited for extension towards future transcriptomics methods based on high-throughput sequencing, which have much higher computational requirements than microarrays.

Relevance: 30.00%

Abstract:

Dataset for publication in PLOS ONE

Relevance: 30.00%

Abstract:

Dyslipidemias are extreme lipid values that predispose to illness. They evolve in early childhood, but their significance and persistence are not well known, and common dyslipidemias may aggregate within the same families. This thesis is part of the longitudinal randomized Special Turku Coronary Risk Factor Intervention Project (STRIP), in which 1054 families with six-month-old children were randomized to a control or an intervention group. Family lipid data from the first 11 years were used. Fasting samples at the age of five years defined the lipid phenotypes, and the dyslipidemias coexisting in parent and child were studied. At the age of 11 years, 402 children participated in artery ultrasound studies. The significance of childhood dyslipidemias and lipoprotein(a) concentration for endothelial function was evaluated with the flow-mediated arterial dilatation test. A frequently elevated non-HDL cholesterol concentration in children aged one to seven years was associated with a similar parental dyslipidemia, which improved the predictive value of the childhood sample. The familial combinations were hypercholesterolemia (2.3%), hypertriglyceridemia (2.0%), familial combined hyperlipidemia (1.8%), and isolated low HDL cholesterol concentration (1.4%). Combined hyperlipidemia in a parent most frequently predicted the child's hyperlipidemia. High lipoprotein(a) concentration aggregated in some families and was associated with attenuated brachial artery dilatation in childhood. Hypercholesterolemia and high lipoprotein(a) concentration at five years of age predicted attenuated dilatation. This study demonstrated that parental dyslipidemias and high lipoprotein(a) concentration help to identify dyslipidemias in early childhood. The association of hypercholesterolemia and lipoprotein(a) concentration with endothelial function emphasizes the importance of early recognition of dyslipidemias.

Relevance: 30.00%

Abstract:

Close similarities have been found between the otoliths of sea-caught and laboratory-reared larvae of the common sole Solea solea (L.), given appropriate temperatures and nourishment for the latter. However, from hatching to mouth formation, and during metamorphosis, sole otoliths have proven difficult to read because the increments may be less regular and of low contrast. In this study, the growth increments in otoliths of larvae reared at 12 °C were counted by light microscopy to test the hypothesis of daily deposition, with some results verified using scanning electron microscopy (SEM), and by image analysis, in order to compare the reliability of the two methods in age estimation. Age was first estimated (in days post-hatch) from light micrographs of whole mounted otoliths. Counts were initiated from the increment formed at the time of mouth opening (Day 4). The average incremental deposition rate was consistent with the daily-deposition hypothesis. However, the light-micrograph readings tended to underestimate the mean ages of the larvae. Errors were probably associated with the low-contrast increments: those deposited after mouth formation during the transition to first feeding, and those deposited from the onset of eye migration (about 20 d post-hatch) during metamorphosis. SEM failed to resolve these low-contrast areas accurately because of poor etching. A method using image analysis was therefore applied to a subsample of micrograph-counted otoliths. The image analysis was supported by a pattern-recognition algorithm, the Growth Demodulation Algorithm (GDA). For each otolith, the GDA integrated the growth pattern with averaged data from different radial profiles, in order to demodulate the exponential trend of the signal before spectral analysis (Fast Fourier Transform, FFT).
This second method allowed both more precise designation of increments, particularly in low-contrast areas, and more accurate readings, but it increased the error variability in mean age estimation. This variability is probably due to the GDA's still-coarse perception of otolith increments: counting is achieved through a theoretical exponential pattern, and mean estimates are given by the FFT. Although the error variability was greater than expected, the method offers improvements in both the speed and the accuracy of otolith readings.
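The demodulation idea behind the GDA (fit and remove the exponential growth trend of a radial intensity profile, then locate the dominant periodicity with an FFT) can be sketched on a synthetic profile. The function and signal below are illustrative, not the published algorithm:

```python
import numpy as np

def dominant_increment_count(profile):
    """Fit and remove an exponential trend from a radial intensity
    profile, then use an FFT to find the dominant periodicity,
    i.e. the approximate number of increments along the profile."""
    x = np.arange(len(profile))
    # Fit a log-linear trend (exponential in the original scale).
    coeffs = np.polyfit(x, np.log(profile), 1)
    trend = np.exp(np.polyval(coeffs, x))
    residual = profile / trend - 1.0
    spectrum = np.abs(np.fft.rfft(residual))
    # Skip the DC term; the peak bin number equals the cycle count.
    return int(np.argmax(spectrum[1:]) + 1)

# Synthetic profile: exponential growth modulated by 12 "increments".
x = np.arange(240)
profile = np.exp(0.01 * x) * (1.0 + 0.2 * np.sin(2 * np.pi * 12 * x / 240))
print(dominant_increment_count(profile))
```

Without the detrending step, the exponential growth dominates the spectrum and the increment periodicity is buried, which is exactly the problem demodulation addresses.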

Relevance: 30.00%

Abstract:

Mass spectrometry (MS)-based proteomics has seen significant technical advances during the past two decades, and mass spectrometry has become a central tool in many biosciences. Despite the popularity of MS-based methods, handling the systematic non-biological variation in the data remains a common problem. This biasing variation can arise from several sources, ranging from sample handling to differences caused by the instrumentation. Normalization is the procedure that aims to account for this biasing variation and make samples comparable. Many normalization methods commonly used in proteomics have been adapted from the DNA microarray world. Studies comparing normalization methods on proteomics data sets using variability measures exist. However, a more thorough comparison is lacking: one that looks at the quantitative and qualitative differences in the performance of the different normalization methods and at their ability to preserve the true differential expression signal of proteins. In this thesis, several popular and widely used normalization methods (linear regression normalization, local regression normalization, variance stabilizing normalization, quantile normalization, median central tendency normalization, and variants of some of these methods), representing different normalization strategies, are compared and evaluated with a benchmark spike-in proteomics data set. The normalization methods are evaluated in several ways. Their performance is assessed qualitatively and quantitatively on a global scale and in pairwise comparisons of sample groups. In addition, it is investigated whether performing the normalization globally on the whole data set, or pairwise for each comparison examined, affects a method's ability to normalize the data and preserve the true differential expression signal.
Both major and minor differences in the performance of the different normalization methods were found. The way in which the normalization was performed (global normalization of the whole data set or pairwise normalization of the comparison pair) also affected the performance of some methods in pairwise comparisons, and differences among variants of the same methods were observed.
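As a concrete example of one of the simpler methods compared, median central tendency normalization shifts each sample so that all samples share a common median. A minimal sketch, assuming log-scale intensities and illustrative values:

```python
from statistics import median

def median_normalize(samples):
    """Shift each sample (list of log-intensities) so that every
    sample shares the global median: a minimal sketch of median
    central-tendency normalization, not the thesis implementation."""
    sample_medians = {name: median(vals) for name, vals in samples.items()}
    target = median(sample_medians.values())
    return {name: [v - sample_medians[name] + target for v in vals]
            for name, vals in samples.items()}

samples = {"run1": [1.0, 2.0, 3.0],
           "run2": [11.0, 12.0, 13.0],
           "run3": [21.0, 22.0, 23.0]}
print(median_normalize(samples))
```

After normalization, all three runs are centered on the same median, so between-run intensity offsets no longer masquerade as differential expression.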

Relevance: 30.00%

Abstract:

POSTDATA is a 5-year European Research Council (ERC) Starting Grant project that started in May 2016 and is hosted by the Universidad Nacional de Educación a Distancia (UNED), Madrid, Spain. The context of the project is the corpora of European Poetry (EP), with a special focus on poetic materials from different languages and literary traditions. POSTDATA aims to offer a standardized model in the philological field and a metadata application profile (MAP) for EP, in order to build a common classification of all these poetic materials. The information in the Spanish, Italian, and French repertoires will be published in the Linked Open Data (LOD) ecosystem; later, we expect to extend the model to include additional corpora. There are a number of Web-based Information Systems (WIS) in Europe with repertoires of poems available for human consumption but not in a condition to be accessible and reusable on the Semantic Web. These systems are not interoperable; they are in fact locked into their databases and proprietary software, unsuitable for linking in the Semantic Web. A way to make this data interoperable is to develop a MAP so that the data can be published in the LOD ecosystem, and so that new data created and modeled on this MAP can be published as well. Creating a common data model for EP is not simple, since the existing data models are based on conceptualizations and terminology belonging to their own poetic traditions, and each tradition has developed an idiosyncratic analytical terminology independently over many years. The result of this uncoordinated evolution is a set of varied terminologies for explaining analogous metrical phenomena across the different poetic systems, whose correspondences have hardly been studied; see examples in González-Blanco & Rodríguez (2014a and b). This work has to be done by domain experts before the modeling actually starts.
On the other hand, the development of a MAP is a complex task, and it is imperative to follow a method for this development. In recent years, Curado Malta & Baptista (2012, 2013a, 2013b) have studied the development of MAPs within a Design Science Research (DSR) methodological process, in order to define a method for the development of MAPs (see Curado Malta (2014)). The output of this DSR process was a first version of a method for the development of Metadata Application Profiles (Me4MAP) (paper to be published). The DSR process is now in the validation phase of the Relevance Cycle to validate Me4MAP. The development of this MAP for poetry will follow the guidelines of Me4MAP, and this development will in turn be used to validate Me4MAP. The final goals of the POSTDATA project are: i) to publish all the data locked in the WIS as LOD, where any interested agent will be able to build applications over the data in order to serve final users; and ii) to build a Web platform where a) researchers, students, and other final users interested in EP can access the poems (and their analyses) of all the databases, and b) researchers, students, and other final users can upload poems and digitized images of manuscripts and fill in the information concerning the analysis of a poem, collaboratively contributing to a LOD dataset of poetry.

Relevance: 30.00%

Abstract:

This dissertation comprises three chapters. The first chapter motivates the use of a novel data set, combining survey and administrative sources, for the study of internal labor migration. By following a sample of individuals from the American Community Survey (ACS) and tracking their employment outcomes over time in the Longitudinal Employer-Household Dynamics (LEHD) database, I construct a measure of geographic labor mobility that allows me to exploit information about individuals prior to their move. This enables me to explore aspects of the migration decision, such as homeownership and employment status, in ways that have not previously been possible. In the second chapter, I use this data set to test the theory that falling home prices affect a worker's propensity to take a job in a metropolitan area different from the one where he is currently located. Employing a within-CBSA-and-time estimation that compares homeowners to renters in their propensities to relocate for jobs, I find that homeowners who have experienced declines in the nominal value of their homes are approximately 12% less likely than average to take a new job outside the metropolitan area where they currently reside. This evidence is consistent with the hypothesis that housing lock-in has contributed to the decline in the labor mobility of homeowners during the recent housing bust. The third chapter focuses on a sample of unemployed workers in the same data set, comparing the unemployment durations of those who find subsequent employment by relocating to a new metropolitan area with those of workers who find employment in their original location. Using an instrumental variables strategy to address the endogeneity of the migration decision, I find that out-migrating for a new job significantly reduces the time to re-employment. These results stand in contrast to OLS estimates, which suggest that those who move have longer unemployment durations.
This implies that those who migrate for jobs in the data may be particularly disadvantaged in their ability to find employment, and thus have strong short-term incentives to relocate.

Relevance: 30.00%

Abstract:

The purpose of this study is to explore the relationship between various collegiate experiences, including substance use, religiosity, campus climate, academic life, social life, self-concept, and satisfaction with college, and perceived feelings of depression among Asian American college students compared to other racial groups. Employing Astin's (1993) I-E-O model, the study used the 2008 Cooperative Institutional Research Program (CIRP) Freshman Survey (TFS) and the follow-up 2012 College Senior Survey (CSS), with a final sample of 10,710 students, including 951 Asian American students. Descriptive analysis, cross-tabulations, blocked hierarchical multiple regression analysis, tests of the equality of the unstandardized beta coefficients from the regression analyses, and a one-way ANOVA were conducted. Asian American students who were female, from low-SES backgrounds, lower-achieving academically, frequent substance users, less religiously involved, and less satisfied with their overall college experience showed higher levels of feeling depressed. Across racial groups, Asian American college students reported the highest rate of feeling depressed, while White students reported the lowest. For Asian American college students, feeling depressed in high school, hours spent per week on studying and homework, and self-confidence in intellectual ability were the strongest predictors of feelings of depression, while drinking beer, drinking liquor, spirituality, failing to complete homework on time, hours spent per week socializing, self-rated confidence in social ability, and satisfaction with the overall college experience were also significant predictors. Asian American college students spent the most hours on studying and homework and reported the highest GPA, but showed the lowest self-confidence in intellectual ability.
For all four racial groups, feeling depressed in high school and self-confidence in intellectual ability were common significant predictors of feelings of depression. The implications for practice and directions for future research emphasize the need to better understand the unique cultural background and the aspects of academic life associated with feelings of depression among Asian American college students, and to develop customized psycho-educational and outreach programs that meet each racial group's needs for psychological well-being on campus.

Relevance: 30.00%

Abstract:

Students often receive instruction from specialists, professionals other than their general educators, such as special educators, reading specialists, and ESOL (English for Speakers of Other Languages) teachers. The purpose of this study was to examine how general educators and specialists develop collaborative relationships over time within the context of receiving professional development. While collaboration is considered essential to increasing student achievement, improving teachers' practice, and creating comprehensive school reform, collaborative partnerships take time to develop and require multiple sources of support. Additionally, both practitioners and researchers often conflate collaboration with structural reforms such as co-teaching. This study used a retrospective single case study with a grounded theory approach to analysis. Data were collected through semi-structured interviews with thirteen teachers and an administrator after three workshops conducted throughout the school year. The resulting theory, Cultivating Interprofessional Collaboration, describes how interprofessional relationships grow as teachers engage in a cycle of learning, constructing partnership, and reflecting. As relationships deepen, some partners experience a seamless dimension to their work. A variety of intrapersonal, interpersonal, and external factors work in concert to promote this growth, which is strengthened through professional development. In this theory, professional development provides a common ground for strengthening relationships, knowledge about the collaborative process, and a reflective space in which to create new collaborative practices. Effective collaborative practice can lead to aligned instruction and teachers' own professional growth. This study has implications for school interventions, professional development, and future research on collaboration in schools.

Relevance: 30.00%

Abstract:

Dinoflagellates possess large genomes in which most genes are present in many copies, which has made studies of their genomic organization and phylogenetics challenging. Recent advances in sequencing technology have made deep sequencing of dinoflagellate transcriptomes feasible. This dissertation investigates the genomic organization of dinoflagellates to better understand the challenges of assembling dinoflagellate transcriptomic and genomic data from short-read sequencing methods, and develops new techniques that use deep sequencing data to identify orthologous genes across a diverse set of taxa. To better understand the genomic organization of dinoflagellates, a genomic cosmid clone of the tandemly repeated gene alcohol dehydrogenase (AHD) was sequenced and analyzed. The organization of this clone ran counter to prevailing hypotheses of genomic organization in dinoflagellates. Further, a new non-canonical splicing motif was described that could greatly improve the automated modeling and annotation of genomic data. A custom phylogenetic marker discovery pipeline, incorporating methods that leverage the statistical power of large data sets, was written. A case study on Stramenopiles was undertaken to test its utility in resolving relationships between known groups as well as the phylogenetic affinity of seven unknown taxa. The pipeline generated a set of 373 genes useful as phylogenetic markers that successfully resolved relationships among the major groups of Stramenopiles and placed all unknown taxa on the tree with strong bootstrap support. The pipeline was then used to discover 668 genes useful as phylogenetic markers in dinoflagellates. Phylogenetic analysis of 58 dinoflagellates using this set of markers produced a phylogeny with good support for all branches. The Suessiales were found to be sister to the Peridiniales, and the Prorocentrales formed a monophyletic group with the Dinophysiales that was sister to the Gonyaulacales.
The Gymnodiniales were found to be paraphyletic, forming three monophyletic groups. While the pipeline was used here to find phylogenetic markers, it will likely also be useful for finding orthologs of interest for other purposes, for discovering horizontally transferred genes, and for separating sequences in metagenomic data sets.

Relevance: 30.00%

Abstract:

The goal of this study is to provide a framework for future researchers to understand and use the FARSITE wildfire-forecasting model with data assimilation. Current wildfire models lack the ability to provide accurate predictions of fire-front position faster than real time; coupling FARSITE with a recursive ensemble filter improves the data assimilation forecast. The scope includes an explanation of the standalone FARSITE application, technical details of FARSITE's integration with OpenPALM, a parallel program coupler, and a demonstration of the FARSITE-Ensemble Kalman Filter software using the FireFlux I experiment by Craig Clements. The results show that the fire-front forecast is improved with the proposed data-driven methodology compared with the standalone FARSITE model.
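For readers unfamiliar with the filter, the analysis step of a stochastic ensemble Kalman filter can be sketched as below. This is a generic textbook update applied to a toy scalar "fire-front position" state, not the coupled FARSITE-EnKF implementation:

```python
import numpy as np

def enkf_update(ensemble, observations, obs_operator, obs_noise_std, rng):
    """One stochastic EnKF analysis step: nudge each ensemble member
    toward independently perturbed observations.
    ensemble: (n_members, n_state); observations: (n_obs,)."""
    n_members = ensemble.shape[0]
    # Forecast perturbations about the ensemble mean.
    X = ensemble - ensemble.mean(axis=0)
    HX = obs_operator(ensemble)                  # members in observation space
    HXp = HX - HX.mean(axis=0)
    # Sample covariances P H^T and H P H^T, plus observation noise R.
    PHt = X.T @ HXp / (n_members - 1)
    HPHt = HXp.T @ HXp / (n_members - 1)
    R = obs_noise_std ** 2 * np.eye(len(observations))
    K = PHt @ np.linalg.inv(HPHt + R)            # Kalman gain
    # One perturbed-observation realization per member.
    obs_pert = observations + rng.normal(
        0.0, obs_noise_std, size=(n_members, len(observations)))
    return ensemble + (obs_pert - HX) @ K.T

# Toy example: scalar fire-front position (m), observed directly.
rng = np.random.default_rng(0)
ens = rng.normal(100.0, 10.0, size=(50, 1))      # forecast ensemble
analysis = enkf_update(ens, np.array([120.0]), lambda e: e, 2.0, rng)
print(analysis.mean())                           # pulled toward the observation
```

The analysis mean moves toward the observation and the ensemble spread contracts, which is the mechanism by which assimilating observed fire-front positions corrects a drifting forecast.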