926 results for Unicode Common Locale Data Repository


Relevance: 30.00%

Abstract:

Skates (Rajidae) have been commercially exploited in Europe for hundreds of years, with some species’ abundances declining dramatically during the twentieth century. In 2009 it became “prohibited for EU vessels to target, retain, tranship or land” certain species in some ICES areas, including the critically endangered common skate and the endangered white skate. To examine compliance with the skate bans, the official UK landings data for 2011–2014 were analysed. Surprisingly, prohibited species were still reported landed in UK ports after the ban, including 9.6 t of common skate during 2011–2014. The majority of reported landings of common and white skate were from northern UK waters and were landed into northern UK ports. Although past landings could not be validated as being actual prohibited species, the landing patterns found reflect known abundance distributions, suggesting that actual landings were made, rather than the sporadic occurrence across ports that would be evident if landings were solely due to systematic misidentification or data entry errors. Nevertheless, misreporting and data entry errors could not be discounted as factors contributing to the recorded landings of prohibited species. These findings raise questions about the efficacy of current systems to police skate landings and ensure prohibited species remain protected. By identifying UK ports with the highest apparent landings of prohibited species, and those still landing species grouped as “skates and rays”, these results may aid authorities in allocating limited resources more effectively to reduce landings, misreporting and data errors of prohibited species, and to increase species-specific landing compliance.

Relevance: 30.00%

Abstract:

One of the leading motivations behind the multilingual semantic web is to make resources digitally accessible in an online global multilingual context. Consequently, it is fundamental for knowledge bases to manage multilingualism and thus to be equipped with procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More specifically, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment to capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
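
As a rough structural sketch of this lexical/conceptual division (illustrative only; the class names, field names and concept identifier below are invented and do not reflect FunGramKB's actual formalism):

```python
from dataclasses import dataclass, field

@dataclass
class LexicalEntry:
    """Language-specific lexical unit, kept apart from concepts."""
    lemma: str
    language: str     # e.g. "en", "es"
    concept_id: str   # link into the shared conceptual layer

@dataclass
class ConceptualLayer:
    """Language-independent knowledge, split into three modules."""
    ontology: dict = field(default_factory=dict)     # semantic knowledge
    cognicon: dict = field(default_factory=dict)     # procedural knowledge (scripts)
    onomasticon: dict = field(default_factory=dict)  # episodic knowledge (named entities/events)

# One shared conceptual layer serves many language-specific lexica.
concepts = ConceptualLayer()
concepts.ontology["+BREAKFAST_00"] = "meal eaten at the start of the day"
lexicon = [
    LexicalEntry("breakfast", "en", "+BREAKFAST_00"),
    LexicalEntry("desayuno", "es", "+BREAKFAST_00"),
]

# Both entries resolve to the same language-independent concept, so
# cultural or common-sense content is modelled once, at the conceptual level.
print(concepts.ontology[lexicon[1].concept_id])
```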

Relevance: 30.00%

Abstract:

This article proposes an expanded definition of the concept of energy security, going beyond the classic concept established by the International Energy Agency by incorporating questions of energy efficiency, the acceptability of the energy model and the challenges imposed by climate change, without losing sight of the demands and dynamics of global economic competition. On the basis of this expanded concept, the article examines the evolution of energy security within the framework of the European Union, with particular attention to how energy security is conceived in the 2016 EU Global Strategy.

Relevance: 30.00%

Abstract:

Repositories containing high-quality human biospecimens linked with robust and relevant clinical and pathological information are required for the discovery and validation of biomarkers for disease diagnosis, progression and response to treatment. Current molecular discovery projects, using either low- or high-throughput technologies, rely heavily on ready access to such sample collections. It is imperative that modern biobanks align with molecular diagnostic pathology practices, not only to provide the type of samples needed for discovery projects but also to ensure that requirements for ongoing sample collections and the future needs of researchers are adequately addressed. Biobanks within comprehensive molecular pathology programmes are perfectly positioned to offer more than just tumour-derived biospecimens. For example, they can facilitate researchers' access to sample metadata such as digitised scans of tissue samples annotated prior to macrodissection for molecular diagnostics, pseudoanonymised clinical outcome data, or research results retrieved from other users of the same or overlapping cohorts of samples. Furthermore, biobanks can work with molecular diagnostic laboratories to develop standardised methodologies for the acquisition and storage of samples required for new approaches to research, such as ‘liquid biopsies’, which will ultimately feed into the test validations required in large prospective clinical studies before liquid biopsy approaches can be implemented in routine clinical practice. We draw on our experience in Northern Ireland to discuss how this harmonised approach, with biobanks working synergistically with molecular pathology programmes, is key to the future success of precision medicine.

Relevance: 30.00%

Abstract:

Robust joint modelling is an emerging field of research. With the advances in electronic patient healthcare records, the popularity of joint modelling approaches has grown rapidly in recent years, providing simultaneous analysis of longitudinal and survival data. This research advances previous work through the development of a novel robust joint modelling methodology for one of the most common types of standard joint model: that which links a linear mixed model with a Cox proportional hazards model. Through t-distributional assumptions, longitudinal outliers are accommodated, with their detrimental impact down-weighted, providing more efficient and reliable estimates. The robust joint modelling technique and its major benefits are showcased through the analysis of Northern Irish end-stage renal disease patients. With an ageing population and a growing prevalence of chronic kidney disease within the United Kingdom, there is a pressing demand to investigate the detrimental relationship between the changing haemoglobin levels of haemodialysis patients and their survival. As outliers within the NI renal data were found to have significantly worse survival, identification of outlying individuals through robust joint modelling may aid nephrologists in improving patients' survival. A simulation study was also undertaken to explore the difference between robust and standard joint models in the presence of increasing proportions and extremity of longitudinal outliers. More efficient and reliable estimates were obtained by robust joint models, with the contrast between the robust and standard joint models increasing when a greater proportion of more extreme outliers was present. By illustrating the gains in efficiency and reliability of parameter estimates when outliers exist, the potential of robust joint modelling is evident. The research presented in this thesis highlights the benefits of, and stresses the need for, a more robust approach to joint modelling in the presence of longitudinal outliers.
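
For orientation, the standard joint model of this type can be sketched as follows (generic notation, not taken from the thesis); the robust variant replaces the Gaussian error term of the longitudinal submodel with a t-distribution, which down-weights outlying measurements:

```latex
% Longitudinal submodel: linear mixed model for outcome y_i(t),
% with t-distributed errors in the robust variant (vs. N(0, sigma^2)).
% Survival submodel: Cox PH, linked through the current value m_i(t).
\begin{align*}
  y_i(t) &= m_i(t) + \varepsilon_i(t), &
  m_i(t) &= \mathbf{x}_i(t)^\top \boldsymbol{\beta}
            + \mathbf{z}_i(t)^\top \mathbf{b}_i,
  \quad \mathbf{b}_i \sim N(\mathbf{0}, \mathbf{D}), \\
  \varepsilon_i(t) &\sim t_\nu(0, \sigma^2), &
  h_i(t) &= h_0(t)\exp\{\boldsymbol{\gamma}^\top \mathbf{w}_i
            + \alpha\, m_i(t)\}.
\end{align*}
```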

Relevance: 30.00%

Abstract:

Safety on public transport is a major concern for the relevant authorities. We address this issue by proposing an automated surveillance platform which combines data from video, infrared and pressure sensors. Data homogenisation and integration are achieved by a distributed architecture based on communication middleware that resolves interconnection issues, thereby enabling data modelling. A common-sense knowledge base models and encodes knowledge about public-transport platforms and the actions and activities of passengers. Trajectory data from passengers are modelled as a time-series of human activities. Common-sense knowledge and rules are then applied to detect inconsistencies or errors in the data interpretation. Lastly, the rationality that characterises human behaviour is also captured through a bottom-up Hierarchical Task Network planner that, along with common sense, corrects misinterpretations to explain passenger behaviour. The system is validated using a simulated bus saloon scenario as a case study. Eighteen video sequences were recorded with up to six passengers. Four metrics were used to evaluate performance. The system, with an accuracy greater than 90% for each of the four metrics, was found to outperform a rule-based system and a system containing planning alone.
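
As a rough illustration of the common-sense checking step (the activity labels and the single rule below are invented for illustration, not taken from the paper):

```python
# Hypothetical sketch: flag inconsistent transitions in a passenger
# activity time-series using a simple common-sense rule.

# Activities recognised per time step (invented labels).
trajectory = ["standing", "walking", "sitting", "walking", "sitting"]

# Common-sense rule: a passenger cannot go straight from "sitting"
# to "walking" without an intermediate "standing" observation.
FORBIDDEN_TRANSITIONS = {("sitting", "walking")}

def find_inconsistencies(activities):
    """Return indices where consecutive activities violate a rule."""
    return [
        i for i, pair in enumerate(zip(activities, activities[1:]))
        if pair in FORBIDDEN_TRANSITIONS
    ]

# Flagged steps would be handed to the HTN planner for re-interpretation,
# e.g. hypothesising a missed "standing" detection between the two frames.
print(find_inconsistencies(trajectory))  # -> [2]
```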

Relevance: 30.00%

Abstract:

The identification of subjects at high risk for Alzheimer’s disease is important for prognosis and early intervention. We investigated the polygenic architecture of Alzheimer’s disease and the accuracy of Alzheimer’s disease prediction models, including and excluding the polygenic component in the model. This study used genotype data from a powerful dataset, comprising 17 008 cases and 37 154 controls, obtained from the International Genomics of Alzheimer’s Project (IGAP). Polygenic score analysis tested whether the alleles identified as associated with disease in one sample set were significantly enriched in the cases relative to the controls in an independent sample. The disease prediction accuracy was investigated in a subset of the IGAP data, a sample of 3049 cases and 1554 controls for whom APOE genotype data were available, by means of sensitivity, specificity, area under the receiver operating characteristic curve (AUC) and positive and negative predictive values. We observed significant evidence for a polygenic component enriched in Alzheimer’s disease (P = 4.9 × 10⁻²⁶). This enrichment remained significant after APOE and other genome-wide associated regions were excluded (P = 3.4 × 10⁻¹⁹). The best prediction accuracy, AUC = 78.2% (95% confidence interval 77–80%), was achieved by a logistic regression model with APOE, the polygenic score, sex and age as predictors. In conclusion, Alzheimer’s disease has a significant polygenic component, which has predictive utility for Alzheimer’s disease risk and could be a valuable research tool complementing experimental designs, including preventative clinical trials, stem cell selection and high/low risk clinical studies. In modelling a range of sample disease prevalences, we found that the polygenic score almost doubles case prediction from chance, with increased prediction at the polygenic extremes.
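
Schematically, the prediction model works as follows: a polygenic score is computed as a weighted sum of risk-allele counts, then entered into a logistic regression alongside APOE, sex and age, and assessed by AUC. A minimal sketch with simulated data (the coding, effect sizes and sample are invented; the IGAP data are not used):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_subjects, n_snps = 1000, 500

# Simulated genotypes (0/1/2 risk-allele counts) and per-SNP effect
# sizes (log odds ratios from a training GWAS); all values invented.
genotypes = rng.integers(0, 3, size=(n_subjects, n_snps))
effect_sizes = rng.normal(0, 0.05, size=n_snps)

# Polygenic score: weighted sum of risk-allele counts.
prs = genotypes @ effect_sizes

# Covariates as in the paper's best model: APOE (coded here as a single
# e4-allele count for simplicity), sex and age.
apoe_e4 = rng.integers(0, 3, size=n_subjects)
sex = rng.integers(0, 2, size=n_subjects)
age = rng.normal(75, 6, size=n_subjects)
X = np.column_stack([apoe_e4, prs, sex, age])

# Simulated case/control status driven by APOE and the polygenic score.
logit = -1.0 + 0.8 * apoe_e4 + 1.5 * prs
y = rng.random(n_subjects) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```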

Relevance: 30.00%

Abstract:

After years of deliberation, the EU Commission sped up the reform process of a common EU digital policy considerably in 2015 by launching the EU digital single market strategy. In particular, two core initiatives of the strategy were agreed upon: the General Data Protection Regulation and the Network and Information Security (NIS) Directive. A further initiative was launched addressing the role of online platforms. This paper focuses on the platform privacy rationale behind the data protection legislation, primarily based on the proposal for a new EU-wide General Data Protection Regulation. We analyse the rationale of the legislation from an information systems perspective to understand the role user data plays in creating platforms that we identify as “processing silos”. Generative digital infrastructure theories are used to explain the innovative mechanisms that are thought to govern the notion of digitalisation and the successful business models affected by it. We foresee continued judicial data protection challenges with the proposed Regulation as adoption of the “Internet of Things” continues. The findings of this paper illustrate that many of the existing issues can be addressed through legislation from a platform perspective. We conclude by proposing three modifications to the governing rationale, which would improve not only platform privacy for the data subject but also entrepreneurial efforts to develop intelligent service platforms. The first modification aims to improve service differentiation on platforms by lessening the ability of incumbent global actors to lock the user base in to their service or platform. The second modification posits limiting the current unwanted tracking ability of syndicates by separating authentication and data store services from any processing entity. Thirdly, we propose a change in how security and data protection policies are reviewed, suggesting a third-party auditing procedure.

Relevance: 30.00%

Abstract:

Scent-marking behavior is associated with different behavioral contexts in callitrichids, including signaling territory, the location of feeding resources, and social rank. In marmosets and tamarins it is also associated with intersexual communication. Although it appears very important in the daily routine of the individuals, very few researchers have investigated its distribution across the 24-h cycle. In a preliminary report, we described a preferential incidence of this behavior 2 h before nocturnal rest in families of common marmosets. We expand those data using 8 family groups (28 subjects): 8 fathers, 6 mothers, 8 nonreproductive adult offspring (4 sons and 4 daughters), and 6 juvenile offspring (3 sons and 3 daughters), kept in outdoor cages under natural environmental conditions. We recorded the frequency of anogenital scent marking for each group during the light phase, twice a week, for 4 consecutive weeks, from March 1998 to September 1999. The cosinor test detected 24- and 8-h variations in 89.3% and 85.7% of the subjects, respectively, regardless of sex or reproductive status. The 8-h component is a consequence of the 2 peaks of the behavior, at the beginning and end of the light phase. The daily distribution of scent marking is similar to that described previously for motor activity in marmosets. The coincident rhythmic patterns of the two behaviors appear to be associated with feeding behavior, as described for callitrichids in free-ranging conditions, which involves an increase in foraging activities early in the morning and shortly before nocturnal rest.
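
The cosinor method fits cosine curves of fixed period to the observed time-series; including both a 24-h and an 8-h harmonic captures the bimodal daily pattern described above. A minimal sketch on simulated frequencies (not the study's data):

```python
import numpy as np

# Hypothetical hourly scent-marking frequencies (simulated, with peaks
# early and late in the day to mimic the bimodal pattern).
hours = np.arange(0, 24, 1.0)
rng = np.random.default_rng(1)
freq = (5 + 3 * np.cos(2 * np.pi * (hours - 6) / 24)
          + 2 * np.cos(2 * np.pi * hours / 8)
          + rng.normal(0, 0.5, hours.size))

# Cosinor model: linear least squares on cosine/sine terms for the
# 24-h and 8-h components; each rhythm's amplitude follows from the
# fitted coefficient pair.
def harmonics(t, period):
    w = 2 * np.pi * t / period
    return np.cos(w), np.sin(w)

cols = [np.ones_like(hours)]
for period in (24.0, 8.0):
    cols.extend(harmonics(hours, period))
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, freq, rcond=None)

for i, period in enumerate((24.0, 8.0)):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    print(f"{period:>4} h amplitude: {np.hypot(a, b):.2f}")
```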

Relevance: 30.00%

Abstract:

SOUSA, M.B.C. et al. Reproductive Patterns and Birth Seasonality in a South-American Breeding Colony of Common Marmosets, Callithrix jacchus. Primates, v. 40, n. 2, p. 327-336, Apr. 1999.

Relevance: 30.00%

Abstract:

Heading into the 2020s, physics and astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will take a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and the LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data they gather, and outline the challenges in extracting information. The strategies commonly employed are very similar to industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for the exchange of knowledge between academia and industry.

Speaker biography: Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge and, following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.

Relevance: 30.00%

Abstract:

The use of remote sensing for monitoring submerged aquatic vegetation (SAV) in fluvial environments has been limited by the spatial and spectral resolution of available image data. The absorption of light in water also complicates the use of common image analysis methods. This paper presents the results of a study that uses very high resolution (VHR) image data, collected with a near-infrared-sensitive DSLR camera, to map the distribution of SAV species at three sites along the Desselse Nete, a lowland river in Flanders, Belgium. Plant species, including Ranunculus aquatilis L., Callitriche obtusangula Le Gall, Potamogeton natans L., Sparganium emersum L. and Potamogeton crispus L., were classified from the data using Object-Based Image Analysis (OBIA) and expert knowledge. A classification rule set based on a combination of spectral and structural image variation (e.g. texture and shape) was developed from images of two sites. Comparing the classifications with manually delineated ground truth maps yielded 61% overall accuracy for both sites. Applying the rule set to a third validation image resulted in 53% overall accuracy. These consistent results show promise for species-level mapping in such biodiverse environments, but also prompt a discussion on the assessment of classification accuracy.
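
Overall accuracy, as reported above, is simply the proportion of correctly classified samples: the trace of the confusion matrix divided by the total count. A minimal sketch with invented labels (not the study's data):

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical per-object species labels: ground truth vs. OBIA output.
truth     = ["Ranunculus", "Callitriche", "Potamogeton", "Ranunculus", "Sparganium"]
predicted = ["Ranunculus", "Potamogeton", "Potamogeton", "Ranunculus", "Sparganium"]

cm = confusion_matrix(truth, predicted)
overall_accuracy = np.trace(cm) / cm.sum()  # correct / total
print(f"overall accuracy: {overall_accuracy:.0%}")  # -> 80%
```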

Relevance: 30.00%

Abstract:

This bachelor's thesis is a literature review that aims to identify use cases for data analytics and the impact of exploiting data on business. The thesis examines the use of data analytics and the challenges of exploiting data effectively. It is scoped to a company's financial management, where analytics is used in management accounting and financial accounting. The exponential growth rate of the volume of data creates new challenges and opportunities for the use of data analytics. Data in itself, however, is of little value to a company; value is created through processing. Although data analytics is already widely studied and used, it offers far greater opportunities than its current applications. One of the key findings of the thesis is that data analytics can make management accounting more effective and ease financial accounting tasks. However, the amount of available data is growing so fast that the available technology and the level of expertise cannot keep pace with the development. In particular, the wider adoption and effective exploitation of big data will increasingly shape the practices and applications of financial management in the future.

Relevance: 30.00%

Abstract:

This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modelling and forecasting of the volatility and correlations of financial assets. The first two chapters provide tools for univariate applications, while the last two chapters develop multivariate methodologies.

In Chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyse the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains from the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models.

In Chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set.

Chapter 3 is joint work with David Veredas. We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyse different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, with and without microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances on the basis of noisy and asynchronous prices. Along these lines, a minimum variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics.

Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose using the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
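
A common ingredient throughout the four chapters is the realized measure built from intraday returns; the simplest example is the realized variance, the sum of squared high-frequency log-returns over a day. A minimal sketch on simulated prices (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated 5-minute log-prices for one trading day (78 intervals in a
# 6.5-hour session); purely illustrative, not real market data.
n_intervals = 78
true_daily_vol = 0.02
log_prices = np.cumsum(
    rng.normal(0, true_daily_vol / np.sqrt(n_intervals), n_intervals)
)

# Realized variance: sum of squared intraday log-returns.
intraday_returns = np.diff(log_prices)
realized_variance = np.sum(intraday_returns ** 2)

print(f"realized volatility: {np.sqrt(realized_variance):.4f} "
      f"(true daily vol: {true_daily_vol})")
```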