926 results for Unicode Common Locale Data Repository
Abstract:
Scent-marking behavior is associated with different behavioral contexts in callitrichids, including territorial signaling, marking the location of feeding resources, and social rank. In marmosets and tamarins it is also associated with intersexual communication. Although it appears central to the daily routine of these animals, very few researchers have investigated its distribution across the 24-h cycle. In a preliminary report, we described a preferential incidence of this behavior 2 h before nocturnal rest in families of common marmosets. Here we expand those data using 8 family groups (28 subjects): 8 fathers, 6 mothers, 8 nonreproductive adult offspring (4 sons and 4 daughters), and 6 juvenile offspring (3 sons and 3 daughters), kept in outdoor cages under natural environmental conditions. We recorded the frequency of anogenital scent marking for each group during the light phase, twice a week, for 4 consecutive weeks, from March 1998 to September 1999. The cosinor test detected 24- and 8-h variations in 89.3% and 85.7% of the subjects, respectively, regardless of sex or reproductive status. The 8-h component is a consequence of the 2 peaks of the behavior, at the beginning and end of the light phase. The daily distribution of scent marking is similar to that previously described for motor activity in marmosets. The coincident rhythmic patterns of the two behaviors seem to be associated with feeding behavior, as described for callitrichids in free-ranging conditions, which involves an increase in foraging activities early in the morning and shortly before nocturnal rest.
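The cosinor analysis behind such rhythm estimates reduces to least-squares regression on paired cosine and sine terms at each candidate period. A minimal sketch of the standard single-cosinor fit (our illustration, on simulated hourly counts; none of the variable names or values come from the study):

    import numpy as np

    def cosinor_fit(t_hours, counts, period):
        # Least-squares single-cosinor fit: y = M + A*cos(2*pi*t/period + phi)
        w = 2 * np.pi * t_hours / period
        X = np.column_stack([np.ones_like(t_hours), np.cos(w), np.sin(w)])
        (mesor, b, g), *_ = np.linalg.lstsq(X, counts, rcond=None)
        amplitude = np.hypot(b, g)       # rhythm strength
        acrophase = np.arctan2(-g, b)    # timing of the peak, in radians
        return mesor, amplitude, acrophase

    # Simulated hourly record containing both components reported above
    rng = np.random.default_rng(0)
    t = np.arange(48.0)                  # two days of hourly observations
    y = 5 + 3*np.cos(2*np.pi*t/24) + 1.5*np.cos(2*np.pi*t/8) + rng.normal(0, 0.5, t.size)
    for period in (24.0, 8.0):
        print(period, cosinor_fit(t, y, period))

Fitting at 24-h and 8-h periods separately mirrors how the two components reported above would be detected; a rhythm is typically declared when the fitted amplitude differs significantly from zero.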
Abstract:
This paper considers identification of treatment effects when the outcome variables and covariates are not observed in the same data sets. Ecological inference models, where aggregate outcome information is combined with individual demographic information, are a common example of these situations. In this context, the counterfactual distributions and the treatment effects are not point identified. However, recent results provide bounds to partially identify causal effects. Unlike previous works, this paper adopts the selection-on-unobservables assumption, which means that randomization of treatment assignments is not achieved until time-fixed unobserved heterogeneity is controlled for. Panel data models linear in the unobserved components are considered to achieve identification. To assess the performance of these bounds, this paper provides a simulation exercise.
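As one concrete illustration (our assumption, not necessarily the paper's exact specification), a panel model linear in the unobserved components can be written as

    $y_{it} = \alpha_i + \tau d_{it} + x_{it}'\beta + \varepsilon_{it}, \qquad \varepsilon_{it} \perp d_{it} \mid \alpha_i,$

where treatment assignment $d_{it}$ is as good as random only once the time-fixed heterogeneity $\alpha_i$ is differenced or conditioned out; when $y_{it}$ and $x_{it}$ live in separate data sets, the treatment effect $\tau$ is then interval identified rather than point identified.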
Abstract:
In 2005, the University of Maryland acquired over 70 digital videos spanning 35 years of Jim Henson’s groundbreaking work in television and film. To support in-house discovery and use, the collection was cataloged in detail using AACR2 and MARC21, and a web-based finding aid was also created. In the past year, I created an "r-ball" (a linked data set described using RDA) of these same resources. The presentation will compare and contrast these three ways of accessing the Jim Henson Works collection, with insights gleaned from providing resource discovery using RIMMF (RDA in Many Metadata Formats).
Abstract:
Presentation from the MARAC conference in Pittsburgh, PA on April 14–16, 2016. S13 - Student Poster Session; Analysis of Federal Policy on Public Access to Scientific Research Data
Abstract:
SOUSA, M.B.C. et al. Reproductive Patterns and Birth Seasonality in a South-American Breeding Colony of Common Marmosets, Callithrix jacchus. Primates, v. 40, n. 2, p. 327-336, Apr. 1999.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity, and from a variety of sources such as mobile devices, sensors, the plethora of small devices hooked to the internet (the Internet of Things), social networks, communication networks, and many others. Interactive querying and large-scale analytics are increasingly used to derive value out of this big data. A large portion of this data is stored and processed in the Cloud due to the advantages the Cloud provides, such as scalability, elasticity, availability, low cost of ownership, and overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage, and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments.

In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built, and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime.

In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has traditionally been used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing size (progressive samples) for exploratory querying, which provides data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! delivers early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics.

Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud. The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, and link prediction. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
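NSCALE's implementation is not shown in the abstract; as a hedged sketch of the neighborhood-centric programming model it describes, the snippet below (Python with the networkx library; the function names and the example program are ours) extracts each node's multi-hop ego network once and hands the whole subgraph to a user program, rather than restricting the program to single-vertex state:

    import networkx as nx

    def neighborhood_centric(G, radius, subgraph_program):
        # Run an analysis program over each node's multi-hop neighborhood subgraph.
        results = {}
        for v in G.nodes:
            ego = nx.ego_graph(G, v, radius=radius)  # extract the k-hop neighborhood once
            results[v] = subgraph_program(ego, v)    # user program sees the whole subgraph
        return results

    # Example "program": edge density inside each node's 2-hop ego network.
    G = nx.karate_club_graph()
    density = neighborhood_centric(G, radius=2,
                                   subgraph_program=lambda sg, v: nx.density(sg))
    print(sorted(density.items())[:5])

A production system would, as the abstract notes, also pack these subgraphs into distributed memory and exploit overlap between neighborhoods; this sketch only illustrates the programming abstraction.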
Abstract:
The study of forest fire activity, in its several aspects, is essential to understanding the phenomenon and preventing public environmental catastrophes. In this context, the analysis of the monthly number of fires across several years is one aspect to take into account in order to better comprehend this topic. The goal of this work is to analyze the monthly number of forest fires in the neighboring districts of Aveiro and Coimbra, Portugal, through dynamic factor models for bivariate count series. We use a Bayesian approach, through MCMC methods, to estimate the model parameters as well as the latent factor common to both series.
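One standard formulation of such a model (our illustration; the authors' exact specification may differ) puts a common latent factor in the log intensity of each Poisson count series:

    $y_{it} \mid \lambda_{it} \sim \mathrm{Poisson}(\lambda_{it}), \qquad \log \lambda_{it} = \mu_i + \gamma_i f_t, \qquad f_t = \phi f_{t-1} + \eta_t, \ \eta_t \sim N(0, \sigma_\eta^2),$

for districts $i \in \{\text{Aveiro}, \text{Coimbra}\}$ and months $t = 1, \dots, T$, where $f_t$ is the latent factor common to both series and $\gamma_i$ are the factor loadings; MCMC then targets the joint posterior of $(\mu_i, \gamma_i, \phi, \sigma_\eta^2, f_{1:T})$.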
Abstract:
Certain environments can inhibit learning and stifle enthusiasm, while others enhance learning or stimulate curiosity. Furthermore, in a world where technological change is accelerating, we could ask how architecture might connect resource-abundant and resource-scarce innovation environments. Innovation environments developed out of necessity within urban villages, and those developed with high intention and expectation within more institutionalized settings, share a framework of opportunity for addressing change through learning and education. This thesis investigates formal and informal learning environments and how architecture can stimulate curiosity, enrich learning, create common ground, and expand access to education. The reason for this exploration is to better understand how architects might design inclusive environments that bring people together to build sustainable infrastructure, encouraging innovation and adaptation to change for years to come. The context of this thesis is largely based on Colin McFarlane’s theory that the “city is an assemblage for learning.” The socio-spatial perspective in urbanism considers how built infrastructure and society interact. Through the urban realm, inhabitants learn to negotiate the people, space, politics, and resources affecting their daily lives. The city is therefore a dynamic field of emergent possibility. This thesis uses the city as a lens through which the boundaries between informal and formal logics, as well as the public and private, might be blurred. Through analytical processes I have examined the environmental devices and assemblage of factors that consistently provide conditions through which learning may thrive. These parameters, which make a creative space significant, can help suggest the design of common-ground environments through which innovation is catalyzed.
Abstract:
Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested, and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies across and among the different test efforts of different component-based systems. However, in practice, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve the test effectiveness of each component, as well as of each component-based software system, by using information obtained when testing across multiple components. To achieve this goal I have developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying them. The dissertation research is organized in three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exist. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes to implement different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation has shown the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
Abstract:
Increases in pediatric thyroid cancer incidence could be partly due to previous clinical intervention. This retrospective cohort study used 1973-2012 data from the Surveillance, Epidemiology, and End Results (SEER) program to assess the association between previous radiation therapy exposure and the development of second primary thyroid cancer (SPTC) among children aged 0-19 years. Statistical analysis included the calculation of summary statistics and univariable and multivariable logistic regression. Relative to no previous radiation therapy exposure, cases exposed to radiation had 2.46 times the odds of developing SPTC (95% CI: 1.39-4.34). After adjustment for sex and age at diagnosis, Hispanic children who received radiation therapy for a first primary malignancy had 3.51 times the odds of developing SPTC compared to Hispanic children who had not received radiation therapy [AOR = 3.51, 99% CI: 0.69-17.70, p = 0.04]. These findings support the development of age-specific guidelines for the use of radiation-based interventions among children with and without cancer.
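Adjusted odds ratios of this kind come from multivariable logistic regression; a minimal sketch of the computation (on synthetic stand-in data, since the SEER extract is not reproduced here; the coefficient values and variable names are ours), using statsmodels:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic stand-in for the cohort: exposure, covariates, and outcome
    rng = np.random.default_rng(42)
    n = 500
    radiation = rng.integers(0, 2, n)      # prior radiation therapy (1 = yes)
    male = rng.integers(0, 2, n)           # sex
    age_dx = rng.integers(0, 20, n)        # age at diagnosis, 0-19 years
    log_odds = -2.0 + 0.9*radiation + 0.1*male + 0.02*age_dx
    sptc = rng.binomial(1, 1/(1 + np.exp(-log_odds)))  # simulated SPTC outcome

    X = sm.add_constant(np.column_stack([radiation, male, age_dx]))
    fit = sm.Logit(sptc, X).fit(disp=0)
    print(np.exp(fit.params))      # adjusted odds ratios (const, radiation, male, age)
    print(np.exp(fit.conf_int()))  # confidence intervals on the OR scale

Exponentiating the fitted coefficients and their confidence limits yields the AOR and CI figures in the form reported above.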
Abstract:
Responsible Research Data Management (RDM) is a pillar of quality research. In practice, good RDM requires the support of a well-functioning Research Data Infrastructure (RDI). One of the challenges the research community faces is how to fund the management of research data and the required infrastructure. Knowledge Exchange and Science Europe have both defined activities to explore how RDM/RDI are, or can be, funded. Each had independently planned to survey users and providers of data services; on becoming aware of their similar objectives and approaches, the Science Europe Working Group on Research Data and the Knowledge Exchange Research Data expert group joined forces and devised a joint activity to inform the discussion on the funding of RDM/RDI in Europe.
Abstract:
Sharpening is a powerful image transformation because sharp edges can bring out image details. Sharpness is achieved by increasing local contrast and reducing edge widths. We present a method that enhances the sharpness of images and thereby their perceptual quality. Most existing enhancement techniques require user input to improve the perception of the scene in the manner most pleasing to the particular user. Our goal is to improve the perception of sharpness in digital images for human viewers automatically. We consider two parameters, local contrast and edge width, in order to exaggerate the differences between local intensities. We start from the assumption that color, texture, or objects of focus such as faces affect the human perception of photographs. When human raters are presented with a collection of images of varying sharpness and asked to rank them by perceived sharpness, the results show a statistical consensus among the raters. We introduce a ramp enhancement technique that modifies the optimal overshoot in the ramp for different region contrasts, as well as the new ramp width. Optimal parameter values are then searched for and applied to each region under the criteria mentioned above. In this way, we aim to enhance digital images automatically, creating pleasing output for common users.
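The ramp-profile tuning described above is specific to this method, but the underlying idea of boosting local contrast at edges can be sketched with a classic unsharp mask (numpy + scipy; the sigma and gain parameters are illustrative, not the paper's):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(image, sigma=2.0, gain=1.5):
        # Sharpen by adding back the high-frequency residual (local contrast).
        img = image.astype(float)
        blurred = gaussian_filter(img, sigma=sigma)  # low-pass: smooths edge detail
        residual = img - blurred                     # high-pass: edges and texture
        return np.clip(img + gain * residual, 0, 255).astype(np.uint8)

    # Example on a synthetic soft edge (a "ramp" between two intensity regions):
    ramp = np.tile(np.linspace(60, 200, 64, dtype=np.uint8), (64, 1))
    sharp = unsharp_mask(ramp)

The gain-scaled residual produces overshoot on either side of a ramp edge; the method above tunes exactly that overshoot per region instead of fixing it globally.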
Abstract:
There is a long history of debate around mathematics standards, reform efforts, and accountability. This research identified ways that national expectations and context drive local implementation of mathematics reform efforts, and identified the external and internal factors that affect teachers’ acceptance of, or resistance to, policy implementation at the local level. It thereby adds to the body of knowledge about acceptance and resistance to policy implementation efforts. This case study involved the analysis of documents to provide a chronological perspective, assess the current state of the District’s mathematics reform, and determine the District’s readiness to implement the Common Core Curriculum. The school system in question has continued to struggle with meeting the needs of all students in Algebra 1. The results of this case study will therefore be useful to the District’s leaders, as they include the compilation and analysis of a decade’s worth of data specific to Algebra 1.
Abstract:
Observational studies demonstrate strong associations between deficient serum vitamin D (25(OH)D) levels and cardiovascular disease. To further examine the association between vitamin D and hypertension (HTN), data from the 2003-2006 National Health and Nutrition Examination Survey were analyzed to assess whether the association between vitamin D and HTN varies with the sufficiency of key co-nutrients necessary for metabolic vitamin D reactions to occur. Logistic regression results demonstrate independent effect modification by calcium, magnesium, and vitamin A on the association between vitamin D and HTN. Among non-pregnant adults with adequate renal function, those with low levels of calcium, magnesium, and vitamin D had 1.75 times the odds of HTN compared to those with sufficient vitamin D levels (p < 0.0001). Additionally, participants with low levels of calcium, magnesium, vitamin A, and vitamin D had 5.43 times the odds of HTN compared to those with vitamin D sufficiency (p = 0.0103).
Abstract:
Following the workshop on new developments in daily licensing practice in November 2011, we brought together fourteen representatives from national consortia (from Denmark, Germany, the Netherlands, and the UK) and publishers (Elsevier, SAGE, and Springer), who met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present the background and ideas behind the provisions the KE Licensing Expert Group developed; introduce and explain the provisions the invited publishers currently use; ascertain agreement on the wording for long-term preservation, continuous access, and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers’ licences; and result in a report disseminating the outcome of the meeting.

Participants of the workshop were:
United Kingdom: Lorraine Estelle (Jisc Collections)
Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange)
Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB)
The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange)
Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage)
Knowledge Exchange: Keith Russell

The main outcome of the workshop was that it would be valuable to have a standard set of clauses that could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account, and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more readily mined. However, there are still outstanding questions about how the authors of articles that have been mined can be properly attributed.