942 results for categorical and mix datasets


Relevance: 100.00%

Abstract:

Severe environmental conditions, coupled with the routine use of deicing chemicals and increasing traffic volume, tend to place extreme demands on portland cement concrete (PCC) pavements. In most instances, engineers have been able to specify and build PCC pavements that met these challenges. However, there have also been reports of premature deterioration that could not be specifically attributed to a single cause. Modern concrete mixtures have evolved to become very complex chemical systems. The complexity can be attributed to both the number of ingredients used in any given mixture and the various types and sources of the ingredients supplied to any given project. Local environmental conditions can also influence the outcome of paving projects. This research project investigated important variables that impact the homogeneity and rheology of concrete mixtures. The project consisted of a field study and a laboratory study. The field study collected information from six different projects in Iowa. The information that was collected during the field study documented cementitious material properties, plastic concrete properties, and hardened concrete properties. The laboratory study was used to develop baseline mixture variability information for the field study. It also investigated plastic concrete properties using various new devices to evaluate rheology and mixing efficiency. In addition, the lab study evaluated a strategy for the optimization of mortar and concrete mixtures containing supplementary cementitious materials. The results of the field studies indicated that the quality management concrete (QMC) mixtures being placed in the state generally exhibited good uniformity and good to excellent workability. Hardened concrete properties (compressive strength and hardened air content) were also satisfactory. The uniformity of the raw cementitious materials that were used on the projects could not be monitored as closely as was desired by the investigators; however, the information that was gathered indicated that the bulk chemical composition of most material streams was reasonably uniform. Specific mineral phases in the cementitious materials were less uniform than the bulk chemical composition. The results of the laboratory study indicated that ternary mixtures show significant promise for improving the performance of concrete mixtures. The lab study also verified the results from prior projects that have indicated that bassanite is typically the major sulfate phase that is present in Iowa cements. This causes the cements to exhibit premature stiffening problems (false set) in laboratory testing. Fly ash helps to reduce the impact of premature stiffening because it behaves like a low-range water reducer in most instances. The premature stiffening problem can also be alleviated by increasing the water–cement ratio of the mixture and providing a remix cycle for the mixture.

Relevance: 100.00%

Abstract:

The aim of this thesis is to locate business opportunities for the imported Pick and Mix candy concept in Korea. The approach of the study is descriptive-analytical and normative. The first aim is to outline a brief theoretical background on firm internationalization and related topics such as the marketing mix. The second is to describe the new market area the company is targeting, identify the obstacles to entry, and briefly define the four P's framework used in this research. The research found that Korea is a potential market for the imported Pick and Mix candy concept. It shows that Korean consumers have enough money to spend on items that increase their quality of life, and it concludes that the Korean economy will follow a positive trend in the near future. The study also found that the operating costs of running a business in Korea are lower than in the Western world.

Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 100.00%

Abstract:

Through the use of rhetoric centered on authority and risk avoidance, scientific method has co-opted knowledge, especially women's everyday and experiential knowledge in the domestic sphere. This, in turn, has had a profound effect on technical communication in the present day. I am drawing on rhetorical theory to study cookbooks and recipes for their contributions to changes in instructional texts. Using the rhetorical lenses of metis (cunning intelligence), kairos (timing and fitness) and mneme (memory), I examine the way in which recipes and cookbooks are constructed, used and perceived. This helps me uncover lost voices in history, the voices of women who used recipes, produced cookbooks and changed the way instructions read. Beginning with the earliest cookbooks and recipes, but focusing on the pivotal temporal interval of 1870-1935, I investigate the writing and rhetorical forces shaping instruction sets and domestic discourse. By the time of scientific cooking and domestic science, everyday and experiential knowledge were being excluded to make room for scientific method and the industrial values of the public sphere. In this study, I also assess how the public sphere, via Cooperative Extension Services and other government agencies, impacted the domestic sphere, further devaluing everyday knowledge in favor of the public scientific model. I will show how the changes in the production of food, cookbooks and recipes were related to changes in technical communication. These changes had wide rippling effects on the field of technical communication. By returning to some of the tenets and traditions of everyday and experiential knowledge, technical communication scholars, practitioners and instructors today can find new ways to encounter technical communication, specifically regarding the creation of instructional texts. Bringing cookbooks, recipes and everyday knowledge into the classroom and the field engenders a new realm of epistemological possibilities.

Relevance: 100.00%

Abstract:

Economists and other social scientists often face situations where they have access to two datasets that they can use but one set of data suffers from censoring or truncation. If the censored sample is much bigger than the uncensored sample, it is common for researchers to use the censored sample alone and attempt to deal with the problem of partial observation in some manner. Alternatively, they simply use only the uncensored sample and ignore the censored one so as to avoid biases. It is rarely the case that researchers use both datasets together, mainly because they lack guidance about how to combine them. In this paper, we develop a tractable semiparametric framework for combining the censored and uncensored datasets so that the resulting estimators are consistent, asymptotically normal, and use all information optimally. When the censored sample, which we refer to as the master sample, is much bigger than the uncensored sample (which we call the refreshment sample), the latter can be thought of as providing identification where it is otherwise absent. In contrast, when the refreshment sample is large and could typically be used alone, our methodology can be interpreted as using information from the censored sample to increase efficiency. To illustrate our results in an empirical setting, we show how to estimate the effect of changes in compulsory schooling laws on age at first marriage, a variable that is censored for younger individuals. We also demonstrate how refreshment samples for this application can be created by matching cohort information across census datasets.
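To make the master/refreshment intuition concrete, here is a toy sketch in Python with simulated data and a made-up censoring point; it is not the authors' semiparametric estimator, only an illustration of how a small uncensored sample can supply the tail information that a large censored sample lacks.

```python
# Toy illustration (not the paper's estimator): a large "master" sample
# observes Y only up to a censoring point c, while a small uncensored
# "refreshment" sample observes Y fully and supplies the censored tail.
import numpy as np

rng = np.random.default_rng(0)
c = 1.0                                   # hypothetical censoring point
master = rng.normal(size=50_000)          # true outcomes in the master sample
master_obs = np.minimum(master, c)        # ...but we only observe min(Y, c)
refresh = rng.normal(size=2_000)          # small, fully observed sample

naive_censored = master_obs.mean()        # biased downward by censoring
naive_refresh = refresh.mean()            # unbiased but noisy (small n)

# Combine: use the refreshment sample to estimate E[Y | Y > c] and plug it
# in for the censored observations, keeping the master sample's precise
# estimate of the censoring probability and of the uncensored part.
tail_mean = refresh[refresh > c].mean()
censored = master_obs >= c
combined = np.where(censored, tail_mean, master_obs).mean()

print(f"naive censored mean:   {naive_censored:+.3f}")
print(f"refreshment-only mean: {naive_refresh:+.3f}")
print(f"combined estimate:     {combined:+.3f}   (true mean is 0)")
```

In this sketch the refreshment sample provides identification of the censored tail, while the bulk of the precision still comes from the large master sample, mirroring the division of labour the abstract describes.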

Relevance: 100.00%

Abstract:

Very large spatially-referenced datasets, for example, those derived from satellite-based sensors which sample across the globe or large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process, and in emergency situations the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly when maximum likelihood methods are used. Although the storage requirements only scale linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least two processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid-based systems, are also becoming increasingly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms, we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled and densely sampled data on a variety of architectures, ranging from the common dual-core processor found in many modern desktop computers to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
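As a rough illustration of the kind of block-wise likelihood approximation and parallel evaluation the abstract refers to, the Python sketch below evaluates a Gaussian log-likelihood over independent blocks of points on several processor cores. The exponential covariance model, the parameter values and the function names are assumptions for the example; this is not the implementation introduced in the paper.

```python
# Minimal sketch: approximate the Gaussian log-likelihood of a large spatial
# dataset by treating blocks of points as independent (a crude stand-in for
# Vecchia/Tresp-style approximations) and evaluating the blocks in parallel.
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from scipy.spatial.distance import cdist

def block_loglik(args):
    """Exact Gaussian log-likelihood of one block, exponential covariance."""
    coords, values, sill, range_par, nugget = args
    cov = sill * np.exp(-cdist(coords, coords) / range_par)
    cov[np.diag_indices_from(cov)] += nugget
    _, logdet = np.linalg.slogdet(cov)
    alpha = np.linalg.solve(cov, values)
    return -0.5 * (logdet + values @ alpha + len(values) * np.log(2.0 * np.pi))

def approx_loglik(coords, values, params, n_blocks=8):
    """Sum of per-block log-likelihoods, with blocks computed in parallel."""
    order = np.argsort(coords[:, 0])                 # crude spatial split
    blocks = np.array_split(order, n_blocks)
    jobs = [(coords[b], values[b], *params) for b in blocks]
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(block_loglik, jobs))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    coords = rng.uniform(0.0, 100.0, size=(4000, 2))   # synthetic locations
    values = rng.standard_normal(4000)                 # placeholder data
    print(approx_loglik(coords, values, params=(1.0, 10.0, 0.1)))
```

Wrapping `approx_loglik` in a numerical optimiser over the covariance parameters would give a parallel approximate maximum likelihood fit in the same spirit as the variogram estimation discussed above.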

Relevance: 100.00%

Abstract:

Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014

Relevance: 100.00%

Abstract:

Sediments from five Leg 167 drill sites and three piston cores were analyzed for Corg and CaCO3. Oxygen isotope stratigraphy on benthic foraminifers was used to assign age models to these sedimentary records. We find that the northern and central California margin is characterized by k.y.-scale events that can be found in both the CaCO3 and Corg time series. We show that the CaCO3 events are caused by changes in CaCO3 production by plankton, not by dissolution. We also show that these CaCO3 events occur in marine isotope Stages (MIS) 2, 3, and 4 during Dansgaard/Oeschger interstadials. They occur most strongly, however, on the MIS 5/4 glaciation and MIS 2/1 deglaciation. We believe that the link between the northeastern Pacific Ocean and North Atlantic is primarily transmitted by the atmosphere, not the ocean. Highest CaCO3 production and burial occurs when the surface ocean is somewhat cooler than the modern ocean, and the surface mixed layer is somewhat more stable.

Relevance: 100.00%

Abstract:

Emission inventories are databases that aim to describe the polluting activities that occur across a certain geographic domain. Depending on the spatial scale, the availability of information will vary, as will the applied assumptions, which strongly influence an inventory's quality, accuracy and representativeness. This study compared and contrasted two emission inventories describing the Greater Madrid Region (GMR) under an air quality simulation approach. The chosen inventories were the National Emissions Inventory (NEI) and the Regional Emissions Inventory of the Greater Madrid Region (REI). Both of them were used to feed air quality simulations with the CMAQ modelling system, and the results were compared with observations from the air quality monitoring network in the modelled domain. Through the application of statistical tools, the analysis of emissions at cell level and cell-expansion procedures, it was observed that the National Inventory showed better results for describing on-road traffic activities and agriculture (SNAP07 and SNAP10). The accurate description of activities, the good characterization of the vehicle fleet and the correct use of traffic emission factors were the main causes of such a good correlation. On the other hand, the Regional Inventory showed better descriptions for non-industrial combustion (SNAP02) and industrial activities (SNAP03). It incorporated realistic emission factors and a reasonable fuel mix, and it drew upon local information sources to describe these activities, while the NEI relied on surrogates and national datasets, which led to a poorer representation. Off-road transportation (SNAP08) was similarly described by both inventories, while the rest of the SNAP activities made a marginal contribution to the overall emissions.

Relevance: 100.00%

Abstract:

Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets on web pages using an analysis of text around the image and of image appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). The resources provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg’s collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples. This text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. With more and more collected training data, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVM. This dissertation proposes a fast training algorithm called Stochastic Intersection Kernel Machine (SIKMA). This proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier, and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck to process large scale datasets. This dissertation applies this approach to train classifiers of Flickr groups with many group training examples. The resulting Flickr group prediction scores can be used to measure image similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach to use comparative object similarity. 
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm to use this category dependent similarity regularization. Experiments on hundreds of categories show that our method can make significant improvement for categories with few or even no positive examples.
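The k-nearest-neighbour tag feature described above can be sketched in a few lines of Python. The function and variable names, the Euclidean distance on visual descriptors, and the toy data are all assumptions for illustration; the dissertation's own features and datasets are not reproduced here.

```python
# Rough sketch: build a text feature for an unannotated image from the tags
# of its k visually nearest neighbours in an auxiliary collection of tagged
# images. Averaging over k neighbours damps the noise in individual tags.
import numpy as np

def knn_tag_feature(query_feat, aux_feats, aux_tags, vocab, k=5):
    """Averaged tag-indicator vector of the k closest auxiliary images.

    query_feat : (d,) visual descriptor of the unannotated image
    aux_feats  : (n, d) visual descriptors of the tagged auxiliary images
    aux_tags   : list of n tag sets
    vocab      : ordered list of tag words defining the feature dimensions
    """
    dists = np.linalg.norm(aux_feats - query_feat, axis=1)
    nearest = np.argsort(dists)[:k]
    feat = np.zeros(len(vocab))
    for i in nearest:
        for tag in aux_tags[i]:
            if tag in vocab:              # ignore tags outside the vocabulary
                feat[vocab.index(tag)] += 1.0
    return feat / k

# Toy usage with random descriptors and a tiny vocabulary
rng = np.random.default_rng(0)
vocab = ["dog", "cat", "grass", "indoor"]
aux_feats = rng.normal(size=(100, 64))
aux_tags = [set(rng.choice(vocab, size=2, replace=False)) for _ in range(100)]
print(knn_tag_feature(rng.normal(size=64), aux_feats, aux_tags, vocab))
```

The resulting vector can simply be concatenated with, or used alongside, the visual features when training an object classifier, which is how the abstract describes the text feature being exploited.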

Relevance: 100.00%

Abstract:

High-throughput screening of physical, genetic and chemical-genetic interactions brings important perspectives in the Systems Biology field, as the analysis of these interactions provides new insights into protein/gene function, cellular metabolic variations and the validation of therapeutic targets and drug design. However, such analysis depends on a pipeline connecting different tools that can automatically integrate data from diverse sources and result in a more comprehensive dataset that can be properly interpreted. We describe here the Integrated Interactome System (IIS), an integrative platform with a web-based interface for the annotation, analysis and visualization of the interaction profiles of proteins/genes, metabolites and drugs of interest. IIS works in four connected modules: (i) Submission module, which receives raw data derived from Sanger sequencing (e.g. two-hybrid system); (ii) Search module, which enables the user to search for the processed reads to be assembled into contigs/singlets, or for lists of proteins/genes, metabolites and drugs of interest, and add them to the project; (iii) Annotation module, which assigns annotations from several databases for the contigs/singlets or lists of proteins/genes, generating tables with automatic annotation that can be manually curated; and (iv) Interactome module, which maps the contigs/singlets or the uploaded lists to entries in our integrated database, building networks that gather newly identified interactions, protein and metabolite expression/concentration levels, subcellular localization and computed topological metrics, GO biological process and KEGG pathway enrichment. This module generates an XGMML file that can be imported into Cytoscape or be visualized directly on the web. We developed IIS by integrating diverse databases, in response to the need for appropriate tools for the systematic analysis of physical, genetic and chemical-genetic interactions. IIS was validated with yeast two-hybrid, proteomics and metabolomics datasets, but it is also extendable to other datasets. IIS is freely available online at: http://www.lge.ibi.unicamp.br/lnbio/IIS/.
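As a generic illustration of the kind of network output the Interactome module produces, the sketch below builds a small interaction graph with annotated attributes and writes a file that Cytoscape can import. This is not IIS code: the gene names and expression values are invented, and it writes GraphML (also readable by Cytoscape) rather than the XGMML that IIS generates.

```python
# Assemble a toy interaction network, attach an expression attribute and a
# simple topological metric, and export it for visualization in Cytoscape.
import networkx as nx

interactions = [("YFG1", "YFG2"), ("YFG1", "YFG3"), ("YFG2", "YFG4")]  # hypothetical pairs
expression = {"YFG1": 2.1, "YFG2": 0.4, "YFG3": 1.3, "YFG4": 0.9}      # hypothetical levels

g = nx.Graph()
g.add_edges_from(interactions)
nx.set_node_attributes(g, expression, name="expression")
nx.set_node_attributes(g, dict(g.degree()), name="degree")   # topological metric

nx.write_graphml(g, "interactome.graphml")   # importable into Cytoscape
```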

Relevance: 100.00%

Abstract:

In geophysics and seismology, raw data need to be processed to generate useful information that can be turned into knowledge by researchers. The number of sensors that are acquiring raw data is increasing rapidly. Without good data management systems, more time can be spent in querying and preparing datasets for analyses than in acquiring raw data. Also, a lot of good quality data acquired at great effort can be lost forever if they are not correctly stored. Local and international cooperation will probably be reduced, and a lot of data will never become scientific knowledge. For this reason, the Seismological Laboratory of the Institute of Astronomy, Geophysics and Atmospheric Sciences at the University of Sao Paulo (IAG-USP) has concentrated fully on its data management system. This report describes the efforts of the IAG-USP to set up a seismology data management system to facilitate local and international cooperation.

Relevance: 100.00%

Abstract:

In repair works of reinforced concrete, patch repairs tend to crack in the interfacial zone between the mortar and the old concrete. This occurs basically due to the high degree of restraint that acts on a patch repair. For this reason, the technology of patch repair needs to be the subject of a discussion involving professionals who work with projects, construction maintenance and mix proportioning of repair mortars. In the present work, a study is presented on the benefits that ethylene vinyl acetate copolymer (EVA) and acrylate polymers can provide in the mix proportioning of a repair mortar with respect to compressive, tensile and direct-shear bond strength. The results indicated that the increase in bond strength and the reduction in the influence of deficient curing conditions are the main contributions offered by the polymers studied here. (C) 2009 Elsevier, Ltd. All rights reserved.