Abstract:
The diet of early human ancestors has received renewed theoretical interest since the discovery of elevated δ13C values in the enamel of Australopithecus africanus and Paranthropus robustus. As a result, the hominin diet is hypothesized to have included C4 grass or the tissues of animals that themselves consumed C4 grass. On mechanical grounds, such a diet is incompatible with the dental morphology and dental microwear of early hominins. Most inferences, particularly for Paranthropus, favor a diet of hard or mechanically resistant foods. This discrepancy has invigorated the longstanding hypothesis that hominins consumed plant underground storage organs (USOs). Plant USOs are attractive candidate foods because many bulbous grasses and cormous sedges use C4 photosynthesis. Yet mechanical data are scarce for USOs, or indeed for any putative hominin food. To fill this empirical void we measured the mechanical properties of USOs from 98 plant species from across sub-Saharan Africa. We found that rhizomes were the most resistant to deformation and fracture, followed by tubers, corms, and bulbs. An important result of this study is that corms exhibited low toughness values (mean = 265.0 J m⁻²) and relatively high Young's modulus values (mean = 4.9 MPa). This combination of properties fits many descriptions of the hominin diet as consisting of hard-brittle objects. Compared to corms, bulbs are tougher (mean = 325.0 J m⁻²) and less stiff (mean = 2.5 MPa). Again, this combination of traits resembles dietary inferences, especially for Australopithecus, which is predicted to have consumed soft-tough foods. Lastly, we observed the roasting behavior of Hadza hunter-gatherers and measured the effects of roasting on the toughness of undomesticated tubers. Our results support assumptions that roasting lessens the work of mastication and, by inference, the cost of digestion. Together these findings provide the first mechanical basis for discussing the adaptive advantages of roasting tubers and the plausibility of USOs in the diet of early hominins.
Abstract:
The role of platelets as inflammatory cells is demonstrated by the fact that they can release many growth factors and inflammatory mediators, including chemokines, when they are activated. The best-known platelet chemokine family members are platelet factor 4 (PF4) and beta-thromboglobulin (beta-TG), which are synthesized in megakaryocytes, stored as preformed proteins in alpha-granules, and released from activated platelets. However, platelets also contain many other chemokines, such as interleukin-8 (IL-8), growth-regulated oncogene-alpha (GRO-alpha), epithelial neutrophil-activating protein 78 (ENA-78), regulated on activation, normal T cell expressed and secreted (RANTES), macrophage inflammatory protein-1alpha (MIP-1alpha), and monocyte chemotactic protein-3 (MCP-3). They also express chemokine receptors such as CCR4, CXCR4, CCR1 and CCR3. Platelet activation is a feature of many inflammatory diseases such as heparin-induced thrombocytopenia, acquired immunodeficiency syndrome, and congestive heart failure. Substantial amounts of PF4, beta-TG and RANTES are released from platelets on activation, which may occur during storage. Although very few data are available on the in vivo effects of transfused chemokines, it has been suggested that the high incidence of adverse reactions often observed after platelet transfusions may be attributed to the chemokines present in the plasma of stored platelet concentrates.
Abstract:
The purpose of this study is to provide a procedure for including emissions to the atmosphere resulting from the combustion of diesel fuel during dredging operations in the decision-making process of dredging equipment selection. The proposed procedure is demonstrated for typical dredging methods and data from the Illinois Waterway as performed by the U.S. Army Corps of Engineers, Rock Island District. The equipment included in this study is a 16-inch cutterhead pipeline dredge and a mechanical bucket dredge used during the 2005 dredging season on the Illinois Waterway. Considerable effort has been put forth to identify and reduce environmental impacts from dredging operations. Though the environmental impacts of dredging have been studied, no efforts have been applied to the evaluation of air emissions from comparable types of dredging equipment, as is done in this study. By identifying the type of dredging equipment with the lowest air emissions, when cost, site conditions, and equipment availability are comparable, adverse environmental impacts can be minimized without compromising the dredging project. A total of 48 scenarios were developed by varying the dredged material quantity, transport distance, and production rates. This produced an “envelope” of results applicable to a broad range of site conditions. Total diesel fuel consumed was calculated using standard cost estimating practices as defined in the U.S. Army Corps of Engineers Construction Equipment Ownership and Operating Expense Schedule (USACE, 2005). The diesel fuel usage was estimated for all equipment used to mobilize and/or operate each dredging crew for every scenario. A limited life cycle assessment (LCA) was used to estimate the air emissions from the two comparable dredging operations using SimaPro LCA software. An Environmental Impact Single Score (EISS) was the SimaPro output selected for comparison with the cost per cubic yard (CY) of dredging, potential production rates, and transport distances to identify possible decision points. The total dredging time was estimated for each dredging crew and scenario. An average hourly cost for both dredging crews was calculated based on Rock Island District 2005 dredging season records (Graham 2007/08). The results from this study confirm commonly used rules of thumb in the dredging industry by indicating that mechanical bucket dredges are better suited for long transport distances and have lower air emissions and cost per CY for smaller quantities of dredged material. In addition, the results show that a cutterhead pipeline dredge would be preferable for moderate and large volumes of dredged material when no additional booster pumps are required. Finally, the results indicate that production rates can be a significant factor when evaluating the air emissions from comparable dredging equipment.
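As a rough illustration of the scenario-envelope approach this abstract describes, the sketch below varies dredged-material quantity and production rate for two dredge types and derives hours, diesel use, and cost per CY. All burn rates, costs, and grid values are invented placeholders, not figures from the study or the USACE schedule, and transport distance is omitted for brevity (the study's full grid also varied it, yielding 48 scenarios).

```python
# Illustrative sketch of the scenario-envelope calculation. All numbers
# (fuel burn rates, hourly costs, quantities, production rates) are
# hypothetical placeholders, not values from the study.
from itertools import product

FUEL_BURN_GAL_PER_HR = {"cutterhead": 120.0, "mechanical_bucket": 45.0}  # hypothetical
HOURLY_COST_USD = {"cutterhead": 900.0, "mechanical_bucket": 600.0}      # hypothetical

def scenario(dredge, quantity_cy, production_cy_per_hr):
    """Estimate dredging hours, diesel use, and cost per CY for one scenario."""
    hours = quantity_cy / production_cy_per_hr
    fuel_gal = hours * FUEL_BURN_GAL_PER_HR[dredge]
    cost_per_cy = hours * HOURLY_COST_USD[dredge] / quantity_cy
    return {"dredge": dredge, "hours": round(hours, 1),
            "fuel_gal": round(fuel_gal), "cost_per_cy": round(cost_per_cy, 2)}

quantities = [10_000, 50_000, 100_000, 500_000]  # CY, hypothetical
rates = [200, 400, 600]                          # CY/hr, hypothetical

# Cross the factors to build an "envelope" of results for comparison.
envelope = [scenario(d, q, r)
            for d, q, r in product(FUEL_BURN_GAL_PER_HR, quantities, rates)]
for s in envelope[:3]:
    print(s)
```

The fuel totals would then feed an LCA tool (SimaPro in the study) to produce the emissions score compared against cost per CY.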
Abstract:
The purpose of this project was to investigate the effect of using of data collection technology on student attitudes towards science instruction. The study was conducted over the course of two years at Madison High School in Adrian, Michigan, primarily in college preparatory physics classes, but also in one college preparatory chemistry class and one environmental science class. A preliminary study was conducted at a Lenawee County Intermediate Schools student summer environmental science day camp. The data collection technology used was a combination of Texas Instruments TI-84 Silver Plus graphing calculators and Vernier LabPro data collection sleds with various probeware attachments, including motion sensors, pH probes and accelerometers. Students were given written procedures for most laboratory activities and were provided with data tables and analysis questions to answer about the activities. The first year of the study included a pretest and posttest measuring student attitudes towards the class they were enrolled in. Pre-test and post-test data were analyzed to determine effect size, which was found to be very small (Coe, 2002). The second year of the study focused only on a physics class and used Keller’s ARCS model for measuring student motivation based on the four aspects of motivation: Attention, Relevance, Confidence and Satisfaction (Keller, 2010). According to this model, it was found that there were two distinct groups in the class, one of which was motivated to learn and the other that was not. The data suggest that the use of data collection technology in science classes should be started early in a student’s career, possibly in early middle school or late elementary. This would build familiarity with the equipment and allow for greater exploration by the student as they progress through high school and into upper level science courses.
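The effect-size calculation referenced above (Coe, 2002) is conventionally Cohen's d: the difference of group means divided by the pooled standard deviation. A minimal sketch with made-up survey scores, not data from the study:

```python
# Cohen's d effect size: (mean difference) / (pooled standard deviation).
# The scores below are hypothetical placeholders.
import statistics

def cohens_d(pre, post):
    """Effect size between pre-test and post-test score samples."""
    n1, n2 = len(pre), len(post)
    s1, s2 = statistics.stdev(pre), statistics.stdev(post)
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(post) - statistics.mean(pre)) / pooled_sd

pre_scores = [3.1, 3.4, 2.9, 3.6, 3.2]   # hypothetical attitude ratings
post_scores = [3.2, 3.5, 3.0, 3.6, 3.3]
print(f"d = {cohens_d(pre_scores, post_scores):.2f}")  # a small d, as reported
```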
Abstract:
Understanding the canopy cover of an urban environment leads to better estimates of carbon storage and more informed management decisions by urban foresters. The most commonly used method for assessing urban forest cover type extent is ground surveys, which can be both time-consuming and expensive. The analysis of aerial photos is an alternative method that is faster, cheaper, and can cover a larger number of sites, but may be less accurate. The objectives of this paper were (1) to compare three methods of cover type assessment for Los Angeles, CA: hand-delineation of aerial photos in ArcMap, supervised classification of aerial photos in ERDAS Imagine, and ground-collected data using the Urban Forest Effects (UFORE) model protocol; (2) to determine how well remote sensing methods estimate carbon storage as predicted by the UFORE model; and (3) to explore the influence of tree diameter and tree density on carbon storage estimates. Four major cover types (bare ground, fine vegetation, coarse vegetation, and impervious surfaces) were determined from 348 plots (0.039 ha each) randomly stratified according to land use. Hand-delineation was better than supervised classification at predicting ground-based measurements of cover type and UFORE model-predicted carbon storage. Most error in supervised classification resulted from shadow, which was interpreted as unknown cover type. Neither tree diameter nor tree density per plot significantly affected the relationship between carbon storage and canopy cover. The efficiency of remote sensing relative to in situ data collection gives urban forest managers the ability to quickly assess a city and plan accordingly while also preserving their often-limited budget.
Abstract:
The selective catalytic reduction (SCR) system is a well-established technology for NOx emissions control in diesel engines. A one-dimensional, single-channel SCR model was previously developed using Oak Ridge National Laboratory (ORNL) generated reactor data for an iron-zeolite catalyst system. Calibration of this model to fit the experimental reactor data collected at ORNL for a copper-zeolite SCR catalyst is presented. Initially, a test protocol was developed in order to investigate the different phenomena responsible for the SCR system response. An SCR model with two distinct types of storage sites was used. The calibration process started with storage capacity calculations for the catalyst sample. Then the chemical kinetics occurring during each segment of the protocol were investigated. The reactions included in this model were adsorption, desorption, standard SCR, fast SCR, slow SCR, NH3 oxidation, NO oxidation, and N2O formation. The reaction rates were identified for each temperature using a time-domain optimization approach. Assuming an Arrhenius form for the reaction rates, activation energies and pre-exponential parameters were fit to the reaction rates. The results indicate that the Arrhenius form is appropriate and that the reaction scheme used allows the model to fit the experimental data and to be used in real-world engine studies.
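The Arrhenius fit described here is commonly done by linearizing k = A·exp(-Ea/RT) to ln k = ln A - Ea/(RT) and regressing ln k on 1/T. A minimal sketch with hypothetical per-temperature rate constants, not the ORNL data:

```python
# Fit Arrhenius parameters (activation energy Ea, pre-exponential A) to
# rate constants identified at several temperatures. Values are hypothetical.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

T = np.array([473.0, 523.0, 573.0, 623.0])  # temperatures, K (hypothetical)
k = np.array([0.8, 2.9, 8.1, 19.5])         # identified rate constants (hypothetical)

# Linearize: ln k = ln A - Ea/(R T), then least-squares fit against 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R          # activation energy, J/mol
A = np.exp(intercept)    # pre-exponential factor

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.3g}")
# Sanity check: the fitted Arrhenius form should reproduce the input rates.
print(np.round(A * np.exp(-Ea / (R * T)), 2))
```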
Abstract:
This thesis presents an overview of hydrographic surveying and of different types of modern and traditional surveying equipment, together with data acquisition using a traditional single beam sonar system and a modern fully autonomous underwater vehicle, the IVER3. The data sets were collected using vehicles of the Great Lakes Research Center at Michigan Technological University. The thesis also presents how to process and edit bathymetric data in SonarWiz5. Three-dimensional models were created after importing the data sets into a common coordinate system; in the resulting interpolated surfaces, details and excavations can be easily seen. Profiles were plotted on the surface models to compare the sensors and the detail resolved on the seabed. It is shown that single beam sonar may miss details, such as pipelines and abrupt elevation changes on the seabed, when compared to the side scan sonar of the IVER3, because the side scan sonar acquires better resolution. However, single beam sonar can save a project time and money, because single beam sonar is cheaper than side scan sonar and the processing can be easier than for side scan data.
Abstract:
In this paper, we investigate content-centric data transmission in the context of short opportunistic contacts and base our work on an existing content-centric networking architecture. In the case of short interconnection times, file transfers may not be completed, and the received information is discarded. Caches in content-centric networks are used for short-term storage and do not guarantee persistence. We implemented a mechanism that extends caching onto persistent storage, enabling the completion of disrupted content transfers. The mechanism has been implemented in the CCNx framework and evaluated on wireless mesh nodes. Our evaluations using multicast and unicast communication show that the implementation can support content transfers in opportunistic environments without significant processing and storage overhead.
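A minimal sketch of the idea, not the actual CCNx mechanism: chunks received during a short contact are written to persistent storage, so a later contact need only fetch the chunks still missing rather than restarting the transfer. All names and the on-disk layout below are assumptions.

```python
# Persistent chunk cache sketch: received chunks survive cache eviction and
# node restarts, so disrupted transfers can resume. Hypothetical layout.
import os

CACHE_DIR = "/tmp/ccn_cache"  # hypothetical persistent cache location

def chunk_path(content_name, seq):
    safe = content_name.strip("/").replace("/", "_")
    return os.path.join(CACHE_DIR, f"{safe}.{seq}")

def store_chunk(content_name, seq, data):
    """Persist a received chunk to disk as it arrives."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    with open(chunk_path(content_name, seq), "wb") as f:
        f.write(data)

def missing_chunks(content_name, total_chunks):
    """On the next contact, request only the chunks not yet on disk."""
    return [s for s in range(total_chunks)
            if not os.path.exists(chunk_path(content_name, s))]

store_chunk("/demo/video", 0, b"...")
print(missing_chunks("/demo/video", 4))  # -> [1, 2, 3]
```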
Abstract:
The current state of health and biomedicine includes an enormous number of heterogeneous data "silos", collected for different purposes and represented differently, that are presently impossible to share or analyze in toto. The greatest challenge for large-scale and meaningful analyses of health-related data is to achieve a uniform data representation for data extracted from heterogeneous source representations. Based upon an analysis and categorization of heterogeneities, a process for achieving comparable data content by using a uniform terminological representation is developed. This process addresses the types of representational heterogeneities that commonly arise in healthcare data integration problems. Specifically, it uses a reference terminology and associated "maps" to transform heterogeneous data into a standard representation for comparability and secondary use. Capturing the quality and precision of the "maps" between local terms and reference terminology concepts enhances the meaning of the aggregated data, empowering end users with better-informed queries for subsequent analyses. A data integration case study in the domain of pediatric asthma illustrates the development and use of a reference terminology for creating comparable data from heterogeneous source representations. The contribution of this research is a generalized process for the integration of data from heterogeneous source representations; this process can be applied and extended to other problems where heterogeneous data need to be merged.
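A minimal sketch of the mapping step this process describes, with invented codes and ratings: local source terms are transformed to reference-terminology concepts, and each map carries quality and precision ratings that downstream queries can filter on.

```python
# Terminology-map sketch: all concept codes, quality, and precision labels
# below are hypothetical placeholders for illustration.

# (local_system, local_term) -> (reference concept, map quality, map precision)
TERM_MAPS = {
    ("siteA", "asthma, unspecified"): ("REF:Asthma", "high", "exact"),
    ("siteB", "wheezing episode"):    ("REF:Asthma", "medium", "broader-than"),
}

def to_reference(system, term):
    """Transform one local term to the uniform reference representation."""
    concept, quality, precision = TERM_MAPS[(system, term)]
    return {"concept": concept, "map_quality": quality,
            "map_precision": precision, "source": (system, term)}

records = [("siteA", "asthma, unspecified"), ("siteB", "wheezing episode")]
normalized = [to_reference(*r) for r in records]

# A better-informed query: keep only exact, high-quality maps.
exact = [n for n in normalized
         if n["map_quality"] == "high" and n["map_precision"] == "exact"]
print(len(normalized), len(exact))  # aggregated vs. strictly comparable records
```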
Abstract:
OBJECTIVE: To determine whether algorithms developed for the World Wide Web can be applied to the biomedical literature in order to identify articles that are important as well as relevant. DESIGN AND MEASUREMENTS: A direct comparison of eight algorithms: simple PubMed queries, clinical queries (sensitive and specific versions), vector cosine comparison, citation count, journal impact factor, PageRank, and machine learning based on polynomial support vector machines. The objective was to prioritize important articles, defined as being included in a pre-existing bibliography of important literature in surgical oncology. RESULTS: Citation-based algorithms were more effective than noncitation-based algorithms at identifying important articles. The most effective strategies were simple citation count and PageRank, which on average identified over six important articles in the first 100 results, compared to 0.85 for the best noncitation-based algorithm (p < 0.001). The authors saw similar differences between citation-based and noncitation-based algorithms at 10, 20, 50, 200, 500, and 1,000 results (p < 0.001). Citation lag affects the performance of PageRank more than that of simple citation count. However, in spite of citation lag, citation-based algorithms remain more effective than noncitation-based algorithms. CONCLUSION: Algorithms that have proved successful on the World Wide Web can be applied to biomedical information retrieval. Citation-based algorithms can help identify important articles within large sets of relevant results. Further studies are needed to determine whether citation-based algorithms can effectively meet actual user information needs.
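For concreteness, a minimal sketch of the two strongest strategies, citation count and PageRank, run over a tiny hypothetical citation graph (dangling-node mass is ignored for brevity):

```python
# Rank a small hypothetical citation graph by raw citation count and by
# PageRank. The graph and scores are illustrative, not study data.

def pagerank(cites, d=0.85, iters=50):
    """cites: {article: set of articles it cites}. Returns PageRank scores.
    Simplified: mass from articles with no outgoing citations is dropped."""
    nodes = set(cites) | {v for vs in cites.values() for v in vs}
    pr = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for n, outs in cites.items():
            for v in outs:
                nxt[v] += d * pr[n] / len(outs)
        pr = nxt
    return pr

cites = {"A": {"C"}, "B": {"C"}, "D": {"B", "C"}, "C": set()}
counts = {n: sum(n in outs for outs in cites.values()) for n in cites}
pr = pagerank(cites)
print(sorted(cites, key=counts.get, reverse=True))  # citation-count order
print(sorted(cites, key=pr.get, reverse=True))      # PageRank order
```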
Abstract:
Information overload is a significant problem for modern medicine. Searching MEDLINE for common topics often retrieves more relevant documents than users can review. Therefore, we must identify documents that are not only relevant, but also important. Our system ranks articles using citation counts and the PageRank algorithm, incorporating data from the Science Citation Index. However, citation data is usually incomplete. Therefore, we explore the relationship between the quantity of citation information available to the system and the quality of the result ranking. Specifically, we test the ability of citation count and PageRank to identify "important articles" as defined by experts from large result sets with decreasing citation information. We found that PageRank performs better than simple citation counts, but both algorithms are surprisingly robust to information loss. We conclude that even an incomplete citation database is likely to be effective for importance ranking.
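The robustness test this abstract describes can be sketched by randomly deleting a growing fraction of citation links, re-ranking, and checking whether the expert-defined important article still ranks at the top. A self-contained toy version using citation counts as the ranker (the graph and "important" set are invented):

```python
# Simulate an incomplete citation database by dropping random links, then
# measure whether the important article survives at the top of the ranking.
import random

def citation_counts(cites):
    """Rank by in-degree; stands in for the count/PageRank rankers above."""
    nodes = set(cites) | {v for vs in cites.values() for v in vs}
    return {n: sum(n in outs for outs in cites.values()) for n in nodes}

def drop_citations(cites, fraction, rng):
    """Delete each citation link independently with the given probability."""
    return {n: {v for v in outs if rng.random() > fraction}
            for n, outs in cites.items()}

cites = {"A": {"C"}, "B": {"C"}, "D": {"B", "C"}, "C": set()}
important = {"C"}                      # hypothetical expert-defined set
rng = random.Random(0)
for frac in (0.0, 0.25, 0.5, 0.75):
    degraded = drop_citations(cites, frac, rng)
    ranking = sorted(degraded, key=citation_counts(degraded).get, reverse=True)
    print(f"dropped {frac:.0%}: important in top-1 = {ranking[0] in important}")
```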
Abstract:
People often use tools to search for information. In order to improve the quality of an information search, it is important to understand how internal information, which is stored in the user's mind, and external information, which is represented by the interface of tools, interact with each other. How information is distributed between internal and external representations significantly affects information search performance. However, few studies have examined the relationship between types of interface and types of search task in the context of information search. For a distributed information search task, how data are distributed, represented, and formatted significantly affects user search performance in terms of response time and accuracy. Guided by UFuRT (User, Function, Representation, Task), a human-centered process, I propose a search model and a task taxonomy. The model defines its relationship with other existing information models. The taxonomy clarifies the legitimate operations for each type of search task over relational data. Based on the model and taxonomy, I have also developed interface prototypes for the search tasks of relational data, and these prototypes were used for experiments. The experiments described in this study are of a within-subject design with a sample of 24 participants recruited from the graduate schools located in the Texas Medical Center. Participants performed one-dimensional nominal search tasks over nominal, ordinal, and ratio displays, and one-dimensional nominal, ordinal, interval, and ratio search tasks over table and graph displays. Participants also performed the same task and display combinations for two-dimensional searches. Distributed cognition theory has been adopted as a theoretical framework for analyzing and predicting the search performance over relational data. It has been shown that the representation dimensions and data scales, as well as the search task types, are the main factors in determining search efficiency and effectiveness. In particular, the more external representations are used, the better the search task performance, and the results suggest that ideal search performance occurs when the question type and the corresponding data scale representation match. The implications of the study lie in contributing to the effective design of search interfaces for relational data, especially laboratory results, which are often used in healthcare activities.
Abstract:
Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality for head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and an on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC's remote audit using the IMRT H&N phantom. The purpose of this project is to evaluate possible causes of the H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity, defined by total MU (1460-3466), number of segments (54-225), and modulation complexity scores (MCS) (0.181-0.609), were created in Pinnacle v.8m. These plans were delivered to the RPC's H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, MCS = 0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines, and the dose distribution was calculated using a different 6 MV beam model. Radiochromic film and thermoluminescent dosimeters (TLDs) within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated dose distributions were compared to evaluate dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, the number of segments, or the MCS resulted in no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machine as well as the beam model (a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) also did not show any clear trend toward increased dosimetric error using the same criteria indicated above.
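The gamma index mentioned above combines a dose-difference criterion with a distance-to-agreement criterion; a point passes where gamma ≤ 1. A minimal one-dimensional sketch with hypothetical dose profiles (clinical tools operate on 2-D or 3-D dose grids with finer search):

```python
# 1-D gamma-index sketch for a criterion such as ±7%/4 mm. For each measured
# point, take the minimum over candidate points of
#   sqrt((distance/DTA)^2 + (dose difference / dose tolerance)^2).
# Profiles below are hypothetical, not the phantom measurements.
import numpy as np

def gamma_1d(x, measured, calculated, dose_tol=0.07, dta_mm=4.0):
    """Gamma value at each measured point; a point passes where gamma <= 1."""
    gammas = []
    for xm, dm in zip(x, measured):
        dist2 = ((x - xm) / dta_mm) ** 2
        dose2 = ((calculated - dm) / (dose_tol * measured.max())) ** 2
        gammas.append(np.sqrt(dist2 + dose2).min())
    return np.array(gammas)

x = np.linspace(0.0, 40.0, 81)                       # position, mm
measured = np.exp(-((x - 20.0) / 10.0) ** 2)         # hypothetical profile
calculated = 1.03 * np.exp(-((x - 20.5) / 10.0) ** 2)  # slightly shifted/scaled

gamma = gamma_1d(x, measured, calculated)
print(f"pass rate: {(gamma <= 1).mean():.1%}")
```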