Abstract:
The focus of this work is to develop predictive capability for the physical and chemical properties of processed linear low density polyethylene (LLDPE)/graphene nanoplatelet composites. Composites made from LLDPE reinforced with 1, 2, 4, 6, 8, and 10 wt% grade C graphene nanoplatelets (C-GNP) were processed in a twin screw extruder at three different screw and feeder speeds (50, 100, and 150 rpm). These processing conditions were used to optimize the following properties: thermal conductivity, crystallization temperature, degradation temperature, and tensile strength, while these properties were predicted with an artificial neural network (ANN). The first three properties increased with increases in both screw speed and C-GNP content. The tensile strength reached a maximum value at 4 wt% C-GNP and a speed of 150 rpm, as this represented the optimum condition for stress transfer through the amorphous chains of the matrix to the C-GNP. An ANN can be confidently used as a tool to predict these material properties before investing in development programs and actual manufacturing, significantly saving money, time, and effort.
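The ANN-based property prediction described above can be sketched with a tiny from-scratch feed-forward network. Everything below is a hypothetical stand-in: the data points, the single hidden layer of six tanh units, and the learning rate are illustrative choices, not the study's dataset or architecture.

```python
import math
import random

random.seed(0)

# Hypothetical training data: (screw speed [rpm], C-GNP [wt%]) -> normalized
# tensile strength, peaking near 4 wt% at 150 rpm as the abstract reports.
data = [((50, 0), 0.50), ((50, 4), 0.62), ((50, 10), 0.48),
        ((100, 0), 0.55), ((100, 4), 0.75), ((100, 10), 0.52),
        ((150, 0), 0.60), ((150, 4), 0.90), ((150, 10), 0.55)]

def scale(x):
    # Normalize both inputs to roughly [0, 1]
    speed, gnp = x
    return (speed / 150.0, gnp / 10.0)

H = 6  # hidden units (illustrative)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(xs):
    h = [math.tanh(wi[0] * xs[0] + wi[1] * xs[1] + bi) for wi, bi in zip(w1, b1)]
    return sum(wj * hj for wj, hj in zip(w2, h)) + b2, h

def mse():
    return sum((forward(scale(x))[0] - y) ** 2 for x, y in data) / len(data)

lr = 0.1
loss0 = mse()
for _ in range(2000):  # plain stochastic gradient descent
    for x, y in data:
        xs = scale(x)
        yhat, h = forward(xs)
        err = yhat - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - h[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * h[j]
            w1[j][0] -= lr * grad_h * xs[0]
            w1[j][1] -= lr * grad_h * xs[1]
            b1[j] -= lr * grad_h
        b2 -= lr * err

print(round(loss0, 4), "->", round(mse(), 4))
```

Once trained on measured composites, such a surrogate can be queried across the (speed, wt%) grid before committing to new extrusion runs, which is the cost-saving role the abstract ascribes to the ANN.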
Abstract:
A large eddy simulation is performed to study the deflagration-to-detonation transition phenomenon in an obstructed channel containing a premixed stoichiometric hydrogen–air mixture. Two-dimensional filtered reactive Navier–Stokes equations are solved utilizing the artificially thickened flame (ATF) approach for modeling sub-grid scale combustion. To include the effect of induction time, a 27-step detailed mechanism is utilized along with an in situ adaptive tabulation (ISAT) method to reduce the computational cost of the detailed chemistry. The results show that in the slow flame propagation regime, flame–vortex interaction and the resulting flame folding and wrinkling are the main mechanisms for the increase of the flame surface and consequently the acceleration of the flame. Furthermore, at high speed, the major mechanisms responsible for flame propagation are repeated reflected shock–flame interactions and the resulting baroclinic vorticity. These interactions intensify the rate of heat release and maintain the turbulence and flame speed at a high level. During the flame acceleration, the turbulent flame is seen to enter the ‘thickened reaction zones’ regime. It is therefore necessary to utilize a chemistry-based combustion model with detailed chemical kinetics to properly capture the salient features of fast deflagration propagation.
Abstract:
The objective of this paper is to perform a quantitative comparison of Dweet.io and SensibleThings from different aspects. With the fast development of the Internet of Things, IoT platforms face ever bigger challenges. This paper evaluates both systems in four parts. The first part gives a general comparison of the input methods and output functions provided by the platforms. The second part presents a security comparison, which focuses on the protocol types of the packets and the stability of the communication. The third part examines scalability as the transmitted values become larger. The fourth part examines scalability when the processes are sped up. From these comparisons, I concluded that Dweet.io is easier to use on devices and supports more programming languages; it provides visualization, and its data can be shared. Dweet.io is safer and more stable than SensibleThings. SensibleThings provides more openness and has better scalability in handling large values at high speed.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and to gather all information relevant to their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged, striving to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale to scan the whole publicly available biomedical literature and extract and aggregate the information found within, while automatically normalizing the variability of natural language statements. Among different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009–2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine-learning approaches and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by the end users.
This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the general credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes and gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as types and directions for these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
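The edge-prediction framing above can be illustrated with the simplest of link-prediction features, a common-neighbour count over a toy event network. The gene names and edges below are hypothetical, and the thesis itself uses a richer supervised feature set rather than this single heuristic.

```python
# Toy undirected gene/gene-product network; each edge stands for an
# extracted interaction. Names and edges are hypothetical.
edges = {("A", "B"), ("A", "C"), ("B", "C"), ("B", "D"), ("C", "D"), ("D", "E")}
nodes = sorted({n for e in edges for n in e})

def neighbours(n):
    return {b if a == n else a for a, b in edges if n in (a, b)}

def common_neighbour_score(u, v):
    # One classical link-prediction feature: shared interaction partners
    return len(neighbours(u) & neighbours(v))

# Rank every absent edge as a candidate hypothesis.
candidates = sorted(
    ((u, v) for u in nodes for v in nodes
     if u < v and (u, v) not in edges and (v, u) not in edges),
    key=lambda p: common_neighbour_score(*p),
    reverse=True)
print(candidates[0])  # -> ('A', 'D'): A and D share two partners, B and C
```

A supervised variant, as in the thesis, would feed many such pairwise features (together with edge types and directions) into a trained classifier instead of ranking by a single score.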
Abstract:
Environmental impacts of wind energy facilities increasingly cause concern, a central issue being bats and birds killed by rotor blades. Two approaches have been employed to assess collision rates: carcass searches and surveys of animals prone to collisions. Carcass searches can provide an estimate for the actual number of animals being killed but they offer little information on the relation between collision rates and, for example, weather parameters due to the time of death not being precisely known. In contrast, a density index of animals exposed to collision is sufficient to analyse the parameters influencing the collision rate. However, quantification of the collision rate from animal density indices (e.g. acoustic bat activity or bird migration traffic rates) remains difficult. We combine carcass search data with animal density indices in a mixture model to investigate collision rates. In a simulation study we show that the collision rates estimated by our model were at least as precise as conventional estimates based solely on carcass search data. Furthermore, if certain conditions are met, the model can be used to predict the collision rate from density indices alone, without data from carcass searches. This can reduce the time and effort required to estimate collision rates. We applied the model to bat carcass search data obtained at 30 wind turbines in 15 wind facilities in Germany. We used acoustic bat activity and wind speed as predictors for the collision rate. The model estimates correlated well with conventional estimators. Our model can be used to predict the average collision rate. It enables an analysis of the effect of parameters such as rotor diameter or turbine type on the collision rate. The model can also be used in turbine-specific curtailment algorithms that predict the collision rate and reduce this rate with a minimal loss of energy production.
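The core idea of combining corrected carcass counts with an activity index can be sketched as follows. This is a much-simplified stand-in for the authors' mixture model, and all numbers (counts, detection probability, activity indices) are hypothetical.

```python
# Simplified sketch (not the authors' full mixture model): carcass counts
# corrected by a searcher detection probability give per-turbine collision
# estimates, which are then tied to an acoustic activity index so that the
# rate can later be predicted from activity alone.
turbines = [
    {"carcasses": 2, "detection_p": 0.4, "activity": 120},
    {"carcasses": 1, "detection_p": 0.4, "activity": 60},
    {"carcasses": 4, "detection_p": 0.4, "activity": 240},
]

# Detection-corrected collision estimate per turbine: N_hat = C / p
for t in turbines:
    t["n_hat"] = t["carcasses"] / t["detection_p"]

# Proportionality constant linking activity to collisions
# (collisions ~ k * activity), fitted by least squares through the origin.
k = (sum(t["activity"] * t["n_hat"] for t in turbines)
     / sum(t["activity"] ** 2 for t in turbines))

# With k in hand, a collision estimate needs only the activity index,
# which is the condition under which carcass searches can be skipped.
print(round(k * 100, 2), "expected collisions at an activity index of 100")
```

The paper's model additionally handles carcass persistence, search intervals, and wind speed as a covariate, which this sketch omits.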
Abstract:
One key step in the industrial development of a tidal energy device is the testing of scale prototype devices within a controlled laboratory environment. At present, there is no available experimental protocol which addresses in a quantitative manner the differences that can be expected between results obtained from the different types of facilities currently employed for this type of testing. As a consequence, where differences between results are found, it has been difficult to confirm the extent to which these differences relate to the device performance or to the test facility type. In the present study, a comparative "Round Robin" testing programme was conducted as part of the EC FP VII MaRINET program in order to evaluate the impact of different experimental facilities on the test results. The aim of the trials was to test the same model tidal turbine in four different test facilities to explore the sensitivity of the results to the choice of facility. The facilities comprised two towing tanks, of very different size, and two circulating water channels. Performance assessments in terms of torque, drag and inflow speed showed very similar results in all facilities. However, expected differences between the tank types (circulating and towing) were observed in the fluctuations of the torque and drag measurements. The main facility parameters which can influence the behaviour of the turbine were identified; in particular, the effect of blockage was shown to be significant in cases yielding high thrust coefficients, even at relatively small blockage ratios.
Abstract:
Small particles and their dynamics are of widespread interest due both to their unique properties and their ubiquity. Here, we investigate several classes of small particles: colloids, polymers, and liposomes. All these particles, with sizes on the order of microns, are similar in that they are large enough to be visualized in microscopes, but small enough to be significantly influenced by thermal (Brownian) motion. Further, similar optical microscopy and experimental techniques are commonly employed to investigate all of these particles. In this work, we develop single particle tracking techniques, which allow thorough characterization of individual particle dynamics, observing many behaviors that would be overlooked by methods which time- or ensemble-average. The various particle systems are also similar in that the signal-to-noise ratio was frequently a significant concern; in many cases, development of image analysis and particle tracking methods optimized for low signal-to-noise was critical to the experimental observations. The simplest particles studied, in terms of their interaction potentials, were chemically homogeneous (though optically anisotropic) hard-sphere colloids. Using these spheres, we explored the comparatively underdeveloped conjunction of translation, rotation, and particle hydrodynamics. Building on this, the dynamics of clusters of spherical colloids were investigated, exploring how shape anisotropy influences translation and rotation. Transitioning away from uniform hard-sphere potentials, the interactions of amphiphilic colloidal particles were explored, observing the effects of hydrophilic and hydrophobic interactions on pattern assembly and inter-particle dynamics. Interaction potentials were altered in a different fashion by working with suspensions of liposomes, which, while homogeneous, introduce the possibility of deformation.
Even further degrees of freedom were introduced by observing the interaction of particles, and then polymers, within polymer suspensions or along lipid tubules. Throughout, examination of the trajectories revealed that while by some measures the averaged behaviors accorded with expectation, the closer examination made possible by single particle tracking often revealed novel and unexpected phenomena.
Abstract:
The off-cycle refrigerant mass migration has a direct influence on on-cycle performance, since compressor energy is necessary to redistribute the refrigerant mass. No studies available in the open literature to date have experimentally measured the lubricant migration within a refrigeration system during cycling or stop/start transients. Therefore, experimental procedures measuring the refrigerant and lubricant migration through the major components of a refrigeration system during stop/start transients were developed and implemented, and results identifying the underlying physics are presented. The refrigerant and lubricant migration of an R134a automotive A/C system (utilizing a fixed orifice tube, minichannel condenser, plate-and-fin evaporator, U-tube type accumulator and fixed displacement compressor) was measured across five sections divided by ball valves. Using the Quick-Closing Valve Technique (QCVT) combined with the Remove and Weigh Technique (RWT), with liquid nitrogen as the condensing agent, resulted in a measurement uncertainty of 0.4 percent with respect to the total refrigerant mass in the system. The lubricant mass distribution was determined by employing three different techniques: Remove and Weigh, Mix and Sample, and Flushing. To employ the Mix and Sample Technique, a device (the Mix and Sample Device) was built. A method to separate the refrigerant and lubricant was developed with an accuracy, after separation, of 0.04 grams of refrigerant left in the lubricant. When applying the three techniques, the total amount of lubricant mass in the system was determined to within two percent. The combination of measurement results, infrared photography, and high-speed and real-time videography provides unprecedented insight into the mechanisms of refrigerant and lubricant migration during stop-start operation.
During the compressor stop period, the primary refrigerant mass migration is caused by, and follows, the diminishing pressure difference across the expansion device. The secondary refrigerant migration is caused by a pressure gradient resulting from thermal nonequilibrium within the system and involves only vapor-phase refrigerant. Lubricant migration is proportional to the refrigerant mass during the primary refrigerant mass migration; during the secondary refrigerant mass migration, lubricant does not migrate. The start-up refrigerant mass migration is caused by an imbalance of the refrigerant mass flow rates across the compressor and expansion device. The higher compressor refrigerant mass flow rate was a result of the entrainment of foam into the U-tube of the accumulator. The lubricant mass migration during start-up was not proportional to the refrigerant mass migration. The presence of water condensate on the evaporator affected the refrigerant mass migration during the compressor stop period: owing to an evaporative cooling effect, the evaporator held 56 percent of the total refrigerant mass in the system after three minutes of compressor stop time, compared to 25 percent when no water condensate was present on the evaporator coil. Foam entrainment led to a faster lubricant and refrigerant mass migration out of the accumulator than liquid entrainment through the hole at the bottom of the U-tube. The latter was observed when water condensate was present on the evaporator coil because, as a result of the higher amount of refrigerant mass in the evaporator before start-up, the entrainment of foam into the U-tube of the accumulator ceased before the steady-state refrigerant mass distribution was reached.
Abstract:
Wind-generated waves in the Kara, Laptev, and East Siberian Seas are investigated using altimeter data from Envisat RA-2 and SARAL-AltiKa. Only isolated ice-free zones were selected for analysis, so that the wind seas can be treated as pure wind-generated waves without any contamination by ambient swell. Such zones were identified using ice concentration data from microwave radiometers. Altimeter data, both significant wave height (SWH) and wind speed, for these areas were then obtained for the period 2002-2012 from Envisat RA-2 measurements, and for 2013 from SARAL-AltiKa. Dependencies of dimensionless SWH and wavelength on the dimensionless spatial scale of wave generation are compared to known empirical dependencies for fetch-limited wind wave development. We further check the sensitivity of the Ka- and Ku-bands and discuss the new possibilities that AltiKa's higher resolution can open.
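The dimensionless scaling referred to above divides wave height and generation scale by wind speed. A minimal sketch, using a textbook JONSWAP-type fetch law whose coefficient (1.6e-3) is a standard approximation rather than the dependence fitted in this study:

```python
g = 9.81  # gravitational acceleration, m/s^2

def dimensionless(hs_m, fetch_m, u10_ms):
    """Scale SWH and generation scale by wind speed:
    H~ = g*Hs/U10^2, X~ = g*X/U10^2."""
    return g * hs_m / u10_ms ** 2, g * fetch_m / u10_ms ** 2

def fetch_limited_swh(fetch_m, u10_ms):
    """Textbook fetch-limited growth law H~ = 1.6e-3 * sqrt(X~)
    (an approximation, not the dependence fitted in the study)."""
    x_nd = g * fetch_m / u10_ms ** 2
    return 1.6e-3 * x_nd ** 0.5 * u10_ms ** 2 / g

# A 100 km ice-free zone under a 10 m/s wind
hs = fetch_limited_swh(100e3, 10.0)
print(round(hs, 2), "m")  # roughly 1.6 m of wind sea
```

Plotting altimeter-derived H~ against X~ for many ice-free zones, and comparing the cloud of points to such power laws, is the kind of comparison the abstract describes.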
Abstract:
Cranial cruciate ligament (CCL) deficiency is the leading cause of lameness affecting the stifle joints of large-breed dogs, especially Labrador Retrievers. Although CCL disease has been studied extensively, its exact pathogenesis and the primary cause leading to CCL rupture remain controversial. However, weakening secondary to repetitive microtrauma is currently believed to cause the majority of CCL instabilities diagnosed in dogs. Techniques of gait analysis have become the most productive tools to investigate normal and pathological gait in human and veterinary subjects. The inverse dynamics analysis approach models the limb as a series of connected linkages and integrates morphometric data to yield information about the net joint moment, patterns of muscle power, and joint reaction forces. The results of these studies have greatly advanced our understanding of the pathogenesis of joint diseases in humans. A muscular imbalance between the hamstring and quadriceps muscles has been suggested as a cause of anterior cruciate ligament rupture in female athletes, and based on these findings, neuromuscular training programs leading to a relative risk reduction of up to 80% have been designed. In spite of the cost and morbidity associated with CCL disease and its management, very few studies have focused on the inverse dynamics gait analysis of this condition in dogs. The general goals of this research were (1) to further define gait mechanics in Labrador Retrievers with and without CCL deficiency, (2) to identify individual dogs that are susceptible to CCL disease, and (3) to characterize their gait. The mass, location of the center of mass (COM), and mass moment of inertia of hind limb segments were calculated using a noninvasive method based on computed tomography of normal and CCL-deficient Labrador Retrievers.
Regression models were developed to determine predictive equations to estimate body segment parameters on the basis of simple morphometric measurements, providing a basis for nonterminal studies of inverse dynamics of the hind limbs in Labrador Retrievers. Kinematic, ground reaction force (GRF) and morphometric data were combined in an inverse dynamics approach to compute hock, stifle and hip net moments, powers and joint reaction forces (JRF) during trotting in normal, CCL-deficient or sound contralateral limbs. Reductions in joint moment, power, and loads observed in CCL-deficient limbs were interpreted as modifications adopted to reduce or avoid painful mobilization of the injured stifle joint. Lameness resulting from CCL disease affected predominantly the reaction forces during the braking phase and the extension during push-off. Kinetics also identified a greater joint moment and power in the contralateral limbs compared with normal, particularly of the stifle extensor muscle group, which may correlate with the lameness observed, but also with the predisposition of contralateral limbs to CCL deficiency in dogs. For the first time, surface EMG patterns of major hind limb muscles during trotting gait of healthy Labrador Retrievers were characterized and compared with kinetic and kinematic data of the stifle joint. The use of surface EMG highlighted the co-contraction patterns of the muscles around the stifle joint, which were documented during transition periods between flexion and extension of the joint, but also during the flexion observed in the weight-bearing phase. Identification of possible differences in EMG activation characteristics between healthy patients and dogs with or predisposed to orthopedic and neurological disease may help in understanding the neuromuscular abnormalities and gait mechanics of such disorders in the future.
Conformation parameters, obtained from femoral and tibial radiographs, hind limb CT images, and dual-energy X-ray absorptiometry, of hind limbs predisposed to CCL deficiency were compared with the conformation parameters of hind limbs at low risk. A combination of the tibial plateau angle and femoral anteversion angle measured on radiographs was determined to be optimal for discriminating predisposed and non-predisposed limbs for CCL disease in Labrador Retrievers using a receiver operating characteristic curve analysis. In the future, the tibial plateau angle (TPA) and femoral anteversion angle (FAA) may be used to screen dogs suspected of being susceptible to CCL disease. Last, kinematics and kinetics across the hock, stifle and hip joints in Labrador Retrievers presumed to be at low risk based on their radiographic TPA and FAA were compared to gait data from dogs presumed to be predisposed to CCL disease for overground and treadmill trotting. For overground trials, the extensor moment at the hock and the energy generated around the hock and stifle joints were increased in predisposed limbs compared to non-predisposed limbs. For treadmill trials, dogs qualified as predisposed to CCL disease held their stifle at a greater degree of flexion, extended their hock less, and generated more energy around the stifle joints while trotting compared with dogs at low risk. This characterization of the gait mechanics of Labrador Retrievers at low risk of or predisposed to CCL disease may help in developing and monitoring preventive exercise programs to decrease gastrocnemius dominance and strengthen the hamstring muscle group.
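The ROC-based discrimination step can be sketched with the Mann-Whitney formulation of the area under the ROC curve. The scores below are hypothetical stand-ins for a combined TPA/FAA discriminant, not measured angles.

```python
def auc(scores_pos, scores_neg):
    """Mann-Whitney form of ROC AUC: the probability that a randomly
    chosen predisposed limb scores higher than a randomly chosen
    non-predisposed one (ties count as 0.5)."""
    wins = sum((sp > sn) + 0.5 * (sp == sn)
               for sp in scores_pos for sn in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical combined TPA+FAA discriminant scores
predisposed = [0.9, 0.8, 0.75, 0.6]
low_risk = [0.7, 0.5, 0.4, 0.3]
print(auc(predisposed, low_risk))  # -> 0.9375
```

An AUC near 1 indicates that thresholding the combined angle measurement separates the two groups well, which is what makes TPA and FAA usable as a screening tool.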
Abstract:
Fire is a frequent process in the landscapes of northern Portugal. Previous studies have shown that holm oak (Quercus rotundifolia) woodlands persist after the passage of fire and help to reduce its intensity and rate of spread. The main objectives of this study were to understand and model the effect of holm oak woodlands on fire behaviour at the landscape level in the upper basin of the Sabor river, located in northeastern Portugal. The impact of holm oak woodlands on fire behaviour was tested in terms of area and configuration according to scenarios simulating the possible distribution of these vegetation units in the landscape, considering holm oak cover percentages of 2.2% (Low), 18.1% (Moderate), 26.0% (High), and 39.8% (Rivers). The main purpose of these scenarios was to test (1) the role of holm oak woodlands in fire behaviour and (2) how the configuration of holm oak patches can help to reduce fireline intensity and burned area. Fire behaviour was modelled with FlamMap, simulating fireline intensity and rate of fire spread on the basis of fuel models associated with each land use and land cover class present in the study area, as well as topographic (elevation, slope and aspect) and climatic (humidity and wind speed) factors. Two fuel models were also used for the holm oak class (interior and edge areas), developed from field data collected in the region. The FRAGSTATS software was used to analyse the spatial patterns of the fireline intensity classes, using the metrics Class Area (CA), Number of Patches (NP) and Largest Patch Index (LPI). The results indicated that fireline intensity and rate of fire spread varied between scenarios and between fuel models for the holm oak woodland.
The mean fireline intensity and the mean rate of fire spread decreased as the percentage of holm oak woodland area in the landscape increased. The metrics CA, NP and LPI also varied between scenarios and fuel models for the holm oak woodland, decreasing as the percentage of holm oak woodland area increased. This study showed that variation in the cover percentage and spatial configuration of holm oak woodlands influences fire behaviour, reducing, on average, fireline intensity and rate of spread, and suggesting that holm oak woodlands can be used as a preventive silvicultural measure to reduce fire risk in this region.
Abstract:
Sea-level variations have a significant impact on coastal areas, and their prediction is among the most critical information needs associated with the marine environment. Various methods exist for this purpose. In this study, the influence on sea level of local parameters such as pressure, temperature, and wind speed along the northern coast of the Persian Gulf is investigated, together with global parameters such as the North Atlantic Oscillation (NAO) index, and statistical models for the prediction of sea level are presented. In the next step, an artificial neural network is used to predict sea level for the first time in this region, and the results of the models are compared. Prediction using the statistical models yielded a correlation coefficient of R = 0.84 and a root mean square error (RMSE) of 21.9 cm for the Bushehr station, and R = 0.85 and an RMSE of 48.4 cm for the Rajai station. The best neural network for prediction had 4 layers, with six neurons in each middle layer, and produced reliable results, with R = 0.90126 and an RMSE of 13.7 cm for the Bushehr station, and R = 0.93916 and an RMSE of 22.6 cm for the Rajai station. Therefore, the proposed methodology could be successfully used in the study area.
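The skill scores quoted above, the correlation coefficient R and the RMSE, can be computed as follows. The observed/predicted series are hypothetical, not the Bushehr or Rajai records.

```python
import math

def pearson_r(obs, pred):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def rmse(obs, pred):
    """Root mean square error between observations and predictions."""
    return math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs))

# Hypothetical hourly sea-level anomalies (cm): observed vs. modelled
obs = [10, 20, 15, 30, 25, 18]
pred = [12, 18, 14, 27, 26, 20]
print(round(pearson_r(obs, pred), 3), round(rmse(obs, pred), 1))  # -> 0.959 2.0
```

Comparing models by both scores matters because a prediction can correlate well with observations (high R) while still carrying a large systematic offset (high RMSE).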
Abstract:
This paper presents the determination of a mean solar radiation year and of a typical meteorological year for the region of Funchal on Madeira Island, Portugal. The data set includes hourly mean and extreme values for air temperature, relative humidity and wind speed, and hourly mean values for solar global and diffuse radiation, for the period 2004-2014, with a maximum data coverage of 99.7%. The determination of the mean solar radiation year consisted, in a first step, of averaging all values for each hour/day pair and, in a second step, of applying a five-day centred moving average to the hourly values. The determination of the typical meteorological year was based on Finkelstein-Schafer statistics, which make it possible to obtain a complete year of real measurements through the selection and combination of typical months, preserving the long-term averages while still allowing the analysis of short-term events. The typical meteorological year was validated by comparing its monthly averages with the long-term monthly averages. The values obtained were very close, so the typical meteorological year can accurately represent the long-term data series. The typical meteorological year can be used in the simulation of renewable energy systems, namely solar energy systems, and for predicting the energy performance of buildings.
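The Finkelstein-Schafer selection step compares each candidate month's empirical CDF with the long-term CDF for the same calendar month. A minimal sketch with hypothetical daily temperatures:

```python
import bisect

def cdf(sorted_vals, x):
    # Empirical CDF: fraction of values <= x
    return bisect.bisect_right(sorted_vals, x) / len(sorted_vals)

def fs_statistic(candidate, long_term):
    """Finkelstein-Schafer statistic: mean absolute difference between the
    candidate month's CDF and the long-term CDF at the candidate's values."""
    lt, cand = sorted(long_term), sorted(candidate)
    return sum(abs(cdf(lt, x) - cdf(cand, x)) for x in candidate) / len(candidate)

# Hypothetical mean daily temperatures (deg C): a long-term June record
# versus two candidate Junes from individual years.
long_term = [18, 19, 20, 20, 21, 22, 22, 23, 24, 25]
june_a = [19, 20, 21, 22, 23]  # close to the long-term distribution
june_b = [24, 25, 26, 27, 28]  # an unusually warm June
fa, fb = fs_statistic(june_a, long_term), fs_statistic(june_b, long_term)
print(fa, "<", fb)  # the more typical month scores lower and gets selected
```

In the full procedure, an FS value is computed per weather variable, the values are combined with weights, and the calendar month with the smallest weighted sum is copied into the typical year.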
Abstract:
In the standard Vehicle Routing Problem (VRP), we route a fleet of vehicles to deliver the demands of all customers such that the total distance traveled by the fleet is minimized. In this dissertation, we study variants of the VRP that minimize the completion time, i.e., we minimize the distance of the longest route. We call it the min-max objective function. In applications such as disaster relief efforts and military operations, the objective is often to finish the delivery or the task as soon as possible, not to plan routes with the minimum total distance. Even in commercial package delivery nowadays, companies are investing in new technologies to speed up delivery instead of focusing merely on the min-sum objective. In this dissertation, we compare the min-max and the standard (min-sum) objective functions in a worst-case analysis to show that the optimal solution with respect to one objective function can be very poor with respect to the other. The results motivate the design of algorithms specifically for the min-max objective. We study variants of min-max VRPs including one problem from the literature (the min-max Multi-Depot VRP) and two new problems (the min-max Split Delivery Multi-Depot VRP with Minimum Service Requirement and the min-max Close-Enough VRP). We develop heuristics to solve these three problems. We compare the results produced by our heuristics to the best-known solutions in the literature and find that our algorithms are effective. In the case where benchmark instances are not available, we generate instances whose near-optimal solutions can be estimated based on geometry. We formulate the Vehicle Routing Problem with Drones and carry out a theoretical analysis to show the maximum benefit from using drones in addition to trucks to reduce delivery time. The speed-up ratio depends on the number of drones loaded onto one truck and the speed of the drone relative to the speed of the truck.
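The contrast between the min-sum and min-max objectives can be made concrete with a brute-force two-vehicle example. The depot and customer coordinates below are hypothetical, chosen so that the two objectives pick different solutions.

```python
from itertools import permutations
from math import dist

depot = (0.0, 0.0)
customers = [(1, 0), (2, 0), (0, 1), (0, 2)]

def route_length(order):
    """Length of a depot -> customers -> depot route."""
    if not order:
        return 0.0
    pts = [depot, *order, depot]
    return sum(dist(a, b) for a, b in zip(pts, pts[1:]))

def best_routes(objective):
    """Brute force over assignments to 2 vehicles and visit orders."""
    best = None
    for mask in range(1 << len(customers)):
        g1 = [c for i, c in enumerate(customers) if mask >> i & 1]
        g2 = [c for i, c in enumerate(customers) if not (mask >> i & 1)]
        l1 = min(route_length(p) for p in permutations(g1)) if g1 else 0.0
        l2 = min(route_length(p) for p in permutations(g2)) if g2 else 0.0
        val = objective(l1, l2)
        if best is None or val < best[0]:
            best = (val, l1, l2)
    return best

min_sum = best_routes(lambda a, b: a + b)      # standard (min-sum) VRP
min_max = best_routes(lambda a, b: max(a, b))  # completion-time objective
print(round(min_sum[0], 2), round(min_max[0], 2))
```

Here the min-sum optimum loads all four customers onto one vehicle (total distance about 6.83, but that vehicle also finishes at 6.83), while the min-max optimum splits the customers into two routes of length 4 each, finishing the delivery sooner at the cost of extra total distance. This is exactly the tension the worst-case analysis in the dissertation formalizes.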