Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data traffic passing through that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatic theory. We show that in order to minimize the cost, the routes should be found based on the solution of these partial differential equations. In our formulation, the sensors are sources of information and are analogous to positive charges in electrostatics, the destinations are sinks of information and are analogous to negative charges, and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). In one application of our vector-field model, we offer a scheme for energy-efficient routing. Our routing scheme sets the permittivity coefficient to a higher value in regions of the network where nodes have high residual energy, and to a lower value in regions where the nodes have little energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case where there are multiple destinations. In the case of multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in defining the regions of attraction of the destinations and deciding how much communication load to assign to each destination to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that, in the optimal assignment of the communication load of the network to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of our vector field model is to find the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm to be applied during the design phase of a network to relocate the destinations so as to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
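As a rough illustration of the kind of formulation described above (the symbols D, rho, epsilon and this particular quadratic form are our own shorthand, not necessarily the paper's exact model), the routing problem can be read as

\[
\min_{\mathbf{D}} \; J \;=\; \int_{A} \frac{\lvert \mathbf{D}(\mathbf{x})\rvert^{2}}{\varepsilon(\mathbf{x})}\, dA
\qquad \text{subject to} \qquad \nabla\cdot\mathbf{D}(\mathbf{x}) \;=\; \rho(\mathbf{x}),
\]

where D is the information-flow vector field (routing follows its direction, its magnitude is the local traffic density), rho is the net data-generation density (positive at sensors, negative at destinations), and epsilon plays the role of the permittivity coefficient. Under this reading the optimality condition makes D/epsilon the gradient of a scalar potential, \(\mathbf{D} = -\varepsilon\nabla\phi\) with \(\nabla\cdot(\varepsilon\nabla\phi) = -\rho\), which is the electrostatics analogy invoked in the abstract.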
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a responsiveness test for aggregates, which we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, and we use the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to TCP data packets and use the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We exploit the analogy between our problem and multiple-access communication to design these signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of this orthogonality, performance does not degrade because of cross-interference between routers testing simultaneously. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
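A minimal sketch of the signature idea, assuming Walsh-Hadamard codes as the orthogonal signatures; the function names, chip length, drop rates, and the toy aggregate model are illustrative assumptions, not the CAPM implementation itself:

import numpy as np

def hadamard_signatures(n_routers, length=8):
    # Rows of a Walsh-Hadamard matrix are mutually orthogonal; the all-ones
    # first row is skipped so that every signature is zero-mean.
    H = np.array([[1]])
    while H.shape[0] < max(n_routers + 1, length):
        H = np.block([[H, H], [H, -H]])
    return H[1:n_routers + 1, :length]

def perturbation_rates(signature, base=0.01, depth=0.005):
    # Per-slot packet-drop probabilities: a small constant rate modulated by the signature.
    return base + depth * signature

def estimate_responsiveness(aggregate_rate, signature):
    # Correlate the mean-removed aggregate sending rate with one router's signature.
    # A responsive aggregate slows down when drops increase, so the estimate is negative.
    x = aggregate_rate - aggregate_rate.mean()
    return float(np.dot(x, signature) / len(signature))

# Toy example: two routers perturb the same aggregate simultaneously; the
# orthogonal signatures keep their estimates from interfering with each other.
rng = np.random.default_rng(0)
sigs = hadamard_signatures(n_routers=2, length=8)
drops = perturbation_rates(sigs[0]) + perturbation_rates(sigs[1])
rate = 100.0 - 2000.0 * drops + rng.normal(0.0, 0.5, 8)   # hypothetical responsive aggregate
print(estimate_responsiveness(rate, sigs[0]))             # about -10, i.e. strongly responsive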
Abstract:
Conventional hedonic techniques for estimating the value of local amenities rely on the assumption that households move freely among locations. We show that when moving is costly, the variation in housing prices and wages across locations may no longer reflect the value of differences in local amenities. We develop an alternative discrete-choice approach that models the household location decision directly, and we apply it to the case of air quality in US metro areas in 1990 and 2000. Because air pollution is likely to be correlated with unobservable local characteristics such as economic activity, we instrument for air quality using the contribution of distant sources to local pollution, excluding emissions from local sources, which are most likely to be correlated with local conditions. Our model yields an estimated elasticity of willingness to pay with respect to air quality of 0.34-0.42. These estimates imply that the median household would pay $149-$185 (in constant 1982-1984 dollars) for a one-unit reduction in average ambient concentrations of particulate matter. These estimates are three times greater than the marginal willingness to pay estimated by a conventional hedonic model using the same data. Our results are robust to a range of covariates, instrumenting strategies, and functional form assumptions. The findings also confirm the importance of instrumenting for local air pollution. © 2009 Elsevier Inc. All rights reserved.
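A generic random-utility reading of the discrete-choice location model described above (the notation below is a standard textbook form, not the paper's exact specification): household i derives utility

\[
U_{ij} \;=\; \beta \,\ln(q_{j}) \;+\; \alpha\,(w_{ij} - r_{j}) \;+\; \gamma' X_{j} \;+\; \xi_{j} \;+\; \epsilon_{ij}
\]

from locating in metro area j, where q_j is air quality, w_ij - r_j is income net of housing costs, X_j are observed local characteristics, xi_j are unobserved ones, and epsilon_ij is an idiosyncratic (possibly moving-cost-dependent) taste shock; the household chooses the j that maximizes U_ij, and willingness to pay for air quality is recovered from the ratio of the air-quality coefficient beta to the marginal utility of income alpha. Because xi_j is correlated with pollution, q_j is instrumented with the distant-source contribution, as the abstract describes.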
Abstract:
Knowing one's HIV status is particularly important in the setting of recent tuberculosis (TB) exposure. Blood tests for the assessment of tuberculosis infection, such as the QuantiFERON Gold In-Tube test (QFT; Cellestis Limited, Carnegie, Victoria, Australia), offer the possibility of simultaneous screening for TB and HIV with a single blood draw. We performed a cross-sectional analysis of all contacts of a highly infectious TB case in a large meatpacking factory. Twenty-two percent were foreign-born and 73% were black. Contacts were tested with both tuberculin skin testing (TST) and QFT. HIV testing was offered on an opt-out basis. Persons with a TST of 10 mm or greater, a positive QFT, and/or a positive HIV test were offered latent TB treatment. Three hundred twenty-six contacts were screened: TST results were available for 266 people, and an additional 24 reported a prior positive TST, for a total of 290 persons with any TST result (89.0%). Adequate QFT specimens were obtained from 312 persons (95.7%). Thirty-two persons had QFT results but did not return for TST reading. Twenty-two percent met the criteria for latent TB infection. Eighty-eight percent accepted HIV testing. Two (0.7%) were HIV-seropositive; both individuals were already aware of their HIV status, but one had stopped care a year previously. Neither of the HIV-seropositive persons had latent TB, but both were offered latent TB treatment per standard guidelines. This demonstrates that opt-out HIV testing combined with QFT in a large TB contact investigation was feasible and useful. HIV testing was also widely accepted. Pairing QFT with opt-out HIV testing should be strongly considered when possible.
Abstract:
When solid material is removed in order to create flow channels in a load-carrying structure, the strength of the structure decreases. On the other hand, a structure with channels is lighter and easier to transport as part of a vehicle. Here, we show that this trade-off can be turned to advantage in the design of a vascular mechanical structure. When the total amount of solid is fixed and the sizes, shapes, and positions of the channels can vary, it is possible to morph the flow architecture such that it endows the mechanical structure with maximum strength. The result is a multifunctional structure that offers not only mechanical strength but also the new capabilities required for volumetric functionalities such as self-healing and self-cooling. We illustrate the generation of such designs for strength and fluid flow for several classes of vasculatures: parallel channels, and trees with one, two, and three bifurcation levels. The flow regime in every channel is laminar and fully developed. In each case, we found that it is possible to select not only the channel dimensions but also their positions such that the entire structure offers more strength and less flow resistance when the total volume (or weight) and the total channel volume are fixed. We show that the minimized peak stress is smaller when the channel volume fraction (φ) is smaller and the vasculature is more complex, i.e., has more levels of bifurcation. Diminishing returns are reached in both directions, decreasing φ and increasing complexity. For example, when φ=0.02 the minimized peak stress of a design with one bifurcation level is only 0.2% greater than the peak stress of the optimized vascular design with two levels of bifurcation. © 2010 American Institute of Physics.
Abstract:
A female patient with an unremarkable family history developed, at the age of 30 months, an episode of diarrhoea, vomiting and lethargy which resolved spontaneously. At the age of 3 years, the patient again presented with vomiting, was subfebrile and hypoglycemic, fell into a coma, and developed seizures and sequelae involving the right side of the body. Urinary excretion of hexanoylglycine and suberylglycine was low during this metabolic decompensation. A study of pre- and post-prandial blood glucose and ketones over a period of 24 hours showed a normal glycaemic cycle but a failure to form ketones after 12 hours of fasting, suggesting a mitochondrial β-oxidation defect. Total blood carnitine was lowered, with unesterified carnitine being half of the lowest control value. A diagnosis of mild MCAD deficiency (MCADD) was based on the rates of [1-14C]octanoate and [9,10-3H]myristate oxidation and of octanoyl-CoA dehydrogenase activity being reduced to 25% of control values. Other mitochondrial fatty acid oxidation proteins were functionally normal. De novo acylcarnitine synthesis in whole blood samples incubated with deuterated palmitate was also typical of MCADD. Genetic studies showed that the patient was compound heterozygous, with a sequence variation in each of the two ACADM alleles; one carried the common c.985A>G mutation and the other a novel c.145C>G mutation. This is the first report of the ACADM c.145C>G mutation: it is located in exon 3 and causes the replacement of glutamine by glutamate at position 24 of the mature protein (Q24E). In association with heterozygosity for the c.985A>G mutation, this mutation is responsible for a mild MCADD phenotype, along with a clinical history corroborating the emerging view in the literature that patients with genotypes representing mild MCADD (high residual enzyme activity and low urinary levels of glycine conjugates), similar to some of the mild MCADD cases detected by MS/MS newborn screening, may be at risk for disease presentation.
Abstract:
BACKGROUND: Disclosure of authors' financial interests has been proposed as a strategy for protecting the integrity of the biomedical literature. We examined whether authors' financial interests were disclosed consistently in articles on coronary stents published in 2006. METHODOLOGY/PRINCIPAL FINDINGS: We searched PubMed for English-language articles published in 2006 that provided evidence or guidance regarding the use of coronary artery stents. We recorded article characteristics, including information about authors' financial disclosures. The main outcome measures were the prevalence, nature, and consistency of financial disclosures. There were 746 articles, 2985 authors, and 135 journals in the database. Eighty-three percent of the articles did not contain disclosure statements for any author (including declarations of no interests). Only 6% of authors had an article with a disclosure statement. In comparisons between articles by the same author, the types of disagreement were as follows: no disclosure statements vs declarations of no interests (64%); specific disclosures vs no disclosure statements (34%); and specific disclosures vs declarations of no interests (2%). Among the 75 authors who disclosed at least 1 relationship with an organization, there were 2 cases (3%) in which the organization was disclosed in every article the author wrote. CONCLUSIONS/SIGNIFICANCE: In the rare instances when financial interests were disclosed, they were not disclosed consistently, suggesting that there are problems with transparency in an area of the literature that has important implications for patient care. Our findings suggest that the inconsistencies we observed are due to both the policies of journals and the behavior of some authors.
Abstract:
Amnesia typically results from trauma to the medial temporal regions that coordinate activation among the disparate areas of cortex that represent the information making up autobiographical memories. We proposed that amnesia should also result from damage to those cortical areas themselves, particularly regions that subserve long-term visual memory [Rubin, D. C., & Greenberg, D. L. (1998). Visual memory-deficit amnesia: A distinct amnesic presentation and etiology. Proceedings of the National Academy of Sciences of the USA, 95, 5413-5416]. We previously found 11 such cases in the literature, and all 11 had amnesia. We now present a detailed investigation of one of these patients. M.S. suffers from long-term visual memory loss along with some semantic deficits; he also manifests a severe retrograde amnesia and a moderate anterograde amnesia. The presentation of his amnesia differs from that of the typical medial-temporal or lateral-temporal amnesic; we suggest that his visual deficits may be contributing to his autobiographical amnesia.
Abstract:
BACKGROUND: Previous mathematical models for hepatic and tissue one-carbon metabolism have been combined and extended to include a blood plasma compartment. We use this model to study how the concentrations of metabolites that can be measured in the plasma are related to their respective intracellular concentrations. METHODS: The model consists of a set of ordinary differential equations, one for each metabolite in each compartment, and kinetic equations for metabolism and for transport between compartments. The model was validated by comparison to a variety of experimental data, such as the methionine load test and variation in folate intake. We further extended this model by introducing random and systematic variation in enzyme activity. OUTCOMES AND CONCLUSIONS: A database of 10,000 virtual individuals was generated, each with a quantitatively different one-carbon metabolism. Our population has distributions of folate and homocysteine in the plasma and tissues that are similar to those found in the NHANES data. The model reproduces many other sets of clinical data. We show that tissue and plasma folate are highly correlated, but liver and plasma folate much less so. Oxidative stress increases the plasma S-adenosylmethionine/S-adenosylhomocysteine (SAM/SAH) ratio. We show that many relationships among variables are nonlinear, and in many cases we provide explanations. Sampling of subpopulations produces dramatically different apparent associations among variables. The model can be used to simulate populations with polymorphisms in genes for folate metabolism and variations in dietary input.
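A much-simplified sketch of the modelling style described above, with one ordinary differential equation per metabolite per compartment and transport between compartments; the two compartments, the single metabolite, and all rate constants below are invented for illustration and are not the published model's values:

from scipy.integrate import solve_ivp

# Toy two-compartment model for a single metabolite: liver concentration L and
# plasma concentration P. All rate constants are illustrative placeholders.
k_in, k_met = 1.0, 0.4    # dietary input into liver; intracellular consumption (1/h)
k_out, k_up = 0.3, 0.2    # export liver -> plasma; uptake plasma -> liver (1/h)
k_cl = 0.1                # clearance from plasma (1/h)

def rhs(t, y):
    L, P = y
    dL = k_in - k_met * L - k_out * L + k_up * P
    dP = k_out * L - k_up * P - k_cl * P
    return [dL, dP]

sol = solve_ivp(rhs, (0.0, 200.0), y0=[1.0, 0.5])
print("steady state  liver=%.2f  plasma=%.2f" % tuple(sol.y[:, -1]))

Virtual individuals of the kind described in the abstract could then be mimicked by re-drawing the rate constants (enzyme activities) from random distributions and re-solving the system.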
Abstract:
There is a significant body of literature on the definition of species flocks, but much less on practical means of appraising them. Here we apply the five criteria of Eastman and McCune for detecting species flocks to four taxonomic components of the benthic fauna of the Antarctic shelf: teleost fishes, crinoids (feather stars), echinoids (sea urchins) and crustacean arthropods. Practical limitations led us to prioritize the three historical criteria (endemicity, monophyly, species richness) over the two ecological ones (ecological diversity and habitat dominance). We propose a new protocol which includes an iterative fine-tuning of the monophyly and endemicity criteria in order to discover unsuspected flocks. As a result, nine "full" species flocks (fulfilling all five criteria) are briefly described. Eight other flocks fit the three historical criteria but need to be further investigated from the ecological point of view (here called "core flocks"). The approach also shows that some candidate taxonomic components are not species flocks at all. The present study contradicts the paradigm that marine species flocks are rare. The hypothesis that the Antarctic shelf acts as a species flock generator is supported, and the approach indicates paths for further ecological studies and may serve as a starting point for investigating the processes leading to flock-like patterning of biodiversity. © 2013 Lecointre et al.
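A schematic sketch of screening candidate clades against the three historical criteria; the threshold values, field names, and the simple dictionary representation are illustrative assumptions, not the protocol's actual settings:

def passes_historical_criteria(clade, min_species=10, min_endemicity=0.9, min_clade_support=0.95):
    # Species richness, endemicity, and monophyly: the three historical criteria
    # screened before any ecological assessment.
    return (clade["n_species"] >= min_species
            and clade["endemic_fraction"] >= min_endemicity
            and clade["clade_support"] >= min_clade_support)

def core_flocks(candidate_clades, **thresholds):
    # "Core flocks" in the abstract's sense: clades satisfying the historical criteria
    # and awaiting ecological assessment. The iterative fine-tuning amounts to
    # re-running this screen with adjusted endemicity and monophyly thresholds.
    return [c["name"] for c in candidate_clades if passes_historical_criteria(c, **thresholds)]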
Abstract:
For Part I, see ibid., vol. 3, p. 195 (1987). The authors have shown that the resolution of a confocal scanning microscope can be improved by recording the full image at each scanning point and then inverting the data. Those analyses were restricted to the case of coherent illumination. Here they investigate, along similar lines, the incoherent case, which applies to fluorescence microscopy. They study the one-dimensional and two-dimensional square-pupil problems and prove, by means of numerical computations of the singular value spectrum and of the impulse response function, that for a signal-to-noise ratio of, say, 10%, it is possible to obtain an improvement of approximately 60% in resolution with respect to the conventional incoherent-light confocal microscope. This represents a working bandwidth of 3.5 times the Rayleigh limit.
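In schematic form (our notation, not necessarily the authors'), the incoherent data recorded at scan position x_s and detector coordinate x_d can be written as

\[
g(x_{s}, x_{d}) \;=\; \int \lvert h_{\mathrm{ill}}(x' - x_{s})\rvert^{2}\,\lvert h_{\mathrm{det}}(x_{d} - x')\rvert^{2}\, f(x')\, dx' \;+\; n(x_{s}, x_{d}),
\]

where f is the fluorescent object, the squared moduli are the incoherent illumination and detection point-spread functions, and n is noise. Recording the full image at each scan point turns reconstruction into the inversion of this linear operator, and the singular-value analysis mentioned above amounts to retaining the singular components that stay above the noise level.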
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with makespan at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998.
Abstract:
Parallel computing is now widely used in numerical simulation, particularly for application codes based on finite difference and finite element methods. A popular and successful technique for parallelizing such codes on large distributed-memory systems is to partition the mesh into sub-domains that are then allocated to processors. The code then executes in parallel, using the SPMD methodology, with message passing for inter-processor interactions. In order to improve the parallel efficiency of an imbalanced structured-mesh CFD code, a new dynamic load balancing (DLB) strategy has been developed in which the processor partition range limits along just one of the partitioned dimensions are allowed to be non-coincidental rather than coincidental. This ‘local’ change of partition limits allows greater flexibility in obtaining a balanced load distribution, as the workload increase or decrease on a processor is no longer restricted by a ‘global’ (coincidental) limit change. The automatic implementation of this generic DLB strategy within an existing parallel code is presented in this chapter, along with some preliminary results.
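A schematic sketch of the contrast between coincidental and non-coincidental limits for a 2D structured mesh split among P processors along one dimension; the prefix-sum balancing rule and all names are illustrative assumptions, not the chapter's actual code:

import numpy as np

def coincidental_limits(work, P):
    # One set of j-limits shared by every row strip: a single global split of
    # the second mesh dimension, balanced on the column workload totals.
    col_load = np.cumsum(work.sum(axis=0))
    targets = np.linspace(0.0, col_load[-1], P + 1)[1:-1]
    return np.searchsorted(col_load, targets)

def non_coincidental_limits(work, P, n_strips):
    # Each strip of rows gets its own j-limits, so the partition boundaries can
    # shift locally wherever the workload is heavier ('local' limit changes).
    return [coincidental_limits(s, P) for s in np.array_split(work, n_strips, axis=0)]

# Hypothetical per-cell workload on a 16x32 mesh, heavier towards one side.
work = np.ones((16, 32)) + np.linspace(0.0, 2.0, 32)
print(coincidental_limits(work, P=4))
print(non_coincidental_limits(work, P=4, n_strips=2))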
Abstract:
This paper presents a framework for Historical Case-Based Reasoning (HCBR) which allows the expression of both relative and absolute temporal knowledge, representing case histories in the real world. The formalism is founded on a general temporal theory that accommodates both points and intervals as primitive time elements. A case history is formally defined as a collection of (time-independent) elemental cases, together with its corresponding temporal reference. Case history matching is two-fold, i.e., two similarity values need to be computed: the non-temporal similarity degree and the temporal similarity degree. On the one hand, based on elemental case matching, the non-temporal similarity degree between case histories is defined by computing the unions and intersections of the elemental cases involved. On the other hand, by means of the graphical representation of temporal references, the temporal similarity degree in case history matching is transformed into a conventional graph similarity measurement.
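A minimal sketch of a union/intersection-based non-temporal similarity of the kind described above, assuming each case history is represented simply as a set of hashable elemental cases; the actual HCBR measure may weight matches differently:

def non_temporal_similarity(history_a, history_b):
    # Size of the intersection over the size of the union of the two sets of
    # elemental cases (a Jaccard-style reading of the definition above).
    a, b = set(history_a), set(history_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

print(non_temporal_similarity({"fever", "rash", "cough"}, {"fever", "cough", "headache"}))  # 0.5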
Abstract:
The traditional approach to dealing with cases from Multiple Case Bases is to map them to one central case base that is used for knowledge extraction and problem solving. Accessing Multiple Case Bases should not require a change to their data structure. This paper presents an investigation into applying Case-Based Reasoning to Multiple Heterogeneous Case Bases. A case study is presented to illustrate and evaluate the approach.
Abstract:
In this paper, we address the use of CBR in collaboration with numerical engineering models. This collaborative combination has particular application in engineering domains where numerical models are used. We term this domain “Case Based Engineering” (CBE), and present the general architecture of a CBE system. We define and discuss the general characteristics of CBE and the special problems which arise. These are: the handling of engineering constraints of both continuous and nominal kinds; interpolation over both continuous and nominal variables; and conformability for interpolation. In order to illustrate the utility of the proposed method, and to provide practical examples of the general theory, the paper describes a practical application of the CBE architecture, known as CBE-CONVEYOR, which has been implemented by the authors. Pneumatic conveying is an important transportation technology in the solid bulk conveying industry. One of the major industry concerns is the attrition of powders and granules during pneumatic conveying. To minimize the attrition of particles during pneumatic conveying, engineers want to know what design parameters they should use in building a conveyor system. To do this, engineers often run simulations in a repetitive manner to find appropriate input parameters. CBE-CONVEYOR is shown to speed up conventional methods of searching for solutions, and to solve directly problems that would otherwise require considerable intervention from the engineer.
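A minimal sketch of case retrieval over mixed continuous and nominal design attributes, in the spirit of the interpolation issues mentioned above; the attribute names, value ranges, and equal weighting are hypothetical and are not CBE-CONVEYOR's actual similarity measure:

def case_similarity(query, case, ranges, nominal):
    # Continuous attributes contribute a range-normalised closeness, nominal
    # attributes an exact-match score; the average is the global similarity.
    total = 0.0
    for attr, q in query.items():
        if attr in nominal:
            total += 1.0 if q == case[attr] else 0.0
        else:
            lo, hi = ranges[attr]
            total += 1.0 - abs(q - case[attr]) / (hi - lo)
    return total / len(query)

# Hypothetical conveyor-design query matched against one stored case.
query = {"pipe_diameter_mm": 80.0, "air_velocity_ms": 22.0, "material": "granular"}
case = {"pipe_diameter_mm": 100.0, "air_velocity_ms": 20.0, "material": "granular"}
print(case_similarity(query, case,
                      ranges={"pipe_diameter_mm": (50.0, 200.0), "air_velocity_ms": (5.0, 40.0)},
                      nominal={"material"}))   # about 0.94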