51 results for Lot sizing
Abstract:
Current high temperature superconducting (HTS) wires exhibit high current densities, enabling their use in electrical rotating machinery. The possibility of designing high power density superconducting motors operating at reasonable temperatures opens up new applications in mobile systems in which size and weight are key design parameters. All-electric aircraft therefore represent a promising application for HTS motors. The design of a system as complex as an aircraft is a multi-variable optimization that requires computer models and advanced design procedures. This paper presents a specific sizing model of superconducting propulsion motors to be used in aircraft design. The model also takes the cooling system into account. The requirements for this application are presented in terms of power and dynamics, together with a load profile corresponding to a typical mission. We discuss the design implications of using a superconducting motor on an aircraft, the integration of the electrical propulsion in the aircraft, and the scaling laws derived from physics-based modeling of HTS motors.
Abstract:
Background: Identification of the structural domains of proteins is important for our understanding of the organizational principles and mechanisms of protein folding, and for insights into protein function and evolution. Algorithmic methods developed so far for dissecting proteins of known structure into domains are based on an examination of multiple geometrical, physical and topological features. Successful as many of these approaches are, they employ a lot of heuristics, and it is not clear whether they illuminate any deep underlying principles of protein domain organization. Other well-performing domain dissection methods rely on comparative sequence analysis. These methods are applicable to sequences of known and unknown structure alike, and their success highlights a fundamental principle of protein modularity, but it does not directly improve our understanding of protein spatial structure.
Abstract:
It is convenient and effective to solve nonlinear problems with a model that has a linear-in-the-parameters (LITP) structure. However, the nonlinear parameters (e.g. the width of a Gaussian function) of each model term need to be pre-determined, either from expert experience or through exhaustive search. An alternative approach is to optimize them with a gradient-based technique (e.g. Newton's method). Unfortunately, all of these methods still require a lot of computation. Recently, the extreme learning machine (ELM) has shown its advantages in terms of fast learning from data, but the sparsity of the constructed model cannot be guaranteed. This paper proposes a novel algorithm for automatic construction of a nonlinear system model based on the extreme learning machine. This is achieved by effectively integrating the ELM and leave-one-out (LOO) cross validation with our two-stage stepwise construction procedure [1]. The main objective is to improve the compactness and generalization capability of the model constructed by the ELM method. Numerical analysis shows that the proposed algorithm involves only about half the computation of the orthogonal least squares (OLS) based method. Simulation examples are included to confirm the efficacy and superiority of the proposed technique.
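Two of the ingredients this abstract combines — an ELM with randomly assigned hidden-layer parameters and least-squares output weights, and a LOO error obtained in closed form rather than by n refits — can be sketched in a few lines. The network size, activation and test function below are illustrative assumptions; this is not the paper's two-stage stepwise procedure:

```python
import numpy as np

def elm_fit_with_loo(X, y, n_hidden=40, seed=0):
    """Basic ELM: random sigmoid hidden layer, least-squares output weights.
    Returns training predictions and the closed-form LOO (PRESS) mean
    squared error, computed from the hat-matrix diagonal."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # hidden-layer output matrix
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights by least squares
    y_hat = H @ beta
    # Hat-matrix diagonal h_ii = H_i (H^T H)^+ H_i^T gives the LOO residuals
    # e_i / (1 - h_ii) without refitting the model n times.
    G = np.linalg.pinv(H.T @ H)
    h = np.einsum('ij,jk,ik->i', H, G, H)
    loo_mse = np.mean(((y - y_hat) / (1.0 - h)) ** 2)
    return y_hat, loo_mse

# Toy regression problem: learn sin(x) from 100 samples
X = np.linspace(-3, 3, 100).reshape(-1, 1)
y = np.sin(X).ravel()
y_hat, loo_mse = elm_fit_with_loo(X, y)
```

The LOO score is what a stepwise construction procedure would monitor when deciding whether adding or deleting a hidden unit improves generalization rather than just training fit.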
Abstract:
The quick, easy way to master all the statistics you'll ever need. The bad news first: if you want a psychology degree, you'll need to know statistics. Now for the good news: Psychology Statistics For Dummies. Featuring jargon-free explanations, step-by-step instructions and dozens of real-life examples, Psychology Statistics For Dummies makes the knotty world of statistics a lot less baffling. Rather than padding the text with concepts and procedures irrelevant to the task, the authors focus only on the statistics psychology students need to know. As an alternative to typical, lead-heavy statistics texts or supplements to assigned course reading, this is one book psychology students won't want to be without.
- Ease into statistics – start out with an introduction to how statistics are used by psychologists, including the types of variables they use and how they measure them
- Get your feet wet – quickly learn the basics of descriptive statistics, such as central tendency and measures of dispersion, along with common ways of graphically depicting information
- Meet your new best friend – learn the ins and outs of SPSS, the most popular statistics software package among psychology students, including how to input, manipulate and analyse data
- Analyse this – get up to speed on statistical analysis core concepts, such as probability and inference, hypothesis testing, distributions, Z-scores and effect sizes
- Correlate that – get the lowdown on common procedures for defining relationships between variables, including linear regressions, associations between categorical data and more
- Analyse by inference – master key methods in inferential statistics, including techniques for analysing independent groups designs and repeated-measures research designs
Open the book and find:
- Ways to describe statistical data
- How to use SPSS statistical software
- Probability theory and statistical inference
- Descriptive statistics basics
- How to test hypotheses
- Correlations and other relationships between variables
- Core concepts in statistical analysis for psychology
- Analysing research designs
Learn to:
- Use SPSS to analyse data
- Master statistical methods and procedures using psychology-based explanations and examples
- Create better reports
- Identify key concepts and pass your course
Abstract:
Background: Elderly care systems have undergone a lot of changes in many European countries, including Finland. Most notably, the number of private for-profit firms has increased. Previous studies suggest that employee well-being and the quality of care might differ according to the ownership type.
Abstract:
Initial sizing procedures for aircraft stiffened panels that include the influence of welding fabrication residual process effects are missing. Herein, experimental and Finite Element analyses are coupled to generate knowledge to formulate an accurate and computationally efficient sizing procedure which will enable designers to routinely consider panel fabrication, via welding, accounting for the complex distortions and stresses induced by this manufacturing process. Validating experimental results demonstrate the need to consider welding induced material property degradation, residual stresses and distortions, as these can reduce static strength performance. However, results from fuselage and wing trade-studies, using the validated sizing procedure, establish that these potential reductions in strength performance may be overcome through local geometric tailoring during initial sizing, negating any weight penalty for the majority of design scenarios.
Abstract:
The collisionally excited transient inversion scheme is shown to produce exceptionally high gain coefficients and gain-length products. Data are presented for the Ne-like titanium and germanium and Ni-like silver X-ray lasers (XRLs) pumped using a combination of nanosecond and picosecond duration laser pulses. This method leads to a dramatic reduction of the required pump energy and makes down-sizing of XRLs possible, an important prerequisite if they are to become commonly used tools in the long term.
Abstract:
In a dynamic reordering superscalar processor, the front-end fetches instructions and places them in the issue queue. Instructions are then issued by the back-end execution core. Until recently, the front-end was designed to maximize performance without considering energy consumption. The front-end fetches instructions as fast as it can until it is stalled by a filled issue queue or some other blocking structure. This approach wastes energy: (i) speculative execution causes many wrong-path instructions to be fetched and executed, and (ii) the back-end execution rate is usually less than its peak rate, but front-end structures are dimensioned to sustain peak performance. Dynamically reducing the front-end instruction rate and the active size of front-end structures (e.g. the issue queue) is a necessary performance-energy trade-off. Techniques proposed in the literature attack only one of these effects.
In previous work, we proposed Speculative Instruction Window Weighting (SIWW) [21], a fetch gating technique that addresses both fetch gating and dynamic sizing of the instruction issue queue. SIWW computes a global weight over the set of inflight instructions. This weight depends on the number and types of inflight instructions (non-branches, high-confidence or low-confidence branches, ...). The front-end instruction rate can be continuously adapted based on this weight. This paper extends the analysis of SIWW performed in previous work. It shows that SIWW performs better than previously proposed fetch gating techniques and that it allows the size of the active instruction queue to be adapted dynamically.
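The weighting idea can be caricatured in a few lines: each inflight instruction contributes a per-type weight, and fetch is gated when the sum crosses a threshold. The weights and threshold below are invented for illustration; the paper derives and tunes its own:

```python
# Illustrative per-type contributions: low-confidence branches count for much
# more than ordinary instructions, so a window full of risky speculation
# reaches the gating threshold with fewer inflight instructions.
WEIGHTS = {
    "non_branch": 1,
    "high_conf_branch": 2,
    "low_conf_branch": 8,
}

def should_gate_fetch(inflight, threshold=64):
    """Gate the front-end this cycle when the global weight of the inflight
    instruction window reaches the threshold (hypothetical SIWW-style rule)."""
    weight = sum(WEIGHTS[kind] for kind in inflight)
    return weight >= threshold
```

A window of 32 ordinary instructions stays below the threshold, while 8 low-confidence branches alone reach it, so fetch throttles exactly when further speculation is least likely to pay off.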
Abstract:
In this paper, a novel approach to automatically sub-divide a complex geometry and apply an efficient mesh is presented. Following the identification and removal of thin-sheet regions from an arbitrary solid using the thick/thin decomposition approach developed by Robinson et al. [1], the technique here employs shape metrics generated using local sizing measures to identify long-slender regions within the thick body. A series of algorithms automatically partition the thick region into a non-manifold assembly of long-slender and complex sub-regions. A structured anisotropic mesh is applied to the thin-sheet and long-slender bodies, and the remaining complex bodies are filled with unstructured isotropic tetrahedra. The resulting semi-structured mesh possesses significantly fewer degrees of freedom than the equivalent unstructured mesh, demonstrating the effectiveness of the approach. The accuracy of the efficient meshes generated for a complex geometry is verified via a study that compares the results of a modal analysis with the results of an equivalent analysis on a dense tetrahedral mesh.
Abstract:
Property lawyers are generally viewed as a serious lot, not prone to feverish bursts of excitement as we seek comfort and solace in established legal rules and precepts. In the same way, property law disputes tend to have a fairly low profile and fail to capture the public imagination in the same way as, for example, those involving criminal or human rights law. Such apparent indifference might seem a little strange, given the centrality of property in everyday human life and the significance which legal systems and individuals attach to property rights. However, there is one issue which always inflames passions amongst lawyers and non-lawyers alike: the acquisition of land through the doctrine of adverse possession, often described as ‘squatter’s rights’. No property-related topic is likely to light up a radio show phone-in switchboard quite like squatting.
Abstract:
Since the UN report by the Brundtland Committee, sustainability in the built environment has mainly been seen from a technical focus on single buildings or products. With the energy efficiency approaching 100%, fossil resources depleting and a considerable part of the world still in need of better prosperity, the playing field of a technical focus has become very limited. It will most probably not lead to the sustainable development needed to avoid irreversible effects on climate, energy provision and, not least, society.
Cities are complex structures of independently functioning elements, all of which are nevertheless connected to different forms of infrastructure, which supply the necessary sources or handle the release of waste material. With the current ambitions regarding carbon- or energy-neutrality, retreating again to the scale of a single building is likely to fail. Within an urban context a single building cannot become fully resource-independent, and, from our viewpoint, need not. Cities should be considered as organisms that have the ability to intelligently exchange sources and waste flows. Especially in terms of energy, it is clear that the present situation in most cities is undesirable: there is simultaneous demand for heat and cold, and in summer a lot of excess energy is lost which then needs to be produced again in winter. The solution is a system that intelligently exchanges and stores essential sources, e.g. energy, and that optimally utilises waste flows.
This new approach will be discussed and exemplified. The Rotterdam Energy Approach and Planning (REAP) will be illustrated as a means for urban planning, whereas Swarm Planning will be introduced as another nature-based principle for swift changes towards sustainability.
Abstract:
Wireless sensor node platforms are very diverse and very constrained, particularly in power consumption. When choosing or sizing a platform for a given application, it is necessary to be able to evaluate the impact of those choices at an early design stage. Applied to the computing platform implemented on the sensor node, this requires a good understanding of the workload it must perform. This workload, however, is highly application-dependent: it depends on the data sampling frequency together with application-specific data processing and management. It is thus necessary to have a model that can represent the workload of applications with various needs and characteristics. In this paper, we propose a workload model for wireless sensor node computing platforms. The model is based on a synthetic application that models the different computational tasks the computing platform must perform to process sensor data. It makes it possible to model the workload of a variety of applications by tuning the data sampling rate and processing. A case study is performed by modeling different applications and showing how the model can be used for workload characterization. © 2011 IEEE.
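A toy version of such a synthetic workload model treats each computational task as an invocation rate paired with a cost per invocation, with the sampling rate as the main tuning knob. The task mix and numbers below are invented for illustration, not the paper's model:

```python
def required_mips(tasks):
    """tasks: iterable of (rate_hz, cycles_per_invocation) pairs.
    Returns the average processing load the node's CPU must sustain,
    in millions of cycles per second."""
    return sum(rate * cycles for rate, cycles in tasks) / 1e6

# Hypothetical sensing application: sample a sensor at 100 Hz, filter each
# sample, and aggregate/transmit once per second.
tasks = [
    (100.0, 5_000),    # per-sample acquisition + filtering
    (1.0, 2_000_000),  # periodic aggregation and radio packet handling
]
load = required_mips(tasks)  # -> 2.5
```

Sweeping the sampling rate (or swapping the processing tasks) then shows directly how the required platform performance scales with the application, which is the kind of early-stage evaluation the abstract describes.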
Abstract:
Frustration – the inability to simultaneously satisfy all interactions – occurs in a wide range of systems, including neural networks, water ice and magnetic systems. An example of the latter is the so-called spin ice in pyrochlore materials [1], which have attracted a lot of interest, not least due to the emergence of magnetic monopole defects when the ‘ice rules’ governing the local ordering break down [2]. However, it is not possible to directly measure the frustrated property – the direction of the magnetic moments – in such spin ice systems with current experimental techniques. This problem can be solved by instead studying artificial spin-ice systems, in which the molecular magnetic moments are replaced by nanoscale ferromagnetic islands [3-8]. Two different arrangements of the ferromagnetic islands have been shown to exhibit spin ice behaviour: a square lattice maintaining four moments at each vertex [3,8] and the Kagome lattice, which has only three moments per vertex but equivalent interactions between them [4-7]. Magnetic monopole defects have been observed in both types of lattice [7-8]. One of the challenges when studying these artificial spin-ice systems is that it is difficult to arrive at the fully demagnetised ground state [6-8].
Here we present a study of the switching behaviour of building blocks of the Kagome lattice influenced by the termination of the lattice. Ferromagnetic islands of nominal size 1000 nm by 100 nm were fabricated in five island blocks using electron-beam lithography and lift-off techniques of evaporated 18 nm Permalloy (Ni80Fe20) films. Each block consists of a central island with four arms terminated by a different number and placement of ‘injection pads’, see Figure 1. The islands are single domain and magnetised along their long axis. The structures were grown on a 50 nm thick electron transparent silicon nitride membrane to allow TEM observation, which was back-coated with a 5 nm film of Au to prevent charge build-up during the TEM experiments.
To study the switching behaviour, the sample was subjected to a magnetic field strong enough to magnetise all the blocks in one direction, see Figure 1. Each block obeys the Kagome lattice ‘ice-rules’ of “2-in, 1-out” or “1-in, 2-out” in this fully magnetised state. Fresnel mode Lorentz TEM images of the sample were then recorded as a magnetic field of increasing magnitude was applied in the opposite direction. While the Fresnel mode is normally used to image magnetic domain structures [9], for these types of samples it is possible to deduce the direction of the magnetisation from the Lorentz contrast [5]. All images were recorded at the same over-focus, judged to give good Lorentz contrast.
The magnetisation was found to switch at different magnitudes of the applied field for nominally identical blocks. However, trends could still be identified: all the blocks with any injection pads, regardless of placement and number, switched the direction of the magnetisation of their central island at significantly smaller magnitudes of the applied magnetic field than the blocks without injection pads. It can therefore be concluded that the addition of an injection pad lowers the energy barrier to switching the connected island, acting as a nucleation site for monopole defects. In these five island blocks the defects immediately propagate through to the other side, but in a larger lattice the monopoles could potentially become trapped at a vertex and observed [10].
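The ice-rule bookkeeping used above — each three-moment Kagome vertex must be “2-in, 1-out” or “1-in, 2-out” — reduces to a one-line check if moments are encoded as +1 (pointing into the vertex) and -1 (pointing out); a minimal sketch with an assumed encoding:

```python
def obeys_kagome_ice_rule(moments):
    """moments: the three vertex moments, +1 for 'in', -1 for 'out'.
    '2-in, 1-out' sums to +1 and '1-in, 2-out' sums to -1; the forbidden
    'all-in' and 'all-out' states (monopole defects) sum to +/-3."""
    return len(moments) == 3 and abs(sum(moments)) == 1
```

In a fully magnetised lattice every vertex passes this check; a monopole defect nucleating at an injection pad corresponds to a vertex where it fails.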
References
[1] M J Harris et al, Phys Rev Lett 79 (1997) p. 2554.
[2] C Castelnovo, R Moessner and S L Sondhi, Nature 451 (2008) p. 42.
[3] R F Wang et al, Nature 439 (2006) p. 303.
[4] M Tanaka et al, Phys Rev B 73 (2006) p. 052411.
[5] Y Qi, T Brintlinger and J Cumings, Phys Rev B 77 (2008) p. 094418.
[6] E Mengotti et al, Phys Rev B 78 (2008) p. 144402.
[7] S Ladak et al, Nature Phys 6 (2010) p. 359.
[8] C Phatak et al, Phys Rev B 83 (2011) p. 174431.
[9] J N Chapman, J Phys D 17 (1984) p. 623.
[10] The authors gratefully acknowledge funding from the EPSRC under grant number EP/D063329/1.
Abstract:
Phylogenetic analysis of the sequence of the H gene of 75 measles virus (MV) strains (32 published and 43 new sequences) was carried out. The lineage groups described from comparison of the nucleotide sequences encoding the C-terminal regions of the N protein of MV were the same as those derived from the H gene sequences in almost all cases. The databases document a number of distinct genotype switches that have occurred in Madrid (Spain). Well-documented is the complete replacement of lineage group C2, the common European genotype at that time, with that of group D3 around the autumn of 1993. No further isolations of group C2 took place in Madrid after this time. The rate of mutation of the H gene sequences of MV genotype D3 circulating in Madrid from 1993 to 1996 was very low (5 × 10⁻⁴ per annum for a given nucleotide position). This is an order of magnitude lower than the rates of mutation observed in the HN genes of human influenza A viruses. The ratio of expressed over silent mutations indicated that the divergence was not driven by immune selection in this gene. Variations in amino acid 117 of the H protein (F or L) may be related to the ability of some strains to haemagglutinate only in the presence of salt. Adaptation of MV to different primate cell types was associated with very small numbers of mutations in the H gene. The changes could not be predicted when virus previously grown in human B cell lines was adapted to monkey Vero cells. In contrast, rodent brain-adapted viruses displayed a lot of amino acid sequence variation from normal MV strains. There was no convincing evidence for recombination between MV genotypes.
Abstract:
The potential of laser-based particle accelerators to solve the sizing and cost issues arising with conventional proton therapy has generated great interest in the understanding and development of laser ion acceleration, and in investigating the radiobiological effects induced by laser-accelerated ions. Laser-driven ions are produced in bursts of ultra-short duration, resulting in ultra-high dose rates, and an investigation was carried out at Queen's University Belfast to explore this virtually unexplored regime of cell radiobiology. This employed the TARANIS terawatt laser producing protons in the MeV range for proton irradiation, with dose rates exceeding 10 Gy/s in a single exposure. A clonogenic assay was implemented to analyse the biological effect of proton irradiation on V79 cells, which, when compared to data obtained with the same cell line irradiated with conventionally accelerated protons, showed no significant difference. A relative biological effectiveness (RBE) of 1.4 ± 0.2 at 10% survival fraction was estimated from a comparison with a 225 kVp X-ray source. © 2013 SPIE.
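The RBE figure quoted above is a dose ratio at matched biological effect; the function below shows the definition, with made-up doses chosen only so that the ratio matches the abstract's 1.4:

```python
def rbe(reference_dose_gy, test_dose_gy):
    """Relative biological effectiveness: the ratio of the reference
    (here 225 kVp X-ray) dose to the test (laser-accelerated proton)
    dose that produces the same effect, e.g. 10% survival fraction."""
    return reference_dose_gy / test_dose_gy

# Hypothetical doses giving 10% survival: 7.0 Gy of X-rays vs 5.0 Gy of protons
value = rbe(7.0, 5.0)  # -> 1.4
```

An RBE above 1 means the protons need a smaller physical dose than the X-rays to achieve the same cell kill, which is why the quantity matters for therapy dosimetry.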