929 results for internet-based application components


Relevance:

40.00%

Publisher:

Abstract:

In this paper, the influence of the most significant track parameters on corrugation is examined. After this parametric study, the track parameters are optimized to minimize the growth of undulatory wear. Finally, the influence of dispersion in the track and contact parameters on corrugation growth is studied. A method is developed to obtain an optimal set of track parameters that minimizes corrugation growth and remains optimal despite dispersion of the track parameters and wheel-rail contact uncertainties. This work is based on the computer application RACING (RAil Corrugation INitiation and Growth), which has been developed by the authors to predict rail corrugation features.
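
A minimal Python sketch of the robust-optimization idea, minimizing the worst-case growth over sampled parameter dispersions; the quadratic growth model, parameter names, and all numbers are placeholders, not the RACING model:

```python
import numpy as np
from scipy.optimize import minimize

def corrugation_growth(track_params, nominal):
    """Placeholder corrugation-growth-rate model (assumed quadratic form)."""
    return np.sum((track_params - nominal) ** 2)

rng = np.random.default_rng(0)
dispersions = rng.normal(1.0, 0.05, size=(50, 3))  # sampled parameter scatter

def worst_case_growth(track_params):
    # Optimize against the worst growth over the sampled dispersions so the
    # solution stays near-optimal despite track/contact uncertainty.
    return max(corrugation_growth(track_params * d, np.ones(3)) for d in dispersions)

result = minimize(worst_case_growth, x0=np.ones(3), method="Nelder-Mead")
print(result.x)  # robustly optimal (placeholder) track parameters
```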

Relevance:

40.00%

Publisher:

Abstract:

Cloud-based infrastructure essentially comprises two offerings: cloud-based compute and cloud-based storage. These are perhaps best typified for most people by the two main components of the Amazon Web Services (AWS) public cloud offering, the Elastic Compute Cloud (EC2) and the Simple Storage Service (S3), though, of course, there are many other related services offered by Amazon and by many other providers of similar public cloud infrastructure across the Internet.
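
As a minimal illustration of these two building blocks, a hedged boto3 sketch; the bucket name, file names, and AMI ID are placeholders:

```python
import boto3

# Store an object in S3 (bucket and key names are placeholders).
s3 = boto3.client("s3")
s3.upload_file("results.csv", "my-example-bucket", "data/results.csv")

# Launch a single compute instance on EC2 (placeholder AMI ID).
ec2 = boto3.client("ec2")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```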

Relevance:

40.00%

Publisher:

Abstract:

Myerscough College, a land-based further and higher education college in the north west, is one of the approximately 160 further education colleges in England to take additional connections to Jisc’s Janet network. Ian Brown, director of IT and MIS at the college, talks to us about why they’ve taken an extra four connections.

Relevance:

40.00%

Publisher:

Abstract:

Estimates of dolphin school sizes made by observers and crew members aboard tuna seiners, or by observers on ship or aerial surveys, are important components of population estimates for the dolphins involved in the yellowfin tuna fishery in the eastern Pacific. Differences between past estimates made from tuna seiners and those from research ships and aircraft were noted by Brazier (1978). To compare various methods of estimating dolphin school sizes, a research cruise was undertaken with the following major objectives: 1) compare estimates made by observers aboard a tuna seiner and in the ship's helicopter, from aerial photographs, and from counts made at the backdown channel; 2) compare the estimates of observers who are told the count of the school size after making their estimates with those of an observer who is not aware of the count, to determine whether observers can learn to estimate more accurately; and 3) obtain movie and still photographs of dolphin schools of known size at various stages of chase, capture, and release, to be used for observer training. The secondary objectives of the cruise were to: 1) obtain life history specimens and data from any dolphins killed incidental to purse seining, to be analyzed by the U.S. National Marine Fisheries Service (NMFS); 2) record the evasion tactics of dolphin schools by observing them from the helicopter while the seiner approached; 3) examine alternative methods for estimating the distance and bearing of schools when they are first sighted; and 4) collect the Commission's standard cetacean sighting, set log, and daily activity data, as well as expendable bathythermograph data. (PDF contains 31 pages.)
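
An illustrative sketch only (all numbers invented) of the first major objective, comparing shipboard observer estimates against counts taken from aerial photographs:

```python
import numpy as np

photo_counts = np.array([120, 85, 240, 60, 150])   # "true" sizes from photographs
observer_est = np.array([100, 90, 200, 75, 160])   # shipboard estimates

ratio = observer_est / photo_counts
print(f"mean relative bias: {ratio.mean() - 1:+.1%}")
print(f"CV of the ratio:    {ratio.std(ddof=1) / ratio.mean():.2f}")
```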

Relevance:

40.00%

Publisher:

Abstract:

Light microscopy has been one of the most common tools in biological research because of its high resolution and the non-invasive nature of light. Due to its high sensitivity and specificity, fluorescence is one of the most important readout modes of light microscopy. This thesis presents two new fluorescence microscopic imaging techniques: fluorescence optofluidic microscopy and fluorescent Talbot microscopy. The designs of the two systems are fundamentally different from conventional microscopy, which makes compact and portable devices possible. The components of the devices are suitable for mass production, making the microscopic imaging system more affordable for biological research and clinical diagnostics.

Fluorescence optofluidic microscopy (FOFM) is capable of imaging fluorescent samples in fluid media. The FOFM employs an array of Fresnel zone plates (FZP) to generate an array of focused light spots within a microfluidic channel. As a sample flows through the channel and across the array of focused light spots, a filter-coated CMOS sensor collects the fluorescence emissions. The collected data can then be processed to render a fluorescence microscopic image. The resolution, which is determined by the focused light spot size, is experimentally measured to be 0.65 μm.
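
A schematic numpy sketch of how such line-scan data could be assembled into an image; the stagger geometry and numbers are assumed, not the FOFM's actual design:

```python
import numpy as np

# Each focal spot is offset along the flow direction, so every spot sees the
# same sample line with a fixed time delay; shifting each per-spot trace by
# its delay and stacking the traces yields the image. A circular shift is
# used here purely for brevity.
def assemble_image(spot_traces, stagger_frames):
    """spot_traces: (n_spots, n_frames) fluorescence time series."""
    rows = [np.roll(trace, -i * stagger_frames) for i, trace in enumerate(spot_traces)]
    return np.stack(rows)

traces = np.random.rand(64, 2000)     # hypothetical 64-spot acquisition
image = assemble_image(traces, stagger_frames=5)
print(image.shape)                    # (64, 2000): spots x flow-axis pixels
```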

Fluorescence Talbot microscopy (FTM) is a fluorescence chip-scale microscopy technique that enables large field-of-view (FOV) and high-resolution imaging. The FTM method utilizes the Talbot effect to project a grid of focused excitation light spots onto the sample. The sample is placed on a filter-coated CMOS sensor chip. The fluorescence emissions associated with each focal spot are collected by the sensor chip and are composed into a sparsely sampled fluorescence image. By raster scanning the Talbot focal spot grid across the sample and collecting a sequence of sparse images, a filled-in high-resolution fluorescence image can be reconstructed. In contrast to a conventional microscope, collection efficiency, resolution, and FOV are not tied to each other in this technique. The FOV of FTM is directly scalable. Our FTM prototype has demonstrated a resolution of 1.2 μm and a collection efficiency equivalent to a conventional microscope objective with a 0.70 N.A. The FOV is 3.9 mm × 3.5 mm, which is 100 times larger than that of a 20X/0.40 N.A. conventional microscope objective. Due to its large FOV, high collection efficiency, compactness, and its potential for integration with other on-chip devices, FTM is suitable for diverse applications, such as point-of-care diagnostics, large-scale functional screens, and long-term automated imaging.
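
A schematic numpy sketch of the interleaving step that fills in the high-resolution image from the sparse raster-scanned frames; the grid size, step counts, and data are assumed, not the prototype's actual parameters:

```python
import numpy as np

n_steps = 10                           # raster positions per axis
grid = 64                              # focal spots per axis
sparse_stack = np.random.rand(n_steps, n_steps, grid, grid)  # hypothetical data

full = np.zeros((grid * n_steps, grid * n_steps))
for dy in range(n_steps):
    for dx in range(n_steps):
        # Spot (i, j) at scan offset (dy, dx) maps to pixel (i*n_steps+dy, j*n_steps+dx).
        full[dy::n_steps, dx::n_steps] = sparse_stack[dy, dx]
print(full.shape)  # (640, 640) filled-in image
```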

Relevance:

40.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems motivated by power systems are also explored. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix (describing marginal and conditional dependencies between brain regions, respectively) have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. Assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to recover the topology of the circuit from a limited number of samples. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may recover most of the circuit topology if the exact covariance matrix is well-conditioned, but it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to resting-state fMRI data from a number of healthy subjects.
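
A brief sketch of the sparse inverse-covariance step using scikit-learn's GraphicalLasso; the sample sizes and regularization weight are assumed, and this is the standard estimator, not the authors' modified algorithm:

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 30))   # 200 samples of 30 nodal signals (toy data)

model = GraphicalLasso(alpha=0.1).fit(X)
precision = model.precision_         # estimated sparse inverse covariance
# Nonzero off-diagonal entries are read as edges of the recovered topology.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-3, k=1))
print(len(edges), "edges recovered")
```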

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users on the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, one that takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
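
A toy discrete-time sketch of a single source with a queue-based price, showing the buffering effect such a model must capture; all constants are assumed:

```python
c = 1.0            # link capacity
k, w = 0.05, 0.5   # controller gain and the source's willingness-to-pay
x, q = 0.1, 0.0    # source rate and queue length
for _ in range(5000):
    price = q                                   # queueing-based price signal
    x = max(x + k * (w - x * price), 1e-6)      # primal rate update
    q = max(q + 0.05 * (x - c), 0.0)            # discretized queue dynamics
print(f"rate = {x:.3f}, queue = {q:.3f}")       # equilibrium: x = c, q = w/c
```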

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
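
As a hedged illustration of an SDP-style relaxation with inequality power balance ("power over-delivery"), a cvxpy sketch; the two-bus topology, admittance values, bounds, and loads are all invented:

```python
import numpy as np
import cvxpy as cp

# Hypothetical 2-bus radial example: a generator at bus 0 serves a load of
# 1.0 + 0.4j p.u. at bus 1 over a single line (all numbers assumed).
y = 1.0 - 5.0j                             # line series admittance
Y = np.array([[y, -y], [-y, y]])           # bus admittance matrix

W = cp.Variable((2, 2), hermitian=True)    # convex surrogate for v v^H
S = cp.diag(W @ Y.conj().T)                # bus injections S_k = (W Y^H)_{kk}

constraints = [
    W >> 0,                                # drop the nonconvex rank(W) = 1 condition
    cp.real(S[1]) <= -1.0,                 # load balance relaxed to inequality
    cp.imag(S[1]) <= -0.4,                 #   (power over-delivery)
    cp.real(cp.diag(W)) >= 0.9**2,         # voltage magnitude limits |v_k|^2
    cp.real(cp.diag(W)) <= 1.1**2,
]
prob = cp.Problem(cp.Minimize(cp.real(S[0])), constraints)
prob.solve()
print(prob.value, np.linalg.eigvalsh(W.value))  # one dominant eigenvalue => exact
```

For this radial example the relaxation is expected to be tight, consistent with the exactness result stated above.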

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.
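
In the same hedged spirit, a minimal single-line GNF-style sketch (loss coefficient, bounds, and cost all assumed) in which the nonconvex flow equation is relaxed to a convex inequality:

```python
import cvxpy as cp

f_in = cp.Variable(nonneg=True)   # flow injected at node A
f_out = cp.Variable(nonneg=True)  # flow received at node B

constraints = [
    # Nonconvex equality f_out = f_in - loss(f_in) relaxed to a convex region.
    f_out <= f_in - 0.05 * cp.square(f_in),
    f_in <= 2.0,                  # box constraint on the flow
    f_out >= 1.0,                 # demand at node B
]
prob = cp.Problem(cp.Minimize(cp.square(f_in)), constraints)  # convex injection cost
prob.solve()
print(f_in.value, f_out.value)    # at the optimum the relaxed inequality is tight
```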

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

40.00%

Publisher:

Abstract:

Degeneration of the outer retina usually causes blindness by destroying the photoreceptor cells. However, the ganglion cells on the middle and inner retinal layers, whose axons form the optic nerve, are often intact. Retinal implants, which can partially restore vision by electrical stimulation, have therefore become a focus of research. Although many groups worldwide have devoted substantial effort to building devices for retinal implants, current state-of-the-art technologies still lack a reliable packaging scheme for devices with the desirable high-density multi-channel features. Wireless flexible retinal implants have long been the ultimate goal of retinal prosthesis. In this dissertation, a reliable packaging scheme for wireless flexible parylene-based retinal implants has been developed. It not only provides stable electrical and mechanical connections to high-density multi-channel (1000+ channels on a 5 mm × 5 mm chip area) IC chips, but can also survive for more than 10 years in the corrosive fluid environment of the human body.

The device is based on a parylene-metal-parylene sandwich structure, in which the adhesion between the parylene layers and the embedded metals has been studied. Integration technology for high-density multi-channel IC chips has also been addressed and tested with both dummy and real 268-channel and 1024-channel retinal IC chips. In addition, different protection schemes have been applied to the IC chips and discrete components to maximize device lifetime; their effectiveness has been confirmed by accelerated and active lifetime soak tests in saline solution. Surgical mockups have been designed and successfully implanted in dog and pig eyes. Additionally, the electrodes used to stimulate the ganglion cells have been modified to lower the interface impedance and shaped to better fit the retina. Finally, all the developed technologies have been applied to the final device, which has a dual-metal-layer structure.
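
As a hedged illustration of how accelerated soak tests are commonly extrapolated to body-temperature lifetimes, an Arrhenius sketch; the activation energy and test numbers are assumed, not values from the dissertation:

```python
import math

EA_EV = 0.8          # activation energy in eV (assumed)
K_B = 8.617e-5       # Boltzmann constant, eV/K

def acceleration_factor(t_use_c: float, t_test_c: float) -> float:
    """Arrhenius acceleration factor between use and test temperatures."""
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp(EA_EV / K_B * (1.0 / t_use - 1.0 / t_test))

# Soaking at 87 °C vs. body temperature 37 °C:
af = acceleration_factor(37.0, 87.0)
days_tested = 60.0   # hypothetical survival in the accelerated bath
print(f"AF = {af:.1f}; equivalent lifetime ~ {days_tested * af / 365:.1f} years")
```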

Relevance:

40.00%

Publisher:

Abstract:

This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to modulated stationary excitation, in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as the product of a constant system matrix and a time-dependent matrix, the latter of which can be evaluated explicitly for most envelopes prevailing in engineering practice. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. Reliability analysis can then be performed based on the first two moments of the response so obtained.
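
As a hedged illustration of the stationary (unit-step-envelope) limit, a short scipy sketch computing the stationary covariance of a single oscillator from the Lyapunov equation; the oscillator parameters are assumed:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# SDOF oscillator driven by white noise of two-sided spectral density S0.
# State z = [x, xdot], with dz = A z dt + b dW.
wn, zeta, S0 = 2 * np.pi, 0.05, 1.0       # natural freq, damping, noise intensity
A = np.array([[0.0, 1.0],
              [-wn**2, -2 * zeta * wn]])
B = np.array([[0.0], [1.0]])

# Stationary covariance P solves A P + P A^T + 2*pi*S0 * B B^T = 0.
P = solve_continuous_lyapunov(A, -2 * np.pi * S0 * (B @ B.T))
print(P)  # P[0,0] should match the classical result pi*S0 / (2*zeta*wn**3)
```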

The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.

A set of explicit solutions for the response of simple linear structures subjected to modulated white-noise earthquake models with four different envelopes is presented as an illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil-layer response and the related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and some suggestions are given for the aseismic design of structures.

Relevance:

40.00%

Publisher:

Abstract:

The quasicontinuum (QC) method was introduced to coarse-grain crystalline atomic ensembles in order to bridge the scales from individual atoms to the micro- and mesoscales. Though many QC formulations have been proposed with varying characteristics and capabilities, a crucial cornerstone of all QC techniques is the concept of summation rules, which attempt to efficiently approximate the total Hamiltonian of a crystalline atomic ensemble by a weighted sum over a small subset of atoms. In this work we propose a novel, fully nonlocal, energy-based formulation of the QC method with support for legacy and new summation rules through a general energy-sampling scheme. Our formulation does not conceptually differentiate between atomistic and coarse-grained regions and thus allows for seamless bridging without domain-coupling interfaces. Within this structure, we introduce a new class of summation rules which leverage the affine kinematics of this QC formulation to integrate thermodynamic quantities of interest most accurately. By comparing this new class of summation rules to commonly employed rules through an analysis of energy and spurious-force errors, we find that the new rules produce no residual or spurious force artifacts in the large-element limit under arbitrary affine deformation, while allowing us to bridge seamlessly to full atomistics. We verify that the new summation rules exhibit significantly smaller force artifacts and energy approximation errors than all comparable previous summation rules through a comprehensive suite of examples with spatially non-uniform QC discretizations in two and three dimensions. Due to the unique structure of these summation rules, we also use the new formulation to study scenarios with large regions of free surface, a class of problems previously out of reach of the QC method. Lastly, we present the key components of a high-performance, distributed-memory realization of the new method, including a novel algorithm for supporting unparalleled levels of deformation. Overall, this new formulation and implementation allows us to efficiently perform simulations containing an unprecedented number of degrees of freedom with low approximation error.
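
A schematic Python sketch of the basic summation-rule idea, approximating the total energy by a weighted sum over sampling atoms; the pair potential, cutoff, and weights are placeholders, not the dissertation's new rules:

```python
import numpy as np

def site_energy(x, neighbors):
    """Toy pair-potential site energy (assumed Lennard-Jones-like form)."""
    r = np.linalg.norm(neighbors - x, axis=1)
    return 0.5 * np.sum(1.0 / r**12 - 2.0 / r**6)  # 0.5 avoids double counting

def sampled_energy(positions, sample_ids, weights, cutoff=2.5):
    """Approximate the total energy via the summation rule sum_i w_i E_i."""
    total = 0.0
    for i, w in zip(sample_ids, weights):
        d = np.linalg.norm(positions - positions[i], axis=1)
        neigh = positions[(d > 0) & (d < cutoff)]
        total += w * site_energy(positions[i], neigh)
    return total

pos = np.random.rand(1000, 3) * 10.0         # toy atomic ensemble
ids = np.arange(0, 1000, 50)                 # 20 sampling atoms
w = np.full(ids.size, 1000 / ids.size)       # weights summing to N
print(sampled_energy(pos, ids, w))
```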

Relevance:

40.00%

Publisher:

Abstract:

Guided by experience and by the theoretical development of hydrobiology, the main aim of water quality control can be considered to be the establishment of rates of self-purification in water bodies that are capable of maintaining communities in a state of dynamic balance without changing the integrity of the ecosystem. Hence, general approaches in elaborating methods for hydrobiological control are based on the following principles: (a) the balance of matter and energy in water bodies; (b) the integrity of the ecosystem structure and of its separate components at all levels. Ecosystem analysis makes it possible to reveal the whole totality of factors that determine the anthropogenic evolution of a water body, which is necessary for the study of long-term changes in water bodies. The principles of ecosystem analysis of water bodies, together with the creation of their mathematical models, are important because, in the future, as water-demanding production moves to closed water-supply cycles, changes in water bodies will arise mainly through the influence of 'diffuse' pollution (from the atmosphere, from transport, etc.).