918 results for Perfusion-weighted electroencephalography


Relevance:

10.00%

Publisher:

Abstract:

A model comprising several servers, each equipped with its own queue and with possibly different service speeds, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is such that it balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements, which are sent periodically to the central job scheduler. A model is developed for these measurements, and the result mentioned above is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
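As a minimal, self-contained illustration of the model (not the paper's adaptive algorithm), the sketch below treats each server as an M/M/1 queue under a probabilistic split of the generic Poisson stream and numerically searches for the split that minimizes the job-averaged response time. All rates, the number of servers, and the use of scipy's general-purpose solver are assumptions made only for this example.

```python
# Hypothetical illustration of the model: K servers, dedicated Poisson streams,
# plus a generic Poisson stream split probabilistically. With Poisson arrivals
# and exponential service, each server behaves as an independent M/M/1 queue.
import numpy as np
from scipy.optimize import minimize

mu = np.array([1.0, 2.0, 3.0])        # service rates per server (assumed)
lam_ded = np.array([0.3, 0.5, 0.8])   # dedicated Poisson arrival rates (assumed)
lam_gen = 1.0                         # generic Poisson arrival rate (assumed)

def avg_response_time(p):
    """Job-averaged response time when the generic stream is split with probabilities p."""
    lam = lam_ded + p * lam_gen            # total Poisson arrival rate at each server
    if np.any(lam >= mu):                  # unstable allocation
        return 1e6                         # large penalty instead of infinity
    t = 1.0 / (mu - lam)                   # M/M/1 mean response time per server
    return float(lam @ t / lam.sum())      # average over all arriving jobs

# Search over the probability simplex for the best split (illustrative only).
cons = ({'type': 'eq', 'fun': lambda p: p.sum() - 1.0},)
res = minimize(avg_response_time, x0=np.full(3, 1.0 / 3),
               bounds=[(0.0, 1.0)] * 3, constraints=cons)
print("best split:", res.x.round(3), "avg response time:", round(res.fun, 4))
```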

Relevance:

10.00%

Publisher:

Abstract:

Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with the weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele. This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article deals with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties caused by the non-linearity of maximal truncations.
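For orientation only, the following is the schematic flavour of the classical Euclidean global Tb theorem (in the spirit of David, Journé and Semmes), quoted to illustrate the kind of testing condition the dissertation generalizes to metric spaces, upper doubling measures and UMD-valued settings; the precise hypotheses in the articles differ and are not reproduced here.

```latex
% Classical global Tb theorem, schematic form: for para-accretive b_1, b_2, a
% Calderon-Zygmund operator T with a standard kernel is L^2-bounded iff the
% following testing conditions hold (weak boundedness property abbreviated WBP).
\[
  T \colon L^{2}(\mathbb{R}^{n}) \to L^{2}(\mathbb{R}^{n}) \ \text{boundedly}
  \iff
  T b_{1} \in \mathrm{BMO}, \quad
  T^{*} b_{2} \in \mathrm{BMO}, \quad
  b_{2}\, T\, b_{1} \ \text{satisfies the WBP.}
\]
```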

Relevance:

10.00%

Publisher:

Abstract:

One of the major tasks in swarm intelligence is to design decentralized but homogeneous strategies to enable controlling the behavior of swarms of agents. It has been shown in the literature that the point of convergence and motion of a swarm of autonomous mobile agents can be controlled by using cyclic pursuit laws. In cyclic pursuit, there exists a predefined cyclic connection between agents and each agent pursues the next agent in the cycle. In this paper we generalize this idea to a case where an agent pursues a point which is the weighted average of the positions of the remaining agents. This point corresponds to a particular pursuit sequence. Using this concept of centroidal cyclic pursuit, the behavior of the agents is analyzed to show that, by suitably selecting the agents' gains, the rendezvous point of the agents can be controlled, directed linear motion of the agents can be achieved, and the trajectories of the agents can be changed by switching between pursuit sequences while keeping some of the behaviors of the agents invariant. Simulation experiments are given to support the analytical proofs.
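A minimal kinematic sketch of the idea follows, with assumed gains and equal weights, so that each agent simply pursues the centroid of the others; it is an illustration only and not the control law analyzed in the paper.

```python
# Each agent moves toward a weighted average of the other agents' positions.
# With equal weights and identical gains the agents rendezvous at the centroid
# of their initial positions; weights, gain, and step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt, gain = 5, 2000, 0.01, 1.0
pos = rng.uniform(-1, 1, size=(n, 2))          # initial positions (assumed)
w = np.ones((n, n)) - np.eye(n)                # equal weight on every other agent
w /= w.sum(axis=1, keepdims=True)              # rows sum to one

for _ in range(steps):
    targets = w @ pos                          # weighted centroids being pursued
    pos = pos + dt * gain * (targets - pos)    # simple kinematic pursuit update

print("agents converge near a common point:", pos.round(3))
```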

Relevance:

10.00%

Publisher:

Abstract:

Measurements of the electrical resistivity of thin potassium wires at temperatures near 1 K have revealed a minimum in the resistivity as a function of temperature. By proposing that the electrons in these wires have undergone localization, albeit with a large localization length, and that inelastic-scattering events destroy the coherence of that state, we can explain both the magnitude and shape of the temperature-dependent resistivity data. Localization of electrons in these wires is to be expected because, due to the high purity of the potassium, the elastic mean free path is comparable to the diameters of the thinnest samples, making the Thouless length l_T (or inelastic diffusion length) much larger than the diameter, so that the wire is effectively one dimensional. The inelastic events effectively break the wire into a series of localized segments, whose resistances can be added to obtain the total resistance of the wire. The ensemble-averaged resistance over all possible segmented wires, weighted with a Poisson distribution of inelastic-scattering lengths along the wire, yields a length dependence of the resistance proportional to L³/l_in(T), provided that l_in(T) ≪ L, where L is the sample length and l_in(T) is an effective temperature-dependent one-dimensional inelastic-scattering length. A more sophisticated approach using a Poisson distribution in inelastic-scattering times, which takes into account the diffusive motion of the electrons along the wire through the Thouless length, yields a length- and temperature-dependent resistivity proportional to (L/l_T)⁴ under appropriate conditions. Inelastic-scattering lifetimes are inferred from the temperature-dependent bulk resistivities (i.e., those of thicker, effectively three-dimensional samples), assuming that a minimum amount of energy must be exchanged for a collision to be effective in destroying the phase coherence of the localized state. If the dominant inelastic mechanism is electron-electron scattering, then our result, given an appropriate choice of the channel-number parameter, is consistent with the data. If electron-phason scattering were of comparable importance, our results would also remain consistent; however, the inelastic-scattering lifetime inferred from the bulk resistivity data is then too short, because the electron-phason mechanism dominates the inelastic-scattering rate even though the two mechanisms may be of comparable importance for the bulk resistivity. Possible reasons why the electron-phason mechanism might be less effective in thin wires than in bulk are discussed.

Relevance:

10.00%

Publisher:

Abstract:

One of the most important dynamic properties required in the design of machine foundations is the stiffness, or spring constant, of the supporting soil. For a layered soil system, the stiffness obtained from an idealization of the underlying soils as springs in series gives the same value regardless of the location and extent of the individual soil layers with respect to the base of the foundation. This paper aims to bring out the importance of the relative positioning of soil layers and their thicknesses beneath the foundation. A simple and approximate procedure, called the weighted average method, is proposed to obtain the equivalent stiffness of a layered soil system from the individual stiffness values of the layers, their relative positions with respect to the foundation base, and their thicknesses. The values estimated theoretically from the weighted average method are compared with those obtained from field vibration tests using a square footing over different two- and three-layered systems, and the agreement is found to be very good. The tests were conducted over a range of static and dynamic loads using three different materials. The results are also compared with existing methods available in the literature.
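To make the motivation concrete: a pure springs-in-series idealization yields the same equivalent stiffness whatever the layer order, whereas any depth-aware weighting distinguishes a soft-over-stiff profile from a stiff-over-soft one. The depth-decaying weights below are a made-up placeholder, not the weighted average method proposed in the paper.

```python
# Illustration only: why a plain series-spring idealization ignores layer order,
# and how a (hypothetical) depth-weighted average does not.
import numpy as np

def series_stiffness(k):
    """Springs in series: result is independent of the layer ordering."""
    return 1.0 / np.sum(1.0 / np.asarray(k, dtype=float))

def depth_weighted_stiffness(k, decay=0.5):
    """Hypothetical depth-weighted average: shallower layers count more."""
    k = np.asarray(k, dtype=float)
    w = decay ** np.arange(len(k))     # weight falls off with depth (assumed)
    return float(np.sum(w * k) / np.sum(w))

soft_over_stiff = [20.0, 80.0]         # layer stiffnesses, top to bottom (MN/m, assumed)
stiff_over_soft = [80.0, 20.0]
print(series_stiffness(soft_over_stiff), series_stiffness(stiff_over_soft))              # identical
print(depth_weighted_stiffness(soft_over_stiff), depth_weighted_stiffness(stiff_over_soft))  # differ
```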

Relevance:

10.00%

Publisher:

Abstract:

From a find to an ancient costume: the reconstruction of archaeological textiles. Costume tells who we are. It warms and protects us, but it also tells about our identity: gender, age, family, social group, work, religion and ethnicity. Textile fabrication, use and trade have been an important part of human civilization for more than 10,000 years. There are plenty of archaeological textile finds, but they are small and fragmentary, and their interpretation requires special skills. Finnish textile finds from the younger Iron Age have been studied for more than a hundred years. They have also been used as the basis for several reconstructions called muinaispuku, 'ancient costume'. The thesis surveys the ancient costume reconstructions made in Finland and discusses the objectives of the reconstruction projects. The earlier reconstruction projects are seen as part of the national project of constructing a glorious past for the Finnish nation, and of the part women took in this project. Many earlier reconstructions were designed to be festive costumes for wealthy ladies. In the 1980s and 1990s many new ancient costume reconstructions were made, differing from their predecessors in the pattern of the skirt. They were also made following more closely the principles of scientific reconstruction. At the same time, historical re-enactment and living history have risen in popularity as hobbies, and the use of ancient costumes is widening from festive occasions to re-enactment purposes. A hypothesis of the textile craft methods used in younger Iron Age Finland is introduced. Archaeological finds from Finland and neighboring countries, ethnological knowledge of textile crafts, and experimental archaeology have been used as the basis for this proposition. The yarn was spun with a spindle, the fabrics were woven on a warp-weighted loom and dyed with natural colors, and the bronze spiral ornaments and complicated tablet-woven bands may have been made by specialist craftswomen or craftsmen. The knowledge of these techniques and the results of experiments and experimental archaeology make it possible to review how well the existing ancient costume reconstructions succeed as scientific reconstructions. Only one costume reconstruction project, the Kaarina costume made at the Kurala Kylämäki museum, has been carried out using methods as authentic as possible. The use of ancient craft methods is time-consuming and expensive; this can be seen as a research result in itself, for it demonstrates how valuable the ancient textiles were already in their own time. In the costume reconstruction work, the skill of the craftswoman and her knowledge of ancient working methods are strongly emphasized. Textile research is seen as a process in which the examination of original textiles and reconstruction experiments inform each other. Reconstruction projects can contribute much both to research on the Finnish younger Iron Age and to the popularization of archaeological knowledge. A reconstruction is never finished, and earlier reconstructions, too, should be reviewed in the light of new finds.

Relevance:

10.00%

Publisher:

Abstract:

We consider the problem of computing an approximate minimum cycle basis of an undirected non-negative edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle, and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis in which the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. Although in most such applications any cycle basis can be used, a low-weight cycle basis often translates to better performance and/or numerical stability. Despite the fact that the problem can be solved exactly in polynomial time, we design approximation algorithms, since the running time of the exact algorithms may be too expensive for some practical applications. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k ≥ 1, we give a (2k − 1)-approximation algorithm with expected running time O(k m n^(1+2/k) + m n^((1+1/k)(ω−1))) and a deterministic one with running time O(n^(3+2/k)). Here ω is the exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with expected running time O(m^ω √(n log n)), a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
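The paper's approximation algorithms are not implemented here; as a small grounding example, networkx's built-in routines show what a fundamental cycle basis and an exact minimum-weight cycle basis look like on a toy weighted graph (the graph itself is invented for illustration).

```python
# Toy weighted graph; compare a cheap spanning-tree cycle basis with the exact
# minimum-weight cycle basis (the polynomial-time exact problem mentioned above).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 1), ("b", "c", 1), ("c", "a", 5),
    ("c", "d", 1), ("d", "a", 1),
])

# Fundamental cycle basis from a spanning tree (not necessarily minimum weight).
print(nx.cycle_basis(G))

# Exact minimum-weight cycle basis.
print(nx.minimum_cycle_basis(G, weight="weight"))
```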

Relevance:

10.00%

Publisher:

Abstract:

In this paper we address a scheduling problem for minimising total weighted tardiness. The motivation for the paper comes from the automobile gear manufacturing process. We consider the bottleneck operation of the heat treatment stage of gear manufacturing. Real-life features such as unequal release times, incompatible job families, non-identical job sizes and allowance for job splitting have been considered. A mathematical model taking into account dynamic starting conditions has been developed. Due to the NP-hard nature of the problem, a few heuristic algorithms have been proposed. The performance of the proposed heuristic algorithms is evaluated (a) in comparison with the optimal solution for small problem instances, and (b) in comparison with an 'estimated optimal solution' for large problem instances. Extensive computational analyses reveal that the proposed heuristic algorithms consistently obtain near-optimal solutions (that is, close to the statistically estimated optimum) in very reasonable computational time.
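For reference, the objective being minimised, total weighted tardiness, can be written down in a few lines; the job data below is invented purely to show the computation and has nothing to do with the paper's instances.

```python
# Total weighted tardiness of a schedule: sum over jobs of w_j * max(0, C_j - d_j),
# where C_j is the completion time, d_j the due date, and w_j the weight.
def total_weighted_tardiness(jobs):
    """jobs: iterable of (completion_time, due_date, weight) tuples."""
    return sum(w * max(0.0, c - d) for c, d, w in jobs)

jobs = [(5, 4, 2.0), (9, 10, 1.0), (14, 8, 3.0)]   # (C_j, d_j, w_j), made up
print(total_weighted_tardiness(jobs))               # 2*1 + 0 + 3*6 = 20
```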

Relevance:

10.00%

Publisher:

Abstract:

We propose the Frobenius norm (F-norm) of the cross-correlation part of the array covariance matrix as a measure of the correlation between the impinging signals, and study the performance of different decorrelation methods in the broadband case using this measure. We first show that the dimensionality of the composite signal subspace, defined as the number of significant eigenvectors of the source sample covariance matrix, collapses in the presence of multipath, and that spatial smoothing recovers this dimensionality. Using an upper bound on the proposed measure, we then study the decorrelation of broadband signals with spatial smoothing and the effect of the spacing and directions of the sources on the rate of decorrelation with progressive smoothing. Next, we introduce a weighted smoothing method based on Toeplitz-block-Toeplitz (TBT) structuring of the data covariance matrix, which decorrelates the signals much faster than spatial smoothing. Computer simulations are included to demonstrate the performance of the two methods.
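The sketch below shows plain (unweighted) forward spatial smoothing, the baseline that the paper improves upon, on a noiseless example with two fully coherent arrivals and an assumed half-wavelength uniform linear array; it is not the Toeplitz-block-Toeplitz weighted scheme.

```python
# Forward spatial smoothing: average the covariances of overlapping subarrays.
# With two coherent (multipath) arrivals the full covariance has rank 1; the
# smoothed covariance recovers rank 2, i.e. the collapsed dimensionality.
import numpy as np

def steering(M, theta_deg):
    """Steering vector of an M-element ULA with half-wavelength spacing (assumed)."""
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.deg2rad(theta_deg)))

def spatial_smoothing(R, L):
    """Average the covariances of all overlapping L-element subarrays of R."""
    M = R.shape[0]
    K = M - L + 1
    return sum(R[k:k + L, k:k + L] for k in range(K)) / K

M, L = 8, 5
v = steering(M, -10) + 0.8 * steering(M, 25)   # two fully coherent arrivals (assumed angles)
R = np.outer(v, v.conj())                      # noiseless covariance: rank collapses to 1

print(np.linalg.matrix_rank(R))                        # 1
print(np.linalg.matrix_rank(spatial_smoothing(R, L)))  # 2: dimensionality recovered
```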

Relevance:

10.00%

Publisher:

Abstract:

The weighted-least-squares method using a sensitivity-analysis technique is proposed for the estimation of parameters in water-distribution systems. The parameters considered are the Hazen-Williams coefficients for the pipes. The objective function used is the sum of the weighted squares of the differences between the computed and the observed values of the variables. The weighted-least-squares method can elegantly handle multiple loading conditions with mixed types of measurements such as heads and consumptions, different sets and numbers of measurements for each loading condition, and modifications in the network configuration due to the inclusion or exclusion of some pipes affected by valve operations in each loading condition. Uncertainty in the parameter estimates can also be obtained. The method is applied to the estimation of parameters in a metropolitan urban water-distribution system in India.
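A generic form of the objective described, the sum of weighted squared differences between computed and observed quantities, is sketched below with placeholder data; in the paper the variables would be nodal heads, flows and consumptions of the network model.

```python
# Generic weighted-least-squares objective; model, weights and data are
# placeholders invented for illustration, not the paper's hydraulic model.
import numpy as np

def wls_objective(theta, model, observed, weights):
    """Sum of weighted squared differences between computed and observed values."""
    residuals = model(theta) - observed
    return float(np.sum(weights * residuals**2))

# Toy example: evaluate the objective for a two-parameter linear "model".
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 4.0]])
observed = np.array([5.1, 3.9, 11.2])
weights = np.array([1.0, 1.0, 0.5])            # e.g. a less trusted measurement
print(wls_objective(np.array([1.0, 2.0]), lambda t: X @ t, observed, weights))
```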

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the problem of automated multiagent search in an unknown environment. Autonomous agents equipped with sensors carry out a search operation in a search space, where the uncertainty, or lack of information about the environment, is known a priori as an uncertainty density distribution function. The agents are deployed in the search space to maximize single step search effectiveness. The centroidal Voronoi configuration, which achieves a locally optimal deployment, forms the basis for the proposed sequential deploy and search strategy. It is shown that with the proposed control law the agent trajectories converge in a globally asymptotic manner to the centroidal Voronoi configuration. Simulation experiments are provided to validate the strategy. Note to Practitioners-In this paper, searching an unknown region to gather information about it is modeled as a problem of using search as a means of reducing information uncertainty about the region. Moreover, multiple automated searchers or agents are used to carry out this operation optimally. This problem has many applications in search and surveillance operations using several autonomous UAVs or mobile robots. The concept of agents converging to the centroid of their Voronoi cells, weighted with the uncertainty density, is used to design a search strategy named as sequential deploy and search. Finally, the performance of the strategy is validated using simulations.
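A grid-based, Lloyd-style sketch of the underlying idea follows: agents repeatedly move toward the centroids of their Voronoi cells weighted by an uncertainty density. The density, domain, step size and iteration count are assumptions made for illustration; this is not the paper's sequential deploy and search control law.

```python
# Discrete Voronoi partition on a grid; each agent steps toward the
# uncertainty-weighted centroid of its own cell (Lloyd-type iteration).
import numpy as np

rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
pts = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(-10 * ((pts[:, 0] - 0.7)**2 + (pts[:, 1] - 0.3)**2))  # assumed uncertainty density

agents = rng.uniform(0, 1, size=(5, 2))
for _ in range(50):
    # Assign each grid point to its nearest agent (discrete Voronoi cells).
    d2 = ((pts[:, None, :] - agents[None, :, :])**2).sum(axis=2)
    owner = d2.argmin(axis=1)
    for i in range(len(agents)):
        cell = owner == i
        if cell.any():
            w = density[cell]
            centroid = (pts[cell] * w[:, None]).sum(axis=0) / w.sum()
            agents[i] += 0.5 * (centroid - agents[i])   # move toward weighted centroid

print(agents.round(3))
```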

Relevance:

10.00%

Publisher:

Abstract:

Using intensity autocorrelation of multiply scattered light, we show that the increase in interparticle interaction in dense, binary colloidal fluid mixtures of particle diameters 0.115 µm and 0.089 µm results in freezing into a crystalline phase at a volume fraction φ of 0.1 and into a glassy state at φ = 0.2. The functional form of the field autocorrelation function g^(1)(t) for the binary fluid phase is fitted to exp[−γ(6 k₀² D_eff t)^(1/2)], where k₀ is the magnitude of the incident light wavevector and γ is a parameter inversely proportional to the photon transport mean free path l*. D_eff is the l*-weighted average of the individual diffusion coefficients of the pure species. The l* used in calculating D_eff was computed using Mie theory. In the solid (crystal or glass) phase, g^(1)(t) is fitted (with only moderate success) to exp[−γ(6 k₀² W(t))^(1/2)], where the mean-squared displacement W(t) is evaluated for a harmonically bound overdamped Brownian oscillator. It is found that the fitted parameter γ for both the binary and monodisperse suspensions decreases significantly with increasing interparticle interactions. This has been justified by showing that the calculated values of l* in a monodisperse suspension, obtained using Mie theory, increase very significantly when the interactions are incorporated in l* via the static structure factor.
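As a usage illustration, the quoted fluid-phase form g^(1)(t) = exp[−γ(6 k₀² D_eff t)^(1/2)] can be fitted to a correlation trace with scipy. The wavelength, refractive index, D_eff value and synthetic data below are all assumptions; D_eff is treated as known (in the paper it comes from Mie-theory l* values) and only γ is fitted.

```python
# Fit the quoted decay form to a synthetic correlation trace; constants assumed.
import numpy as np
from scipy.optimize import curve_fit

k0 = 2 * np.pi * 1.33 / 514e-9           # wavevector in water at 514 nm (assumed)
Deff = 2e-12                             # m^2/s, assumed known from Mie-theory l*

def g1_model(t, gamma):
    return np.exp(-gamma * np.sqrt(6 * k0**2 * Deff * t))

t = np.logspace(-7, -3, 200)             # lag times in seconds
clean = g1_model(t, 2.0)                 # "true" trace with gamma = 2
noisy = clean + 0.01 * np.random.default_rng(2).standard_normal(t.size)

popt, _ = curve_fit(g1_model, t, noisy, p0=[1.0])
print("fitted gamma:", popt[0])
```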

Relevance:

10.00%

Publisher:

Abstract:

The weighted-least-squares method based on the Gauss-Newton minimization technique is used for parameter estimation in water distribution networks. The parameters considered are: element resistances (single and/or group resistances, Hazen-Williams coefficients, pump specifications) and consumptions (for single or multiple loading conditions). The measurements considered are: nodal pressure heads, pipe flows, head loss in pipes, and consumptions/inflows. An important feature of the study is a detailed consideration of the influence of different choices of weights on parameter estimation, for error-free data, noisy data, and noisy data which include bad data. The method is applied to three different networks, including a real-life problem.
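A generic Gauss-Newton step for a weighted-least-squares problem is sketched below on a toy nonlinear model; the residual, Jacobian, weights and data are placeholders and not the hydraulic network equations of the study.

```python
# Generic Gauss-Newton iteration for minimizing r(theta)^T W r(theta).
import numpy as np

def gauss_newton_wls(residual, jacobian, theta0, W, iters=20):
    """Repeatedly solve the weighted normal equations for the update step."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(iters):
        r = residual(theta)
        J = jacobian(theta)
        # (J^T W J) delta = -J^T W r
        delta = np.linalg.solve(J.T @ W @ J, -J.T @ W @ r)
        theta = theta + delta
    return theta

# Toy nonlinear model y = exp(-a*x) + b with synthetic data (assumed).
x = np.linspace(0, 2, 30)
y = np.exp(-1.5 * x) + 0.3 + 0.01 * np.random.default_rng(3).standard_normal(x.size)
W = np.eye(x.size)                         # equal weights for simplicity

res = lambda th: np.exp(-th[0] * x) + th[1] - y
jac = lambda th: np.column_stack([-x * np.exp(-th[0] * x), np.ones_like(x)])
print(gauss_newton_wls(res, jac, [1.0, 0.0], W))   # recovers roughly (1.5, 0.3)
```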

Relevance:

10.00%

Publisher:

Abstract:

Perfusion of the liver with plasmid DNA-lipofectin complexes via the portal vein results in efficient accumulation of the vector in hepatocytes. Such hepatocytes, when administered intraperitoneally into a hepatectomized rat, repopulate the liver and express the transgene efficiently. This procedure obviates the need for large-scale hepatocyte culture for ex vivo gene transfer. Further, intraperitoneal transplantation is a simple and cost-effective strategy for introducing genetically modified hepatocytes into the liver. Thus, in situ lipofection of the liver and intraperitoneal transfer of hepatocytes can be developed into a novel non-viral ex vivo gene transfer technique with applications in the treatment of metabolic disorders of the liver and in hepatic gene therapy.

Relevance:

10.00%

Publisher:

Abstract:

Seizure electroencephalography (EEG) was recorded from two channels, right (Rt) and left (Lt), during bilateral electroconvulsive therapy (ECT) (n = 12) and unilateral ECT (n = 12). The EEG was also acquired on a microcomputer and analyzed without knowledge of the clinical details. EEG recordings of both ECT procedures yielded seizures of comparable duration. The Strength Symmetry Index (SSI) was computed from the early- and mid-seizure phases using the fractal dimension of the EEG. The seizures of unilateral ECT were characterized by a significantly smaller SSI in both phases. More unilateral than bilateral ECT seizures had a smaller-than-median SSI in both phases. The seizures also differed on other measures, as reported in the literature. The findings indicate that the SSI may be a potential measure of seizure adequacy that remains to be validated in future research.
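The abstract does not state which fractal-dimension estimator was used; as one common choice for EEG, Higuchi's method is sketched below on synthetic signals, purely for illustration.

```python
# Higuchi fractal dimension of a 1-D signal: curve length L(k) scales as k^(-D),
# so D is the slope of log L(k) versus log(1/k).
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    N = x.size
    Lk = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):
            idx = np.arange(m, N, k)          # subsampled series starting at offset m
            if idx.size < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (N - 1) / ((idx.size - 1) * k)
            lengths.append(dist * norm / k)
        Lk.append(np.mean(lengths))
    k_vals = np.arange(1, kmax + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(Lk), 1)
    return slope

rng = np.random.default_rng(4)
print(higuchi_fd(np.sin(np.linspace(0, 20 * np.pi, 2000))))   # smooth signal: near 1
print(higuchi_fd(rng.standard_normal(2000)))                   # white noise: near 2
```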