933 results for Non-line-of-sight
Abstract:
MEG directly measures the neuronal events and has greater temporal resolution than fMRI, which has limited temporal resolution mainly due to the larger timescale of the hemodynamic response. On the other hand, fMRI has advantages in spatial resolution, while the localization results with MEG can be ambiguous due to the non-uniqueness of the electromagnetic inverse problem. Thus, these methods could provide complementary information and could be used to create both spatially and temporally accurate models of brain function. We investigated the degree of overlap, revealed by the two imaging methods, in areas involved in sensory or motor processing in healthy subjects and neurosurgical patients. Furthermore, we used the spatial information from fMRI to construct a spatiotemporal model of the MEG data in order to investigate the sensorimotor system and to create a spatiotemporal model of its function. We compared the localization results from MEG and fMRI with invasive electrophysiological cortical mapping. We used a recently introduced method, contextual clustering, for hypothesis testing of fMRI data and assessed the effect of neighbourhood information use on the reproducibility of fMRI results. Using MEG, we identified the ipsilateral primary sensorimotor cortex (SMI) as a novel source area contributing to the somatosensory evoked fields (SEF) to median nerve stimulation. Using combined MEG and fMRI measurements we found that two separate areas in the lateral fissure may be the generators of the SEF responses from the secondary somatosensory cortex region. The two imaging methods indicated activation in corresponding locations. By using complementary information from MEG and fMRI we established a spatiotemporal model of somatosensory cortical processing.
This spatiotemporal model of cerebral activity was in good agreement with results from several studies using invasive electrophysiological measurements and with anatomical studies in monkey and man concerning the connections between somatosensory areas. In neurosurgical patients, the MEG dipole model turned out to be more reliable than fMRI in the identification of the central sulcus. This was due to prominent activation in non-primary areas in fMRI, which in some cases led to erroneous or ambiguous localization of the central sulcus.
Abstract:
A promotional brochure celebrating the completion of the Seagram Building in spring 1957 features on its cover intense portraits of seven men bisected by a single line of bold text that asks, “Who are these Men?” The answer appears on the next page: “They Dreamed of a Tower of Light” (Figures 1, 2). Each photograph is reproduced with the respective man’s name and project credit: architects, Mies van der Rohe and Philip Johnson; associate architect, Ely Jacques Kahn; electrical contractor, Harry F. Fischbach; lighting consultant, Richard Kelly; and electrical engineer, Clifton E. Smith. To the right, a rendering of the new Seagram Tower anchors the composition, standing luminous against a star-speckled night sky; its glass walls and bronze mullions are transformed into a gossamer skin that reveals the tower’s structural skeleton. Lightolier, the contract lighting manufacturer, produced the brochure to promote its role in the lighting of the Seagram Building, but Lightolier’s promotional copy was not far from the truth.
Abstract:
Objective: To determine the extent to which different strength training exercises selectively activate the commonly injured biceps femoris long head (BFLH) muscle. Methods: This two-part observational study recruited 24 recreationally active males. Part 1 explored the amplitudes and the ratios of lateral to medial hamstring (BF/MH) normalised electromyography (nEMG) during the concentric and eccentric phases of 10 common strength training exercises. Part 2 used functional magnetic resonance imaging (fMRI) to determine the spatial patterns of hamstring activation during the two exercises which i) most selectively and ii) least selectively activated the BF in part 1. Results: Eccentrically, the largest BF/MH nEMG ratio was observed in the 45° hip extension exercise and the lowest was observed in the Nordic hamstring (NHE) and bent-knee bridge exercises. Concentrically, the highest BF/MH nEMG ratio was observed during the lunge and 45° hip extension and the lowest was observed for the leg curl and bent-knee bridge. fMRI revealed a greater BFLH to semitendinosus activation ratio in the 45° hip extension than in the NHE (p<0.001). The T2 increase after hip extension for the BFLH, semitendinosus and semimembranosus muscles was greater than that for the BF short head (BFSH) (p<0.001). During the NHE, the T2 increase was greater for the semitendinosus than for the other hamstrings (p≤0.002). Conclusion: This investigation highlights the non-uniformity of hamstring activation patterns in different tasks and suggests that the hip extension exercise more selectively activates the BFLH, while the NHE preferentially recruits the semitendinosus. These findings have implications for strength training interventions aimed at preventing hamstring injury.
Abstract:
One of the major limitations to the application of high-resolution biophysical techniques, such as X-ray crystallography and spectroscopic analyses, to structure-function studies of Saccharomyces cerevisiae Hop1 protein has been the non-availability of sufficient quantities of functionally active pure protein. This has, indeed, been the case for many proteins, including yeast synaptonemal complex proteins. In this study, we have performed expression screening to identify Escherichia coli host strains capable of high-level expression of soluble S. cerevisiae Hop1 protein. A new protocol has been developed for the expression and purification of S. cerevisiae Hop1 protein, based on the presence of a hexa-histidine tag and double-stranded DNA-cellulose chromatography. Recombinant S. cerevisiae Hop1 protein was >98% pure and exhibited DNA-binding activity with high affinity for the Holliday junction. The availability of the recombinant HOP1 expression vector and active Hop1 protein would facilitate structure-function investigations as well as the generation of appropriate truncated and site-directed mutant proteins. (C) 2010 Elsevier Inc. All rights reserved.
Abstract:
The TOTEM collaboration has developed and tested the first prototype of its Roman Pots to be operated in the LHC. TOTEM Roman Pots contain stacks of 10 silicon detectors with strips oriented in two orthogonal directions. To measure proton scattering angles of a few microradians, the detectors will approach the beam centre to a distance of 10 sigma + 0.5 mm (= 1.3 mm). Dead space near the detector edge is minimised by using two novel "edgeless" detector technologies. The silicon detectors are used both for precise track reconstruction and for triggering. The first full-sized prototypes of both detector technologies as well as their read-out electronics have been developed, built and operated. The tests took place first in a fixed-target muon beam at CERN's SPS, and then in the proton beam-line of the SPS accelerator ring. We present the test beam results demonstrating the successful functionality of the system despite slight technical shortcomings to be improved in the near future.
Abstract:
Polarisation characters of the Raman lines of calcium fluoride (fluorspar) and potassium aluminium sulphate (alum) were investigated under the following conditions. Unpolarised light was incident normally on a face of the crystal making an angle 22.5° with a cubic face and the light scattered transversely along a cubic axis was analysed by a double image prism kept with its principal axes inclined at 45° to the vertical. Under these conditions the depolarisation factors of the Raman lines belonging to the totally symmetric (A), the doubly degenerate (E) and the triply degenerate (F) modes should be respectively =1, >1 and <1. The characteristic Raman line of CaF2 at 322 cm-1 exhibited a depolarisation value less than 1, showing thereby that the corresponding mode is a triply degenerate one (F). The Raman lines observed in the spectrum of K-alum were also classified and the results were compared with those given by previous investigators using standard crystal orientations.
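The stated classification rule can be written as a small sketch; the tolerance and the example value below are assumptions for illustration, not measurements from the paper:

```python
def classify_mode(depol, tol=0.05):
    # Mode classification for the scattering geometry described above:
    # depolarisation factor = 1 -> A (totally symmetric),
    # > 1 -> E (doubly degenerate), < 1 -> F (triply degenerate).
    # tol is an assumed measurement tolerance for "equal to 1".
    if abs(depol - 1.0) <= tol:
        return "A"
    return "E" if depol > 1.0 else "F"

# The 322 cm-1 line of CaF2 showed a depolarisation below 1, hence F;
# 0.7 is an illustrative value, not the measured one.
mode = classify_mode(0.7)   # -> "F"
```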
Abstract:
The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. The non-availability of a functional relationship between RED performance and its parameters makes it difficult to apply optimization techniques directly in order to optimize the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques; here, we implement both the barrier and the penalty function approaches. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare these with RED and a variant of it from the literature. We observe an improvement in performance with our proposed algorithm over both RED and this variant.
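The penalty-function idea combined with a stochastic-approximation gradient estimate can be sketched minimally. This is not the paper's four-timescale algorithm: it uses a simpler simultaneous-perturbation gradient estimate, and the quadratic "delay" cost, the constraint, and all numeric values are stand-ins for illustration.

```python
import numpy as np

def cost(theta):
    # Stand-in for the delay-based cost as a function of the RED
    # parameters theta = (min_th, max_th); real values would come
    # from queue measurements.
    min_th, max_th = theta
    return (min_th - 5.0) ** 2 + (max_th - 15.0) ** 2

def penalized(theta, mu=10.0):
    # Penalty-function approach: a quadratic penalty is added when the
    # (assumed) constraint min_th >= 1 is violated.
    violation = max(0.0, 1.0 - theta[0])
    return cost(theta) + mu * violation ** 2

def sa_step(theta, step=0.05, c=0.1, rng=None):
    # Simultaneous-perturbation stochastic-approximation gradient
    # estimate: two evaluations of the penalized objective give an
    # estimate of the full gradient.
    if rng is None:
        rng = np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=theta.shape)
    grad = (penalized(theta + c * delta)
            - penalized(theta - c * delta)) / (2 * c * delta)
    return theta - step * grad

theta = np.array([4.0, 17.0])
rng = np.random.default_rng(1)
for _ in range(500):
    theta = sa_step(theta, rng=rng)
# theta approaches the minimizer (5, 15) of the stand-in cost
```

The real algorithm additionally estimates the Hessian on separate timescales; this sketch only shows the gradient-descent core of the penalty approach.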
Abstract:
Service researchers have repeatedly claimed that firms should acquire customer information in order to develop services that fit customer needs. Despite this, studies concentrating on the actual use of customer information in service development are lacking. The present study fills this research gap by investigating information use during a service development process. It demonstrates that use is not a straightforward task that automatically follows the acquisition of customer information. In fact, out of the six identified types of use, four represent non-usage of customer information. Hence, the study demonstrates that the acquisition of customer information does not guarantee that the information will actually be used in development. The current study used an ethnographic approach. Consequently, the study was conducted in the field in real time over an extensive period of 13 months. Participant observation allowed direct access to the investigated phenomenon, i.e. the different types of use by the observed development project members were captured as they emerged. In addition, interviews, informal discussions and internal documents were used to gather data. The development process of a bank’s website constituted the empirical context of the investigation. This ethnography brings novel insights to both academia and practice. It critically questions the traditional focus on the firm’s acquisition of customer information and suggests that this focus ought to be expanded to the actual use of customer information. What is the point in acquiring costly customer information if it is not used in development? Based on the findings of this study, a holistic view of customer information, “information in use”, is generated. This view extends the traditional view of customer information in three ways: the source, timing and form of data collection.
First, the study showed that customer information can come explicitly from the customer, from speculation among the developers, or it can already exist implicitly. Prior research has mainly focused on the customer as the information provider and the explicit source to turn to for information. Second, the study identified that the used and non-used customer information was acquired previously, currently within the time frame of the focal development process, and potentially in the future. Prior research has primarily focused on currently acquired customer information, i.e. within the time frame of the development process. Third, the used and non-used customer information was both formally and informally acquired. In prior research, a large number of sophisticated formal methods have been suggested for the acquisition of customer information to be used in development. By focusing on “information in use”, new knowledge on the types of customer information that are actually used was generated. For example, the findings show that the formal customer information acquired during the development process is used less than customer information already existent within the firm. With this knowledge at hand, better methods to capture this more usable customer information can be developed. Moreover, the thesis suggests that by focusing more strongly on the use of customer information, service development processes can be restructured around the information that is actually used.
Abstract:
Researchers within the fields of economic geography and organizational management have extensively studied learning and the prerequisites and impediments for knowledge transfer. This paper combines two discourses from these subjects, the communities-of-practice and learning-region approaches, merging them through the so-called ecology-of-knowledge approach, which is used to examine the knowledge transfer from the House of Fabergé to the Finnish jewellery industry. We examine the pre-revolution St Petersburg jewellery cluster and post-revolution Helsinki, and the transfer of knowledge between these two locations through the components of communities of people, institutions and industry. The paper shows that the industrial dynamics of the Finnish modern-day goldsmith industry were inherently shaped both through the transfer and the non-transfer of knowledge. It also contends that the “knowledge economy” is neither anchored in, nor exclusive to, the high-technology sector of the late 20th century.
Abstract:
This paper describes a predictive model for breakout noise from an elliptical duct or shell of finite length. The transmission mechanism is essentially that of “mode coupling”, whereby higher structural modes in the duct walls are excited because of the non-circularity of the wall. The effect of geometry is accounted for by evaluating the Fourier coefficients of the radius of curvature. The noise radiated from the duct walls is represented by that from a finite vibrating length of a semi-infinite cylinder in a free field. Emphasis is on understanding the physics of the problem as well as on analytical modeling. The analytical model is validated with 3-D FEM. The effects of the ovality, curvature, and axial terminations of the duct have been demonstrated. (C) 2010 Institute of Noise Control Engineering.
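The role of the Fourier coefficients of the radius of curvature can be illustrated numerically. The semi-axis values below are assumptions for illustration, not taken from the paper; the key point is that for an ellipse only the even cosine harmonics are non-zero, and these are what couple the acoustic field to higher structural wall modes.

```python
import numpy as np

# Assumed semi-axes of an elliptical duct cross-section (arbitrary units).
a, b = 0.30, 0.20

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
# Radius of curvature of the ellipse x = a*cos(t), y = b*sin(t).
rho = (a**2 * np.sin(t)**2 + b**2 * np.cos(t)**2) ** 1.5 / (a * b)

def cos_coeff(n):
    # n-th Fourier cosine coefficient of rho(t); sine terms vanish by
    # symmetry, since rho is an even function of t.
    if n == 0:
        return rho.mean()
    return 2.0 * np.mean(rho * np.cos(n * t))

coeffs = [cos_coeff(n) for n in range(5)]
# rho has period pi, so the odd harmonics (coeffs[1], coeffs[3]) are
# numerically zero; the non-zero even harmonics measure the ovality.
```

For a circular duct (a = b) all coefficients beyond the constant term vanish, which is why the coupling mechanism is specific to non-circular walls.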
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to some soft constraint on signal-to-noise ratio (SNR). How this minimum norm property can be used for finding the optimum spacing of the reference sensor is described. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
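The minimum-norm property can be sketched with synthetic covariances. This is an illustrative stand-in, not the paper's method: the array size, source directions, and SNRs are assumed values, and analytic covariances replace estimates from data.

```python
import numpy as np

M = 8                                 # assumed number of array sensors
angles = np.deg2rad([20.0, -10.0])    # assumed plane-wave source directions
snrs = np.array([10.0, 5.0])          # assumed per-source linear SNRs

def weight_norm(d):
    # L2-norm of the least-squares weights predicting a reference sensor
    # placed a distance d beyond the last element, on the array's
    # extended line (half-wavelength element spacing assumed).
    m = np.arange(M)
    R = np.eye(M, dtype=complex)      # unit-variance sensor noise
    p = np.zeros(M, dtype=complex)
    for ang, s in zip(angles, snrs):
        a = np.exp(1j * np.pi * np.sin(ang) * m)        # array steering
        a_ref = np.exp(1j * np.pi * np.sin(ang) * (M - 1 + d))
        R += s * np.outer(a, a.conj())                  # R = E[x x^H]
        p += s * a * np.conj(a_ref)                     # p = E[x x_ref^*]
    w = np.linalg.solve(R, p)         # least-squares prediction weights
    return np.linalg.norm(w)

spacings = np.linspace(0.25, 3.0, 12)
norms = [weight_norm(d) for d in spacings]
best = spacings[int(np.argmin(norms))]   # spacing minimizing the weight norm
```

Sweeping the candidate spacings and picking the norm-minimizing one mirrors how the minimum-norm property is used to select the reference-sensor position.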
Abstract:
Many Finnish IT companies have gone through numerous organizational changes over the past decades. This book draws attention to how stability may be central to software product development experts and IT workers more generally, who continuously have to cope with such change in their workplaces. It does so by analyzing and theorizing change and stability as intertwined and co-existent, thus throwing light on how it is possible that, for example, even if ‘the walls fall down the blokes just code’ and maintain a sense of stability in their daily work. Rather than reproducing the picture of software product development as exciting cutting edge activities and organizational change as dramatic episodes, the study takes the reader beyond the myths surrounding these phenomena to the mundane practices, routines and organizings in product development during organizational change. An analysis of these ordinary practices offers insights into how software product development experts actively engage in constructing stability during organizational change through a variety of practices, including solidarity, homosociality, close relations to products, instrumental or functional views on products, preoccupations with certain tasks and humble obedience. Consequently, the study shows that it may be more appropriate to talk about varieties of stability, characterized by a multitude of practices of stabilizing rather than states of stagnation. Looking at different practices of stability in depth shows the creation of software as an arena for micro-politics, power relations and increasing pressures for order and formalization. 
The thesis gives particular attention to power relations and processes of positioning following organizational change: how social actors come to understand themselves in the context of ongoing organizational change, how they comply with and/or contest dominant meanings, how they identify and dis-identify with formalization, and how power relations often are reproduced despite dis-identification. Related to processes of positioning, the reader is also given a glimpse into what being at work in a male-dominated and relatively homogeneous work environment looks like. It shows how the strong presence of men or “blokes” of a particular age and education seems to become invisible in workplace talk that appears ‘non-conscious’ of gender.
Abstract:
A k-dimensional box is the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line. The boxicity of a graph G, denoted as box(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-dimensional boxes. A unit cube in k-dimensional space, or a k-cube, is defined as the Cartesian product R_1 x R_2 x ... x R_k where each R_i is a closed interval on the real line of the form [a_i, a_i + 1]. The cubicity of G, denoted as cub(G), is the minimum integer k such that G can be represented as the intersection graph of a collection of k-cubes. The threshold dimension of a graph G(V, E) is the smallest integer k such that E can be covered by k threshold spanning subgraphs of G. In this paper we show that there exists no polynomial-time algorithm for approximating the threshold dimension of a graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0 unless NP = ZPP. From this result we show that there exists no polynomial-time algorithm for approximating the boxicity or the cubicity of a graph on n vertices within a factor of O(n^(0.5 - epsilon)) for any epsilon > 0 unless NP = ZPP. In fact all these hardness results hold even for a highly structured class of graphs, namely the split graphs. We also show that it is NP-complete to determine whether a given split graph has boxicity at most 3. (C) 2010 Elsevier B.V. All rights reserved.
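The definition of boxicity can be made concrete with a small sketch. The boxes below form a 2-dimensional box representation of the 4-cycle C4, a standard example (C4 is not an interval graph, so its boxicity is exactly 2); the specific intervals are chosen here for illustration.

```python
from itertools import combinations

def boxes_intersect(b1, b2):
    # Axis-parallel boxes, given as lists of closed intervals (lo, hi),
    # intersect iff their intervals overlap in every dimension.
    return all(lo1 <= hi2 and lo2 <= hi1
               for (lo1, hi1), (lo2, hi2) in zip(b1, b2))

def intersection_graph(boxes):
    # Edge set {(i, j) : box i meets box j} of the intersection graph.
    return {(i, j)
            for (i, b1), (j, b2) in combinations(enumerate(boxes), 2)
            if boxes_intersect(b1, b2)}

# A 2-dimensional box representation of the 4-cycle 0-1-2-3-0: each box
# meets exactly its two cycle neighbours, certifying box(C4) <= 2.
c4_boxes = [
    [(0, 1), (0, 3)],   # vertex 0
    [(1, 2), (0, 1)],   # vertex 1
    [(2, 3), (0, 3)],   # vertex 2
    [(0, 3), (2, 3)],   # vertex 3
]
edges = intersection_graph(c4_boxes)
# edges == {(0, 1), (0, 3), (1, 2), (2, 3)}: the non-edges (0, 2) and
# (1, 3) are separated in the first and second dimension, respectively.
```

The hardness results above say that, in general, finding the smallest such k for a given graph cannot even be efficiently approximated unless NP = ZPP.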
Abstract:
The crystal and molecular structure of the title compound has been determined by direct methods from diffractometer data. Crystals are orthorhombic, with Z = 4 in a unit cell of dimensions: a = 13.811(10), b = 5.095(5), c = 12.914(10) Å, space group P212121. The structure was refined by least-squares to R = 3.31% for 868 observed reflections. There is significant non-planarity of the peptide group and its nitrogen atom is significantly pyramidal. There is no correlation between the double-bond character and the reactivity of the C–N bond of the terminal amide group in glutamine and acetamide.
Abstract:
Various Tb theorems play a key role in modern harmonic analysis. They provide characterizations for the boundedness of Calderón-Zygmund type singular integral operators. The general philosophy is that to conclude the boundedness of an operator T on some function space, one needs only to test it on some suitable function b. The main object of this dissertation is to prove very general Tb theorems. The dissertation consists of four research articles and an introductory part. The framework is general with respect to the domain (a metric space), the measure (an upper doubling measure) and the range (a UMD Banach space). Moreover, the testing conditions used are weak. In the first article a (global) Tb theorem on non-homogeneous metric spaces is proved. One of the main technical components is the construction of a randomization procedure for the metric dyadic cubes. The difficulty lies in the fact that metric spaces do not, in general, have a translation group. Also, the measures considered are more general than in the existing literature. This generality is genuinely important for some applications, including the result of Volberg and Wick concerning the characterization of measures for which the analytic Besov-Sobolev space embeds continuously into the space of square integrable functions. In the second article a vector-valued extension of the main result of the first article is considered. This theorem is a new contribution to the vector-valued literature, since previously such general domains and measures were not allowed. The third article deals with local Tb theorems both in the homogeneous and non-homogeneous situations. A modified version of the general non-homogeneous proof technique of Nazarov, Treil and Volberg is extended to cover the case of upper doubling measures. This technique is also used in the homogeneous setting to prove local Tb theorems with weak testing conditions introduced by Auscher, Hofmann, Muscalu, Tao and Thiele.
This gives a completely new and direct proof of such results utilizing the full force of non-homogeneous analysis. The final article has to do with sharp weighted theory for maximal truncations of Calderón-Zygmund operators. This includes a reduction to certain Sawyer-type testing conditions, which are in the spirit of Tb theorems and thus of the dissertation. The article extends the sharp bounds previously known only for untruncated operators, and also proves sharp weak type results, which are new even for untruncated operators. New techniques are introduced to overcome the difficulties introduced by the non-linearity of maximal truncations.