891 results for "basis of the solution space of a homogeneous sparse linear system"
Abstract:
This document contains a report on the work done under the ESA/Ariadna study 06/4101 on the global optimization of space trajectories with multiple gravity assist (GA) and deep space manoeuvres (DSM). The study was performed by a joint team of scientists from the University of Reading and the University of Glasgow.
Abstract:
Introduction: There has been continuous development of new healthcare technologies derived from national quality registries. This innovation, however, needs to be translated into the workflow of healthcare delivery so that children with long-term conditions receive the best possible support for managing their health in everyday life. Since children living with long-term conditions experience different levels of interference in their lives, healthcare professionals need to assess the impact of care on children's day-to-day lives as a complement to biomedical assessments. Aim: The overall aim of this thesis was to explore and describe the use of instruments measuring health-related quality of life (HRQOL) in outpatient care for children with long-term conditions, on the basis of a national quality registry system. Methods: The research used comparative, cross-sectional and explorative designs, and data were collected with several methods: the DISABKIDS Chronic Generic Measure-37 questionnaire, semi-structured interviews, and video-recordings of consultations. Altogether, 156 children (8–18 years) and nine healthcare professionals participated in the studies. Children with Type 1 Diabetes (T1D) (n = 131) answered the DISABKIDS questionnaire, and children with rheumatic diseases, kidney diseases and T1D (n = 25) were interviewed after consultations at the outpatient clinic in which web-DISABKIDS had been used. In total, nine healthcare professionals used the HRQOL instrument as an assessment tool during encounters that were video-recorded (n = 21). Quantitative deductive content analysis was used to describe the content of different HRQOL instruments, statistical inference to analyse the DISABKIDS results, and qualitative content analysis to analyse the interviews and video-recordings. Results: The findings showed that, from a biopsychosocial perspective, both generic and disease-specific instruments should be used to obtain a comprehensive evaluation of the child's HRQOL. The DISABKIDS instrument is applicable for describing different aspects of health in children with T1D. When DISABKIDS was used in the encounters, children reported positive experiences of sharing their results with the healthcare professional. Different approaches by the healthcare professionals led to different outcomes for the child: with an instructing approach, the child's opportunity to learn about their health, and how to improve it, is limited; with an inviting or engaging approach, the child may become more involved in the conversation. Conclusions: HRQOL instruments could be used as a complement to biomedical variables, to promote a biopsychosocial perspective on the child's health. According to the children in this thesis, feedback on their results after answering web-DISABKIDS is important, which implies that healthcare professionals need to set aside time to discuss results from HRQOL instruments during encounters. If healthcare professionals involve the child in discussing the HRQOL results, misinterpreted answers can be corrected during the conversation. At the same time, this requires that healthcare professionals invite and engage the child.
Abstract:
The natural modes of a non-linear system with two degrees of freedom are investigated. The system, which may contain either hard or soft springs, is shown to possess three modes of vibration one of which does not have any counterpart in the linear theory. The stability analysis indicates the existence of seven different modal stability patterns depending on the values of two parameters of non-linearity.
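For concreteness, equations of the type studied might look as follows (an assumed representative form; the abstract does not give the actual equations): two coupled unit-mass oscillators with cubic springs, hard for positive and soft for negative cubic coefficients, with the two cubic coefficients playing the role of the two parameters of non-linearity.

```latex
\ddot{x}_1 + x_1 + \epsilon_1 x_1^{3} + k\,(x_1 - x_2) = 0, \qquad
\ddot{x}_2 + x_2 + \epsilon_2 x_2^{3} + k\,(x_2 - x_1) = 0
```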
Abstract:
This paper defines the 3D reconstruction problem as the process of reconstructing a 3D scene from numerous 2D visual images of that scene. It is well known that this problem is ill-posed, and numerous constraints and assumptions are used in 3D reconstruction algorithms in order to reduce the solution space. Unfortunately, most constraints only work in a certain range of situations, and constraints are often built into the most fundamental methods (e.g. area-based matching assumes that all the pixels in the window belong to the same object). This paper presents a novel formulation of the 3D reconstruction problem, using a voxel framework and first-order logic equations, which does not contain any additional constraints or assumptions. Solving this formulation for a set of input images gives all the possible solutions for that set, rather than picking the solution deemed most likely. Using this formulation, the paper studies the problem of uniqueness in 3D reconstruction and how the solution space changes for different configurations of input images. It is found that a unique solution cannot be guaranteed, no matter how many images are taken of the scene, what their orientation is, or even how much color variation is in the scene itself. Results of using the formulation to reconstruct a few small voxel spaces are also presented. They show that the number of solutions is extremely large even for very small voxel spaces (a 5 x 5 voxel space gives between 10 and 10^7 solutions). This shows the need for constraints to reduce the solution space to a reasonable size. Finally, it is noted that, because of the discrete nature of the formulation, the size of the solution space can be easily calculated, making the formulation a useful tool for numerically evaluating the usefulness of any constraints that are added.
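The flavour of the non-uniqueness result can be reproduced with a toy analogue (a hypothetical silhouette/tomography-style sketch, not the paper's first-order-logic formulation): enumerate every occupancy assignment of a tiny voxel grid and keep those consistent with all the "images", here axis-aligned projections.

```python
from itertools import product

def consistent_scenes(rows, cols):
    """Enumerate every binary occupancy grid whose axis-aligned
    projections (row and column sums) match the observations."""
    n, m = len(rows), len(cols)
    sols = []
    for bits in product([0, 1], repeat=n * m):
        g = [bits[i * m:(i + 1) * m] for i in range(n)]
        if [sum(r) for r in g] == rows and [sum(c) for c in zip(*g)] == cols:
            sols.append(g)
    return sols

# Even a 3 x 3 grid seen from two directions is ambiguous:
# these projections admit all 6 permutation matrices.
print(len(consistent_scenes([1, 1, 1], [1, 1, 1])))  # -> 6
```

Even this crude two-view setup leaves multiple scenes indistinguishable, matching the abstract's observation that more images alone cannot guarantee uniqueness.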
Abstract:
In this paper we propose a case base reduction technique which uses a metric defined on the solution space. The technique utilises the Generalised Shepard Nearest Neighbour (GSNN) algorithm to estimate nominal or real-valued solutions in case bases with solution-space metrics. An overview of GSNN is presented, together with a generalised reduction technique which subsumes some existing decremental methods, such as the Shrink algorithm. The reduction technique is given for case bases in terms of a measure of the importance of each case to the predictive power of the case base. A trial test is performed on two case bases of different kinds, with several metrics proposed in the solution space. The tests show that GSNN can outperform standard nearest neighbour methods on this set. Further test results show that a case-removal order based on a GSNN error function can produce a sparse case base with good predictive power.
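The estimation step can be sketched as follows (a hypothetical reading of GSNN based on the abstract, not the authors' code): weight the k nearest cases by inverse distance in the problem space, then return the case-base solution that minimises the weighted sum of squared solution-space distances.

```python
def gsnn_predict(query, cases, problem_dist, solution_dist, k=5, p=2):
    """cases: list of (problem, solution) pairs; the two *_dist
    arguments are metrics on the problem and solution spaces."""
    # k nearest cases in the problem space, Shepard-weighted
    ranked = sorted(cases, key=lambda c: problem_dist(query, c[0]))[:k]
    w = [1.0 / max(problem_dist(query, c[0]), 1e-12) ** p for c in ranked]
    # candidate solutions are those already present in the case base
    candidates = {c[1] for c in cases}
    def cost(s):
        return sum(wi * solution_dist(s, c[1]) ** 2
                   for wi, c in zip(w, ranked))
    return min(candidates, key=cost)
```

With the discrete metric on solutions this reduces to distance-weighted k-NN voting; a non-trivial solution-space metric is what lets the estimator exploit closeness between nominal values.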
Abstract:
Previous papers have noted the difficulty in obtaining neural models which are stable under simulation when trained using prediction-error-based methods. Here the differences between series-parallel and parallel identification structures for training neural models are investigated. The effect of the error surface shape on training convergence and simulation performance is analysed using a standard algorithm operating in both training modes. A combined series-parallel/parallel training scheme is proposed, aiming to provide a more effective means of obtaining accurate neural simulation models. Simulation examples show the combined scheme is advantageous in circumstances where the solution space is known or suspected to be complex.
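The two identification structures differ only in what is fed back into the regressor, which a short sketch makes plain (generic stand-in model `f`, an assumed first-order regressor, not the paper's networks):

```python
import numpy as np

def one_step_ahead(f, y, u):
    """Series-parallel mode: the regressor sees *measured* outputs."""
    return np.array([f(y[t - 1], u[t - 1]) for t in range(1, len(y))])

def free_run(f, y0, u):
    """Parallel (simulation) mode: the model is fed its *own*
    predictions, so errors can accumulate and the simulated model
    may be unstable even when one-step-ahead errors are small."""
    yhat = [y0]
    for t in range(1, len(u)):
        yhat.append(f(yhat[-1], u[t - 1]))
    return np.array(yhat)
```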
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes in which an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell-cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational and experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
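The sparsity-and-linearity idea can be made concrete with a small sketch (hypothetical code; the paper solves an LP formulation of the LASSO, whereas this sketch borrows scikit-learn's coordinate-descent solver, and the symmetrisation rule is an assumption): regress each gene on all the others with an l1 penalty and keep an undirected edge wherever a coefficient survives.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_network(X, alpha=0.05, tol=1e-4):
    """X: (samples x genes) transcript profile matrix.
    Returns a boolean adjacency matrix of the undirected network."""
    n_samples, n_genes = X.shape
    B = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        others = np.delete(np.arange(n_genes), j)
        # sparse linear regression of gene j on all other genes
        B[j, others] = Lasso(alpha=alpha).fit(X[:, others], X[:, j]).coef_
    # undirected edge if either of the two regressions kept the pair
    A = (np.abs(B) > tol) | (np.abs(B.T) > tol)
    np.fill_diagonal(A, False)
    return A
```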
Abstract:
The numerical solution in one space dimension of advection-reaction-diffusion systems with nonlinear source terms may incur a high computational cost when presently available methods are used. Numerous examples of finite volume schemes with high-order spatial discretisations, together with various techniques for approximating the advection term, can be found in the literature. Almost all such techniques result in a nonlinear system of equations as a consequence of the finite volume discretisation, especially when there are nonlinear source terms in the associated partial differential equation models. This work introduces a new technique that avoids generating such nonlinear systems of equations during the spatial discretisation process, provided the nonlinear source terms in the model equations can be expanded in positive powers of the dependent function of interest. The basis of the method is a new linearisation technique for the temporal integration of the nonlinear source terms, supplementing a more typical finite volume method. The resulting linear system of equations is shown to be both accurate and significantly faster than methods that necessitate solvers for nonlinear systems of equations.
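The kind of linearisation described can be illustrated on a single quadratic source term (a generic sketch in the same spirit, not the paper's scheme): evaluating the source at the new time level as a*u_old*u_new makes the implicit update a linear solve.

```python
import numpy as np

def step(u, dt, dx, D, a):
    """One implicit step of u_t = D*u_xx + a*u**2 on a periodic grid.
    Linearising the source as a*u_old*u_new means each time step
    needs only a linear solve instead of a Newton iteration."""
    n = u.size
    r = D * dt / dx**2
    # assemble (I - dt*L - dt*a*diag(u_old)) u_new = u_old
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + 2.0 * r - dt * a * u[i]
        A[i, (i - 1) % n] = -r
        A[i, (i + 1) % n] = -r
    return np.linalg.solve(A, u)
```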
Abstract:
A new structured discretization of 2D space, named X-discretization, is proposed to solve bivariate population balance equations using the framework of minimal internal consistency of discretization of Chakraborty and Kumar [2007, A new framework for solution of multidimensional population balance equations. Chem. Eng. Sci. 62, 4112-4125] for breakup and aggregation of particles. The 2D space of particle constituents (internal attributes) is discretized into bins by using arbitrarily spaced constant-composition radial lines and constant-mass lines of slope -1. The quadrilaterals are triangulated by using straight lines pointing towards the mean composition line. The monotonicity of the new discretization makes it quite easy to implement, like a rectangular grid, but with significantly reduced numerical dispersion. We use the new discretization of space to automate the expansion and contraction of the computational domain for the aggregation process, corresponding to the formation of larger particles and the disappearance of smaller particles, by adding and removing constant-mass lines at the boundaries. The results show that the predictions of particle size distribution on a fixed X-grid are in better agreement with the analytical solution than those obtained with earlier techniques. The simulations carried out with expansion and/or contraction of the computational domain as the population evolves show that the proposed strategy of evolving the computational domain with the aggregation process reduces the computational effort quite substantially; the larger the extent of evolution, the greater the reduction in computational effort.
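The geometry of the grid is easy to picture (a minimal sketch of the node layout as described in the abstract; names and spacings are illustrative): constant-mass lines satisfy x + y = m, constant-composition lines are rays through the origin, and the bin vertices are their intersections.

```python
import numpy as np

def x_grid_nodes(masses, fractions):
    """Intersections of constant-mass lines x + y = m (slope -1)
    with constant-composition rays x = f*m, y = (1 - f)*m.
    Growing or shrinking `masses` at either end mimics the paper's
    expansion and contraction of the computational domain."""
    return np.array([[f * m, (1.0 - f) * m]
                     for m in masses for f in fractions])
```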
Abstract:
The solution structure of the monomeric glutamine amidotransferase (GATase) subunit of the Methanocaldococcus jannaschii (Mj) guanosine monophosphate synthetase (GMPS) has been determined using high-resolution nuclear magnetic resonance (NMR) methods. Gel filtration chromatography and N-15 backbone relaxation studies have shown that the Mj GATase subunit is present in solution as a 21 kDa (188-residue) monomer. The ensemble of the 20 lowest-energy structures showed root-mean-square deviations of 0.35 ± 0.06 Å for backbone atoms and 0.8 ± 0.06 Å for all heavy atoms. Furthermore, 99.4% of the backbone dihedral angles lie in the allowed region of the Ramachandran map, indicating the stereochemical quality of the structure. The core of the tertiary structure of the GATase is composed of a seven-stranded mixed beta-sheet that is fenced by five alpha-helices. The Mj GATase is similar in structure to the Pyrococcus horikoshii (Ph) GATase subunit. NMR chemical shift perturbations and changes in line width were monitored to identify residues on GATase responsible for interaction with magnesium and with the ATPPase subunit, respectively. These interaction studies showed that a common surface exists for metal ion binding and for the protein-protein interaction. The dissociation constant for the GATase-Mg2+ interaction was found to be approximately 1 mM, which implies that the interaction is very weak and falls in the fast chemical exchange regime. The GATase-ATPPase interaction, on the other hand, falls in the intermediate chemical exchange regime on the NMR time scale. The implication of this interaction for the regulation of the GATase activity of holo GMPS is discussed.
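The ~1 mM estimate is the kind of number obtained from a standard fast-exchange titration analysis (the usual single-site binding model; the abstract does not state the exact fitting procedure used):

```latex
\delta_{\mathrm{obs}} = p_{\mathrm{free}}\,\delta_{\mathrm{free}}
                      + p_{\mathrm{bound}}\,\delta_{\mathrm{bound}},
\qquad
\Delta\delta_{\mathrm{obs}} = \Delta\delta_{\max}\,
  \frac{(K_d + P_t + L_t) - \sqrt{(K_d + P_t + L_t)^{2} - 4\,P_t L_t}}{2\,P_t}
```

where P_t and L_t are the total protein and ligand concentrations and K_d is extracted by fitting the observed shifts across the titration.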
Abstract:
Traditionally, the Internet provides only a “best-effort” service, treating all packets going to the same destination equally. However, providing differentiated services for different users based on their quality requirements is an increasingly pressing issue. For this, routers need the capability to distinguish and isolate traffic belonging to different flows. This ability to determine the flow each packet belongs to is called packet classification. Technology vendors are reluctant to support algorithmic solutions for classification because of their non-deterministic performance. Although CAMs are favoured by technology vendors for their deterministic, high lookup rates, they suffer from high power dissipation and high silicon cost. This paper presents a new algorithmic-architectural solution for packet classification that mixes CAMs with algorithms based on multi-level cutting of the classification space into smaller subspaces. The solution exploits the geometrical distribution of rules in the classification space, and provides the deterministic performance of CAMs, support for dynamic updates, and added flexibility for system designers.
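A minimal sketch of the cutting idea (hypothetical HiCuts-style code under assumptions about the data layout; in the paper's actual architecture the small leaf rule lists would map onto CAMs): recursively halve the 2-D rule space until each region holds few enough rules to scan.

```python
def build(rules, box, leaf_size=4, depth=0, max_depth=8):
    """rules: (lo0, hi0, lo1, hi1, action) rectangles; box: current
    (lo0, hi0, lo1, hi1) region. Leaves hold short rule lists, the
    part a small CAM would store in the hybrid design."""
    inside = [r for r in rules
              if r[0] <= box[1] and r[1] >= box[0]
              and r[2] <= box[3] and r[3] >= box[2]]
    if len(inside) <= leaf_size or depth == max_depth:
        return ('leaf', box, inside)
    lo0, hi0, lo1, hi1 = box
    if hi0 - lo0 >= hi1 - lo1:            # cut the wider dimension
        m = (lo0 + hi0) // 2
        halves = [(lo0, m, lo1, hi1), (m + 1, hi0, lo1, hi1)]
    else:
        m = (lo1 + hi1) // 2
        halves = [(lo0, hi0, lo1, m), (lo0, hi0, m + 1, hi1)]
    return ('node', box,
            [build(inside, b, leaf_size, depth + 1, max_depth) for b in halves])

def classify(node, pkt):
    """Descend the cut tree, then linearly scan the leaf's rules."""
    while node[0] == 'node':
        node = next(c for c in node[2]
                    if c[1][0] <= pkt[0] <= c[1][1]
                    and c[1][2] <= pkt[1] <= c[1][3])
    return next((r[4] for r in node[2]
                 if r[0] <= pkt[0] <= r[1] and r[2] <= pkt[1] <= r[3]),
                None)
```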