946 results for Fundamentals in linear algebra
Abstract:
The processing of biological motion is a critical, everyday task performed with remarkable efficiency by human sensory systems. Interest in this ability has focused to a large extent on biological motion processing in the visual modality (see, for example, Cutting, J. E., Moore, C., & Morrison, R. (1988). Masking the motions of human gait. Perception and Psychophysics, 44(4), 339-347). In naturalistic settings, however, it is often the case that biological motion is defined by input to more than one sensory modality. For this reason, here in a series of experiments we investigate behavioural correlates of multisensory, in particular audiovisual, integration in the processing of biological motion cues. More specifically, using a new psychophysical paradigm we investigate the effect of suprathreshold auditory motion on perceptions of visually defined biological motion. Unlike data from previous studies investigating audiovisual integration in linear motion processing [Meyer, G. F. & Wuerger, S. M. (2001). Cross-modal integration of auditory and visual motion signals. Neuroreport, 12(11), 2557-2560; Wuerger, S. M., Hofbauer, M., & Meyer, G. F. (2003). The integration of auditory and motion signals at threshold. Perception and Psychophysics, 65(8), 1188-1196; Alais, D. & Burr, D. (2004). No direction-specific bimodal facilitation for audiovisual motion detection. Cognitive Brain Research, 19, 185-194], we report the existence of direction-selective effects: relative to control (stationary) auditory conditions, auditory motion in the same direction as the visually defined biological motion target increased its detectability, whereas auditory motion in the opposite direction had the inverse effect. 
Our data suggest these effects do not arise through general shifts in visuo-spatial attention, but instead are a consequence of motion-sensitive, direction-tuned integration mechanisms that are, if not unique to biological visual motion, at least not common to all types of visual motion. Based on these data and evidence from neurophysiological and neuroimaging studies we discuss the neural mechanisms likely to underlie this effect.
Abstract:
BACKGROUND: Three non-synonymous single nucleotide polymorphisms (Q223R, K109R and K656N) of the leptin receptor gene (LEPR) have been tested for association with obesity-related outcomes in multiple studies, showing inconclusive results. We performed a systematic review and meta-analysis on the association of the three LEPR variants with BMI. In addition, we analysed 15 SNPs within the LEPR gene in the CoLaus study, assessing the interaction of the variants with sex. METHODOLOGY/PRINCIPAL FINDINGS: We searched electronic databases, including population-based studies that investigated the association between LEPR variants Q223R, K109R and K656N and obesity-related phenotypes in healthy, unrelated subjects. We furthermore performed meta-analyses of the genotype and allele frequencies in case-control studies. Results were stratified by SNP and by potential effect modifiers. CoLaus data were analysed by logistic and linear regressions and tested for interaction with sex. The meta-analysis of published data did not show an overall association between any of the tested LEPR variants and overweight. However, the choice of a BMI cut-off value to distinguish cases from controls was crucial to explain heterogeneity in Q223R. Differences in allele frequencies across ethnic groups are compatible with natural selection of derived alleles in Q223R and K109R and of the ancestral allele in K656N in Asians. In CoLaus, the rs10128072, rs3790438 and rs3790437 variants showed interaction with sex for their association with overweight, waist circumference and fat mass in linear regressions. CONCLUSIONS: Our systematic review and analysis of primary data from the CoLaus study did not show an overall association between LEPR SNPs and overweight. Most studies were underpowered to detect small effect sizes. A potential effect modification by sex, population stratification, as well as the role of natural selection should be addressed in future genetic association studies.
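The inverse-variance fixed-effect pooling that underlies a meta-analysis of this kind can be sketched as follows. A minimal sketch: the per-study log odds ratios and standard errors below are purely illustrative values, not data from the studies reviewed in this abstract.

```python
import math

# Hypothetical per-study log odds ratios and standard errors for a variant
# (illustrative values only; not data from the studies reviewed above).
log_ors = [0.10, -0.05, 0.20, 0.02]
ses = [0.08, 0.12, 0.15, 0.06]

# Fixed-effect (inverse-variance) pooling: w_i = 1/se_i^2,
# pooled estimate = sum(w_i * theta_i) / sum(w_i).
weights = [1.0 / se**2 for se in ses]
pooled = sum(w * t for w, t in zip(weights, log_ors)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval on the log-OR scale, then back-transformed.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR = {math.exp(pooled):.3f} "
      f"(95% CI {math.exp(lo):.3f}-{math.exp(hi):.3f})")
```

Precisely estimated studies (small standard errors) dominate the pooled estimate, which is why single underpowered studies, as noted in the conclusions, contribute little on their own.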
Abstract:
This paper fills a gap in the existing literature on least squares learning in linear rational expectations models by studying a setup in which agents learn by fitting ARMA models to a subset of the state variables. This is a natural specification in models with private information because in the presence of hidden state variables, agents have an incentive to condition forecasts on the infinite past records of observables. We study a particular setting in which it suffices for agents to fit a first order ARMA process, which preserves the tractability of a finite dimensional parameterization, while permitting conditioning on the infinite past record. We describe how previous results (Marcet and Sargent [1989a, 1989b]) can be adapted to handle the convergence of estimators of an ARMA process in our self-referential environment. We also study "rates" of convergence analytically and via computer simulation.
Abstract:
We argue that during the crystallization of common and civil law in the 19th century, the optimal degree of discretion in judicial rulemaking, albeit influenced by the comparative advantages of both legislative and judicial rulemaking, was mainly determined by the anti-market biases of the judiciary. The different degrees of judicial discretion adopted in both legal traditions were thus optimally adapted to different circumstances, mainly rooted in the unique, market-friendly, evolutionary transition enjoyed by English common law as opposed to the revolutionary environment of the civil law. On the Continent, constraining judicial discretion was essential for enforcing freedom of contract and establishing a market economy. The ongoing debasement of pro-market fundamentals in both branches of the Western legal system is explained from this perspective as a consequence of increased perceptions of exogenous risks and changes in the political system, which favored the adoption of sharing solutions and removed the cognitive advantage of parliaments and political leaders.
Abstract:
This paper considers a general and informationally efficient approach to determine the optimal access pricing rule for interconnected networks. It shows that there exists a simple rule that achieves the Ramsey outcome as the unique equilibrium when networks compete in linear prices without network-based price discrimination. The approach is informationally efficient in the sense that the regulator is required to know only the marginal cost structure, i.e. the marginal cost of making and terminating a call. The approach is general in that access prices can depend not only on the marginal costs but also on the retail prices, which can be observed by consumers and therefore by the regulator as well. In particular, I consider the set of linear access pricing rules which includes any fixed access price, the Efficient Component Pricing Rule (ECPR) and the Modified ECPR as special cases. I show that in this set, there is a unique rule that implements the Ramsey outcome as the unique equilibrium independently of the underlying demand conditions.
Abstract:
We consider the joint visualization of two matrices which have common rows and columns, for example multivariate data observed at two time points or split according to a dichotomous variable. Methods of interest include principal components analysis for interval-scaled data, or correspondence analysis for frequency data or ratio-scaled variables on commensurate scales. A simple result in matrix algebra shows that by setting up the matrices in a particular block format, matrix sum and difference components can be visualized. The case when we have more than two matrices is also discussed and the methodology is applied to data from the International Social Survey Program.
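The block-format result mentioned in this abstract can be illustrated numerically. A minimal sketch, assuming two arbitrary matrices X and Y with matching rows and columns: the singular values of the block matrix [[X, Y], [Y, X]] are exactly those of X+Y together with those of X-Y, so an SVD of the block matrix jointly displays sum and difference components.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))  # e.g. data at time point 1
Y = rng.normal(size=(5, 3))  # same rows and columns at time point 2

# Block format: B = [[X, Y], [Y, X]]
B = np.block([[X, Y], [Y, X]])

# Rotating by (1/sqrt(2))[[I, I], [I, -I]] on both sides block-diagonalizes
# B into diag(X+Y, X-Y), so the singular values of B are those of X+Y
# together with those of X-Y.
sv_block = np.sort(np.linalg.svd(B, compute_uv=False))
sv_sum_diff = np.sort(np.concatenate([
    np.linalg.svd(X + Y, compute_uv=False),
    np.linalg.svd(X - Y, compute_uv=False),
]))
print(np.allclose(sv_block, sv_sum_diff))
```

A single biplot of B therefore shows what the two matrices have in common (the sum part) and how they differ (the difference part) in one display.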
Abstract:
We study a retail benchmarking approach to determine access prices for interconnected networks. Instead of considering fixed access charges as in the existing literature, we study access pricing rules that determine the access price that network i pays to network j as a linear function of the marginal costs and the retail prices set by both networks. In the case of competition in linear prices, we show that there is a unique linear rule that implements the Ramsey outcome as the unique equilibrium, independently of the underlying demand conditions. In the case of competition in two-part tariffs, we consider a class of access pricing rules, similar to the optimal one under linear prices but based on average retail prices. We show that firms choose the variable price equal to the marginal cost under this class of rules. Therefore, the regulator (or the competition authority) can choose one among the rules to pursue additional objectives such as consumer surplus, network coverage or investment: for instance, we show that both static and dynamic efficiency can be achieved at the same time.
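The kind of rule this abstract describes, an access charge that is a linear function of the marginal costs and both networks' retail prices, can be sketched generically. The functional form and coefficient values below are illustrative assumptions only, not the paper's derived optimal rule.

```python
def access_charge(c_term: float, c_orig: float,
                  p_own: float, p_rival: float,
                  alpha: float, beta: float) -> float:
    """Hypothetical linear retail-benchmarking rule: the charge that
    network i pays network j is cost-based plus linear adjustments
    driven by the retail prices both networks set."""
    retail_margin = p_rival - c_orig - c_term   # rival's retail markup
    price_gap = p_own - p_rival                 # relative retail positioning
    return c_term + alpha * retail_margin + beta * price_gap

# Example with made-up numbers: termination cost 1.0, origination cost 0.5,
# own retail price 3.0, rival retail price 2.8, illustrative coefficients.
a = access_charge(c_term=1.0, c_orig=0.5, p_own=3.0, p_rival=2.8,
                  alpha=0.25, beta=0.1)
print(round(a, 3))
```

The point of such a rule is that the access charge moves with observable retail prices, so the regulator needs to know only the marginal cost structure, as the abstract emphasizes.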
Abstract:
Iowa ended its third year of a moderate economic recovery as fiscal year 2012 came to a close. Though many of the fundamentals in the state’s economy reflected strength during the year, employment had not returned to its pre-recession level, and job growth remained tepid. Furthermore, there was a distinct dichotomy in where hiring occurred. Most of the state’s job growth was concentrated in the goods-producing industries of construction and manufacturing, while the service-providing industries showed little momentum except for healthcare. Within the manufacturing sector, machinery products was one of the state’s fastest-growing subsectors in 2011, accounting for the creation of several thousand higher-paying jobs. The state’s nonfarm employment advanced by 12,200 in FY 2012 led primarily by growth in manufacturing and construction, which were up 9,900 and 3,800, respectively. Healthcare was the strongest of the service-providing industries with an annual gain of 2,600 jobs, while government continued to be the biggest drag on the statewide economy. Although all three levels of government employment dropped from one year ago, state government lost the most jobs at 1,900.
Abstract:
Planning with partial observability can be formulated as a non-deterministic search problem in belief space. The problem is harder than classical planning, as keeping track of beliefs is harder than keeping track of states, and searching for action policies is harder than searching for action sequences. In this work, we develop a framework for partial observability that avoids these difficulties and leads to a planner that scales up to larger problems. For this, the class of problems is restricted to those in which 1) the non-unary clauses representing the uncertainty about the initial situation are invariant, and 2) variables that are hidden in the initial situation do not appear in the body of conditional effects, which are all assumed to be deterministic. We show that such problems can be translated in linear time into equivalent fully observable non-deterministic planning problems, and that a slight extension of this translation renders the problem solvable by means of classical planners. The whole approach is sound and complete provided that, in addition, the state space is connected. Experiments are also reported.
Abstract:
Background: Type 2 diabetes (T2D) is associated with increased fracture risk but paradoxically greater BMD. TBS (trabecular bone score), a novel grey-level texture measurement extracted from DXA images, correlates with 3D parameters of bone micro-architecture. We evaluated the ability of lumbar spine (LS) TBS to account for the increased fracture risk in diabetes. Methods: 29,407 women ≥50 years at the time of baseline hip and spine DXA were identified from a database containing all clinical BMD results for the Province of Manitoba, Canada. 2,356 of the women satisfied a well-validated definition for diabetes, the vast majority of whom (>90%) would have T2D. LS (L1-L4) TBS was derived for each spine DXA examination blinded to clinical parameters and outcomes. Health service records were assessed for incident non-traumatic major osteoporotic fracture codes (mean follow-up 4.7 years). Results: In linear regression adjusted for FRAX risk factors (age, BMI, glucocorticoids, prior major fracture, rheumatoid arthritis, COPD as a smoking proxy, alcohol abuse) and osteoporosis therapy, diabetes was associated with higher BMD for LS, femoral neck and total hip but lower LS TBS (all p<0.001). Similar results were seen after excluding obese subjects with BMI >30. In logistic regression (Figure), the adjusted odds ratio (OR) for a skeletal measurement in the lowest vs highest tertile was less than 1 for all BMD measurements but increased for LS TBS (adjusted OR 2.61, 95% CI 2.30-2.97). Major osteoporotic fractures were identified in 175 (7.4%) with and 1,493 (5.5%) without diabetes (p<0.001). LS TBS predicted fractures in those with diabetes (adjusted HR 1.27, 95% CI 1.10-1.46) and without diabetes (HR 1.31, 95% CI 1.24-1.38). LS TBS was an independent predictor of fracture (p<0.05) when further adjusted for BMD (LS, femoral neck or total hip).
The explanatory effect of diabetes in the fracture prediction model was greatly reduced when LS TBS was added to the model (indicating that TBS captured a large portion of the diabetes-associated risk), but was paradoxically increased by adding any of the BMD measurements. Conclusions: Lumbar spine TBS is sensitive to skeletal deterioration in postmenopausal women with diabetes, whereas BMD is paradoxically greater. LS TBS predicts osteoporotic fractures in those with diabetes, and captures a large portion of the diabetes-associated fracture risk. Combining LS TBS with BMD incrementally improves fracture prediction.
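The tertile-based odds ratios reported in this abstract are covariate-adjusted estimates from logistic regression. As a minimal illustration of the underlying quantity, the following sketch computes an unadjusted odds ratio and its Wald confidence interval from a 2x2 table; the counts are made up for illustration and are not the study's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR = (a*d)/(b*c) from a 2x2 table
    (a, b: outcome yes/no in the exposed group;
     c, d: outcome yes/no in the unexposed group),
    with a Wald confidence interval on the log scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: fractures among women in the lowest vs highest
# tertile of a skeletal measurement.
or_, lo, hi = odds_ratio_ci(a=80, b=920, c=40, d=960)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

An OR above 1 for the lowest-vs-highest tertile contrast, as reported for LS TBS, means a low value of the measurement is associated with higher fracture odds.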
Abstract:
The aim of this work was to identify the key rotor variables and operating parameters of a pressure screen, and their effects, when post-screening bleached chemical pulp at high consistency. The goal was technically successful screening at high consistency such that the cleaning efficiency of the device is good, the specific energy consumption is low and the capacity is high. First, however, the problem field of pressure screening of bleached pulp was examined theoretically, along with the removal of the key contaminants in post-screening, and high- and low-consistency screening were compared. In addition, the principles and methods of design-of-experiments techniques and their analysis, i.e. modelling, were presented at a general level. Against this background, the aim was also to map and present the key variable field of the pressure screen. Subsequently, it was investigated experimentally how the consistency of the bleached pine pulp being processed, the rotor tip speed and changes in the rotor structure affect the capacity, reject thickening factor, specific energy consumption and cleaning efficiency of the pressure screen. The effects, the best rotor structure and the operating parameters were determined by modelling the measurement results with linear regression analysis. This yielded the most important independent variables affecting the responses and their mathematical representations, the model equations. Using the modelling, the operation of one rotor was also examined separately. The most important findings were that the cleaning efficiency is nearly constant for a given rotor structure regardless of consistency and rotor tip speed. For cleaning efficiency it is advantageous to use primarily a large rotor foil-element height and clearance relative to the screen cylinder. Post-screening with a pressure screen at high consistency should be carried out at a low rotor tip speed, with a small body- and foil-element clearance relative to the screen cylinder and with a large foil-element height.
This achieves a good compromise between specific energy consumption and the runnability of the device. A large element height and a small element clearance do not drive the pulp effectively to the lower part of the device, so a large quantity of pulp is not being rotated either. A small element clearance also enables more effective shearing of the pulp, which improves fluidization. The friction forces are naturally greater under these conditions, so the specific energy consumption increases somewhat. Most decisive for the specific energy consumption, however, is to run the device at a low tip speed.
Abstract:
Fuzzy subsets and fuzzy subgroups are basic concepts in fuzzy mathematics. We concentrate on fuzzy subgroups, dealing with some of their algebraic, topological and complex analytical properties. The explorations are theoretical, belonging to pure mathematics. One of our aims is to show how widely fuzzy subgroups can be used in mathematics, which brings out the wealth of this concept. In complex analysis we focus on Möbius transformations, combining them with fuzzy subgroups in the algebraic and topological sense. We also survey MV spaces with or without a link to fuzzy subgroups. The spectral space is a known concept in MV algebra; we are interested in its topological properties in MV-semilinear space. Later on, we study MV algebras in connection with Riemann surfaces. The Riemann surface as a concept belongs to complex analysis; on the other hand, Möbius transformations form a part of the theory of Riemann surfaces. Overall, this work gives a good understanding of how it is possible to fit together different fields of mathematics.
Abstract:
The singular properties of hydrogenated amorphous carbon (a-C:H) thin films deposited by pulsed DC plasma enhanced chemical vapor deposition (PECVD), such as hardness and wear resistance, make them suitable as a protective coating with low surface energy for self-assembly applications. In this paper, we designed fluorine-containing a-C:H (a-C:H:F) nanostructured surfaces and characterized them for self-assembly applications. Sub-micron patterns were generated on silicon through laser lithography, while contact angle measurements, a nanotribometer, atomic force microscopy (AFM) and scanning electron microscopy (SEM) were used to characterize the surface. a-C:H:F properties on lithographed surfaces, such as hydrophobicity and friction, were improved with the proper relative quantity of CH4 and CHF3 during deposition, resulting in ultrahydrophobic samples and low friction coefficients. Furthermore, these properties were enhanced along the direction of the lithography patterns (in-plane anisotropy). Finally, self-assembly properties were tested with silica nanoparticles, which were successfully assembled in linear arrays following the generated patterns. Among the main applications, these surfaces could be suitable as particle filter selectors and cell colony substrates.
Abstract:
Today's organizations must have the ability to react to rapid changes in the market. These rapid changes create pressure to continuously find new, efficient ways to organize work practices. Increased competition requires businesses to become more effective, to pay attention to the quality of management and to make people understand their work's impact on the final result. The fundamentals of continuous improvement are the systematic and agile tackling of identified individual process constraints and the fact that nothing finally improves without changes. Successful continuous improvement requires management commitment, education, implementation, measurement, recognition and regeneration. These ingredients form the foundation both for breakthrough projects and for small-step ongoing improvement activities. One part of an organization's management system is the quality tools, which provide systematic methodologies for identifying problems, defining their root causes, finding solutions, gathering and sorting data, supporting decision making, implementing changes and many other management tasks. Organizational change management includes processes and tools for managing people in an organization-level change. These tools include a structured approach, which can be used for the effective transition of organizations through change. When combined with an understanding of change management at the individual level, these tools provide a framework for managing people in change.
Abstract:
Low-energy construction poses new kinds of challenges and opportunities for heat energy production. The design heat loads of heating systems do not decrease in the same proportion as heating energy consumption, which favours low investment costs at the expense of variable costs. This work examined five alternative ways to meet the annual heat energy demand of the building stock in the target area. The target area consisted mainly of low-energy apartment buildings. Four alternatives were based on district heating and one on a low-energy network equipped with building-specific heat pumps. A centralized heat pump solution located at a nearby wastewater treatment plant proved to be the most economical alternative, in total costs, for meeting the heat energy demand of the building stock in the target area. A small-scale CHP plant using wood chips as fuel had, correspondingly, the smallest carbon footprint, but its cost structure was unfavourable. The target area and the alternative heating systems were modelled with the GaBi 4.3 life-cycle modelling software to determine the carbon footprints of the alternatives.