Abstract:
Early, lesion-based models of language processing suggested that semantic and phonological processes are associated with distinct temporal and parietal regions respectively, with frontal areas more indirectly involved. Contemporary spatial brain mapping techniques have not supported such clear-cut segregation, with strong evidence of activation in left temporal areas by both processes and disputed evidence of involvement of frontal areas in both processes. We suggest that combining spatial information with temporal and spectral data may allow a closer scrutiny of the differential involvement of closely overlapping cortical areas in language processing. Using beamforming techniques to analyze magnetoencephalography data, we localized the neuronal substrates underlying primed responses to nouns requiring either phonological or semantic processing, and examined the associated measures of time and frequency in those areas where activation was common to both tasks. Power changes in the beta (14-30 Hz) and gamma (30-50 Hz) frequency bands were analyzed in pre-selected time windows of 350-550 and 500-700 ms. In left temporal regions, both tasks elicited power changes in the same time window (350-550 ms), but with different spectral characteristics, low beta (14-20 Hz) for the phonological task and high beta (20-30 Hz) for the semantic task. In frontal areas (BA10), both tasks elicited power changes in the gamma band (30-50 Hz), but in different time windows, 500-700 ms for the phonological task and 350-550 ms for the semantic task. In the left inferior parietal area (BA40), both tasks elicited changes in the 20-30 Hz beta frequency band but in different time windows, 350-550 ms for the phonological task and 500-700 ms for the semantic task. Our findings suggest that, where spatial measures may indicate overlapping areas of involvement, additional beamforming techniques can demonstrate differential activation in time and frequency domains.
© 2012 McNab, Hillebrand, Swithenby and Rippon.
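The core measurement above, power in a frequency band within a pre-selected time window, can be illustrated with a minimal sketch. This is a toy on a synthetic sine, not the authors' beamforming pipeline; the sampling rate, signal, and window are assumptions.

```python
import math

def band_power(x, fs, f_lo, f_hi):
    """Sum of squared DFT magnitudes over frequency bins in [f_lo, f_hi] Hz.

    Naive O(N^2) DFT; fine for short windows, illustration only.
    """
    n = len(x)
    total = 0.0
    for k in range(n // 2 + 1):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += re * re + im * im
    return total

fs = 200  # Hz (assumed sampling rate)
# 2 s of a pure 20 Hz oscillation, i.e. a "low beta" component.
sig = [math.sin(2 * math.pi * 20.0 * t / fs) for t in range(2 * fs)]

# Restrict the analysis to the 350-550 ms window, as in the pre-selected windows above.
window = sig[round(0.350 * fs):round(0.550 * fs)]
low_beta = band_power(window, fs, 14, 20)
gamma = band_power(window, fs, 30, 50)
```

As expected for this signal, virtually all the windowed power falls in the low beta band and essentially none in the gamma band; a real pipeline would compare such band/window estimates between task conditions.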
Abstract:
Humans consciously and subconsciously establish various links, form semantic images, reason in mind, learn linking effects and rules, select linked individuals to interact with, and form closed loops through links while co-experiencing in multiple spaces over a lifetime. Machines are limited in these abilities, although various graph-based models have been used to link resources in the cyber space. The fundamental limitations of machine intelligence are: (1) machines know few links and rules in the physical, physiological, psychological, socio and mental spaces, so it is not realistic to expect machines to discover laws and solve problems in these spaces; and (2) machines can only process pre-designed algorithms and data structures in the cyber space. They are limited in their ability to go beyond the cyber space, to learn linking rules, to know the effect of linking, and to explain computing results according to physical, physiological, psychological and socio laws. Linking these various spaces will create a complex space, the Cyber-Physical-Physiological-Psychological-Socio-Mental Environment (CP3SME). Diverse spaces will emerge, evolve, compete and cooperate with each other to extend machine intelligence and human intelligence. From a multi-disciplinary perspective, this paper reviews previous ideas on various links, introduces the concept of cyber-physical society, proposes the ideal of the CP3SME, including its definition, characteristics, and multi-disciplinary revolution, and explores the methodology of linking through spaces for cyber-physical-socio intelligence. The methodology includes new models, principles, mechanisms, scientific issues, and philosophical explanation. The CP3SME aims at an ideal environment for humans to live and work. Exploration will go beyond previous ideals on intelligence and computing.
Abstract:
The behaviour of self-adaptive systems can be emergent. The difficulty in predicting the system's behaviour means that there is scope for the system to surprise its customers and its developers. Because its behaviour is emergent, a self-adaptive system needs to garner the confidence of its customers, and it needs to resolve any surprise on the part of the developer during testing and maintenance. We believe that these two functions can only be achieved if a self-adaptive system is also capable of self-explanation. We argue that a self-adaptive system's behaviour needs to be explained in terms of satisfaction of its requirements. Since self-adaptive system requirements may themselves be emergent, a means needs to be found to explain the current behaviour of the system and the reasons that brought that behaviour about. We propose the use of goal-based models during runtime to offer self-explanation of how a system is meeting its requirements, and why the means of meeting these were chosen. We discuss the results of early experiments in self-explanation, and set out future work. © 2012 C.E.S.A.M.E.S.
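The idea of querying a goal-based model at runtime for self-explanation can be sketched as follows. The goal tree, observations, and names are hypothetical illustrations, not the paper's models; a real system would derive them from its requirements model and monitoring infrastructure.

```python
# Hypothetical AND-decomposed goal model: a parent goal holds only if all subgoals hold.
goal_tree = {"MaintainService": ["LowLatency", "HighAvailability"]}

# Assumed runtime observations of the leaf requirements (monitoring output).
observations = {"LowLatency": True, "HighAvailability": False}

def satisfied(goal):
    """A leaf goal is read from observations; an AND-goal needs all its subgoals."""
    subgoals = goal_tree.get(goal)
    if subgoals is None:
        return observations[goal]
    return all(satisfied(s) for s in subgoals)

def explain_failure(goal):
    """Trace a failing goal down to the leaf requirements that caused it."""
    subgoals = goal_tree.get(goal)
    if subgoals is None:
        return [] if observations[goal] else [goal]
    return [leaf for s in subgoals for leaf in explain_failure(s)]
```

Here `satisfied("MaintainService")` is false and `explain_failure` pinpoints `HighAvailability` as the unmet leaf requirement, which is the kind of requirements-level explanation argued for above.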
Abstract:
Guest editorial
Ali Emrouznejad is a Senior Lecturer at the Aston Business School in Birmingham, UK. His areas of research interest include performance measurement and management, efficiency and productivity analysis, as well as data mining. He has published widely in various international journals. He is an Associate Editor of the IMA Journal of Management Mathematics and Guest Editor of several special issues of journals including the Journal of the Operational Research Society, Annals of Operations Research, Journal of Medical Systems, and International Journal of Energy Sector Management. He is on the editorial boards of several international journals and a co-founder of Performance Improvement Management Software. William Ho is a Senior Lecturer at the Aston Business School. Before joining Aston in 2005, he worked as a Research Associate in the Department of Industrial and Systems Engineering at the Hong Kong Polytechnic University. His research interests include supply chain management, production and operations management, and operations research. He has published extensively in international journals such as Computers & Operations Research, Engineering Applications of Artificial Intelligence, European Journal of Operational Research, Expert Systems with Applications, International Journal of Production Economics, International Journal of Production Research, and Supply Chain Management: An International Journal. His first authored book was published in 2006. He is an Editorial Board member of the International Journal of Advanced Manufacturing Technology and an Associate Editor of the OR Insight Journal. Currently, he is a Scholar of the Advanced Institute of Management Research.
Uses of frontier efficiency methodologies and multi-criteria decision making for performance measurement in the energy sector
This special issue focuses on holistic, applied research on performance measurement in energy sector management, and on publishing relevant applied research that bridges the gap between industry and academia. After a rigorous refereeing process, seven papers were included in this special issue. The volume opens with five data envelopment analysis (DEA)-based papers. Wu et al. apply the DEA-based Malmquist index to evaluate the changes in relative efficiency and the total factor productivity of coal-fired electricity generation of 30 Chinese administrative regions from 1999 to 2007. Factors considered in the model include fuel consumption, labor, capital, sulphur dioxide emissions, and electricity generated. The authors reveal that the east provinces were relatively and technically more efficient, whereas the west provinces had the highest growth rate in the period studied. Ioannis E. Tsolas applies the DEA approach to assess the performance of Greek fossil fuel-fired power stations, taking undesirable outputs such as carbon dioxide and sulphur dioxide emissions into consideration. In addition, the bootstrapping approach is deployed to address the uncertainty surrounding DEA point estimates, and to provide bias-corrected estimations and confidence intervals for the point estimates. The author reveals that the non-lignite-fired stations in the sample are on average more efficient than the lignite-fired stations. Maethee Mekaroonreung and Andrew L. Johnson compare the relative performance of three DEA-based measures, which estimate production frontiers and evaluate the relative efficiency of 113 US petroleum refineries while considering undesirable outputs.
Three inputs (capital, energy consumption, and crude oil consumption), two desirable outputs (gasoline and distillate generation), and an undesirable output (toxic release) are considered in the DEA models. The authors discover that refineries in the Rocky Mountain region performed the best, and about 60 percent of oil refineries in the sample could improve their efficiencies further. H. Omrani, A. Azadeh, S. F. Ghaderi, and S. Abdollahzadeh present an integrated approach, combining DEA, corrected ordinary least squares (COLS), and principal component analysis (PCA) methods, to calculate the relative efficiency scores of 26 Iranian electricity distribution units from 2003 to 2006. Specifically, both DEA and COLS are used to check three internal consistency conditions, whereas PCA is used to verify and validate the final ranking results of either DEA (consistency) or DEA-COLS (non-consistency). Three inputs (network length, transformer capacity, and number of employees) and two outputs (number of customers and total electricity sales) are considered in the model. Virendra Ajodhia applies three DEA-based models to evaluate the relative performance of 20 electricity distribution firms from the UK and the Netherlands. The first model is a traditional DEA model for analyzing cost-only efficiency. The second model includes (inverse) quality by modelling total customer minutes lost as an input. The third model is based on the idea of using total social costs, including the firm’s private costs and the interruption costs incurred by consumers, as an input. Both energy delivered and number of consumers are treated as outputs in the models. After the five DEA papers, Stelios Grafakos, Alexandros Flamos, Vlasis Oikonomou, and D. Zevgolis present a multiple criteria analysis weighting approach to evaluate energy and climate policy.
The proposed approach is akin to the analytic hierarchy process, which consists of pairwise comparisons, consistency verification, and criteria prioritization. In the approach, stakeholders and experts in the energy policy field are incorporated in the evaluation process and provided with an interactive means of expressing their preferences verbally, numerically, and visually. A total of 14 evaluation criteria were considered and classified under four objectives: climate change mitigation, energy effectiveness, socioeconomics, and competitiveness and technology. Finally, Borge Hess applies the stochastic frontier analysis approach to analyze the impact of various business strategies, including acquisitions, holding structures, and joint ventures, on a firm’s efficiency within a sample of 47 natural gas transmission pipelines in the USA from 1996 to 2005. The author finds no significant changes in a firm’s efficiency following an acquisition, and only weak evidence of efficiency improvements caused by the new shareholder. Moreover, the author discovers that parent companies appear not to influence a subsidiary’s efficiency positively. In addition, the analysis shows a negative impact of a joint venture on the technical efficiency of the pipeline company. To conclude, we are grateful to all the authors for their contributions, and to all the reviewers for their constructive comments, which made this special issue possible. We hope that this issue will contribute significantly to performance improvement in the energy sector.
Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, this thesis first provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. This thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is demonstrated through the implementation of a Web-based workflow for predicting future crop yields in the UK, which also demonstrates the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
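The central idea of a processing service, a model exposed behind a JSON request/response interface, can be sketched with a minimal handler. The model, parameter names, and error format below are illustrative assumptions, not the thesis's actual framework or the WPS interface.

```python
import json

def crop_model(rainfall_mm, temperature_c):
    """Toy stand-in for a geospatial model (illustrative only)."""
    return 0.01 * rainfall_mm + 0.2 * temperature_c

def handle_request(body):
    """Parse a JSON request, run the model, and return a JSON response.

    A real processing service would wrap this in an HTTP endpoint and
    advertise the inputs/outputs in a service description (e.g. WSDL).
    """
    try:
        params = json.loads(body)
        result = crop_model(params["rainfall_mm"], params["temperature_c"])
    except (KeyError, ValueError, TypeError) as exc:
        return json.dumps({"error": str(exc)})
    return json.dumps({"yield_t_per_ha": result})

response = handle_request('{"rainfall_mm": 600, "temperature_c": 15}')
```

Keeping the model callable behind a thin serialization layer like this is what lets the same model be consumed by SOAP, JSON, or workflow tooling without modification.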
Abstract:
Cleavage by the proteasome is responsible for generating the C terminus of T-cell epitopes. Modeling the process of proteasome cleavage as part of a multi-step algorithm for T-cell epitope prediction will reduce the number of non-binders and increase the overall accuracy of the predictive algorithm. Quantitative matrix-based models for prediction of the proteasome cleavage sites in a protein were developed using a training set of 489 naturally processed T-cell epitopes (nonamer peptides) associated with HLA-A and HLA-B molecules. The models were validated using an external test set of 227 T-cell epitopes. The performance of the models was good, identifying 76% of the C-termini correctly. The best model of proteasome cleavage was incorporated as the first step in a three-step algorithm for T-cell epitope prediction, where subsequent steps predicted TAP affinity and MHC binding using previously derived models.
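A quantitative-matrix predictor of this kind scores candidate cleavage positions by summing position-specific residue weights around the scissile bond. The tiny matrix below is a made-up illustration of the mechanism, not the trained matrix from the study.

```python
# Hypothetical position-specific weights, keyed by (offset from the cleavage
# point, residue). Offset -1 is the residue immediately N-terminal of the
# scissile bond (P1); +1 is the residue immediately C-terminal of it (P1').
weights = {
    (-1, "L"): 1.2, (-1, "A"): 0.1,
    (+1, "K"): 0.8, (+1, "G"): -0.3,
}

def cleavage_score(seq, pos):
    """Score a candidate cleavage point between seq[pos-1] and seq[pos]."""
    return weights.get((-1, seq[pos - 1]), 0.0) + weights.get((+1, seq[pos]), 0.0)

seq = "AALKGG"
# Predicted cleavage site = the highest-scoring position (here, after the L).
best = max(range(1, len(seq)), key=lambda p: cleavage_score(seq, p))
```

A trained matrix would cover all 20 residues over a wider window and apply a threshold; the predicted C-terminus then feeds the TAP-affinity and MHC-binding steps of the multi-step algorithm.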
Abstract:
In recent years, there has been an increasing interest in learning distributed representations of word senses. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
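The two stages, gloss-based initialization of sense embeddings and context-to-sense assignment, can be sketched in miniature. The 2-dimensional toy vectors and sense names are assumptions for illustration; the paper learns gloss embeddings with a convolutional network rather than the simple averaging used here.

```python
import math

# Toy pre-trained word vectors (hypothetical; a real system would learn these).
word_vec = {
    "money": (1.0, 0.0), "deposit": (0.9, 0.1),
    "river": (0.0, 1.0), "water": (0.1, 0.9),
}

def mean_vec(vectors):
    return tuple(sum(c) / len(vectors) for c in zip(*vectors))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Stage 1: initialize each sense embedding from the words of its gloss
# (stand-in for the sentence-level gloss embeddings in the paper).
glosses = {
    "bank.finance": ["money", "deposit"],
    "bank.river": ["river", "water"],
}
sense_vec = {s: mean_vec([word_vec[w] for w in ws]) for s, ws in glosses.items()}

# Stage 2: assign each context to its closest sense embedding (one context
# clustering step; the initialization keeps rare senses from collapsing).
def nearest_sense(context_words):
    ctx = mean_vec([word_vec[w] for w in context_words if w in word_vec])
    return max(sense_vec, key=lambda s: cosine(sense_vec[s], ctx))
```

Because every sense starts from its gloss rather than from random clusters, even a sense with few training contexts begins in a sensible region of the space.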
Abstract:
Astrocytes are now increasingly acknowledged as having fundamental and sophisticated roles in brain function and dysfunction. Unravelling the complex mechanisms that underlie human brain astrocyte-neuron interactions is therefore an essential step on the way to understanding how the brain operates. Insights into astrocyte function to date have almost exclusively been derived from studies conducted using murine or rodent models. Whilst these have led to significant discoveries, preliminary work with human astrocytes has revealed a hitherto unknown range of astrocyte types with potentially greater functional complexity and increased neuronal interaction with respect to animal astrocytes. It is becoming apparent, therefore, that many important functions of astrocytes will only be discovered by direct physiological interrogation of human astrocytes. Recent advancements in the field of stem cell biology have provided a source of human-based models. These will provide a platform to facilitate our understanding of normal astrocyte functions as well as their role in CNS pathology. A number of recent studies have demonstrated that stem cell-derived astrocytes exhibit a range of properties, suggesting that they may be functionally equivalent to their in vivo counterparts. Further validation against in vivo models will ultimately confirm the future utility of these stem cell-based approaches in fulfilling the need for human-based cellular models for basic and clinical research. In this review we discuss the roles of astrocytes in the brain and highlight the extent to which human stem cell-derived astrocytes have demonstrated functional activities that are equivalent to those observed in vivo.
Abstract:
Contemporary models of contrast integration across space assume that pooling operates uniformly over the target region. For sparse stimuli, where high contrast regions are separated by areas containing no signal, this strategy may be sub-optimal because it pools more noise than signal as area increases. Little is known about the behaviour of human observers for detecting such stimuli. We performed an experiment in which three observers detected regular textures of various areas, and six levels of sparseness. Stimuli were regular grids of horizontal grating micropatches, each 1 cycle wide. We varied the ratio of signals (marks) to gaps (spaces), with mark:space ratios ranging from 1:0 (a dense texture with no spaces) to 1:24. To compensate for the decline in sensitivity with increasing distance from fixation, we adjusted the stimulus contrast as a function of eccentricity based on previous measurements (Baldwin, Meese & Baker, 2012, J Vis, 12(11):23). We used the resulting area summation functions and psychometric slopes to test several filter-based models of signal combination. A MAX model failed to predict the thresholds, but did a good job on the slopes. Blanket summation of stimulus energy improved the threshold fit, but did not predict an observed slope increase with mark:space ratio. Our best model used a template matched to the sparseness of the stimulus, and pooled the squared contrast signal over space. Templates for regular patterns have also recently been proposed to explain the regular appearance of slightly irregular textures (Morgan et al., 2012, Proc R Soc B, 279, 2754–2760).
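The three candidate combination rules can be sketched over a one-dimensional array of filter responses. The numbers below are illustrative toys, not the fitted models or stimulus values from the experiment.

```python
def max_model(responses):
    """MAX rule: detection driven by the single largest filter response."""
    return max(abs(r) for r in responses)

def energy_model(responses):
    """Blanket energy summation: pool squared contrast over the whole region."""
    return sum(r * r for r in responses)

def template_model(responses, template):
    """Pool squared contrast only where a sparseness-matched template expects signal."""
    return sum((r * r) * t for r, t in zip(responses, template))

# A sparse 1:1 mark:space stimulus: signal at the marks, faint noise in the gaps.
responses = [1.0, 0.05, 1.0, 0.05]
template = [1, 0, 1, 0]          # matched to the mark positions
blank = [0.05, 0.05, 0.05, 0.05]  # noise-only interval
```

The template keeps all the signal energy of blanket summation, but on a blank interval it pools only the noise falling at mark positions, which is the advantage the best-fitting model exploits for sparse textures.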
Abstract:
The study examines the role of risk and the assessment of risks in the external audit of financial statements. A modern audit, due to its internal and external limitations, cannot exist without the assessment of the business risk of the entity being audited; indeed, the national and international standards that lay down the fundamental rules of the profession require auditors to obtain an understanding of their clients' business risks. This is not a l'art pour l'art activity but rather the very core of the audit: risk assessment, as part of the planning of the audit, is the basis and guiding thread of the whole auditing process. This study has three main sections. The first explains the connection between audit and risk, the second discusses the different risk-based approaches to auditing and the embedding of the risk concept in professional regulation. Finally, as a test of the theory, some practical aspects of the risk model are discussed through the lens of earlier empirical research carried out mostly in the US.
The conclusion of the study is that, though risk-based models of auditing have many weaknesses, they still result in the most effective and efficient high-quality audits.
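In standard auditing practice, the risk model referred to above is the multiplicative audit risk model, AR = IR × CR × DR (audit, inherent, control, and detection risk). A minimal sketch of how it steers planning, with purely illustrative figures not taken from the study:

```python
def detection_risk(audit_risk, inherent_risk, control_risk):
    """Solve the standard audit risk model AR = IR * CR * DR for DR,
    the detection risk the auditor can tolerate when planning procedures."""
    return audit_risk / (inherent_risk * control_risk)

# Illustrative planning figures: target AR of 5%, assessed IR 80%, CR 50%.
dr = detection_risk(0.05, 0.8, 0.5)
```

A lower tolerable detection risk means more extensive substantive testing, which is how the risk assessment made during planning becomes the guideline for the rest of the audit.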
Abstract:
Research on the adoption of innovations by individuals has been criticized for focusing on various factors that lead to the adoption or rejection of an innovation while ignoring important aspects of the dynamic process that takes place. Theoretical process-based models hypothesize that individuals go through consecutive stages of information gathering and decision making but do not clearly explain the mechanisms that cause an individual to leave one stage and enter the next one. Research on the dynamics of the adoption process has lacked a structurally formal and quantitative description of the process. This dissertation addresses the adoption process of technological innovations from a Systems Theory perspective and assumes that individuals roam through different, not necessarily consecutive, states, determined by the levels of quantifiable state variables. It is proposed that different levels of these state variables determine the state in which potential adopters are. Various events that alter the levels of these variables can cause individuals to migrate into different states. It was believed that Systems Theory could provide the required infrastructure to model the innovation adoption process, particularly applied to information technologies, in a formal, structured fashion. This dissertation assumed that an individual progressing through an adoption process could be considered a system, where the occurrence of different events affects the system's overall behavior and ultimately the adoption outcome. The research effort aimed at identifying the various states of such a system and the significant events that could lead the system from one state to another. By mapping these attributes onto an “innovation adoption state space”, the adoption process could be fully modeled and used to assess the status, history, and possible outcomes of a specific adoption process.
A group of Executive MBA students was observed as they adopted Internet-based technological innovations. The data collected were used to identify clusters in the values of the state variables and consequently define significant system states. Additionally, events were identified across the student sample that systematically moved the system from one state to another. The compilation of identified states and change-related events enabled the definition of an innovation adoption state-space model.
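The state-space framing, states defined by levels of state variables, with events shifting those levels, can be sketched as follows. The variables, thresholds, and events are hypothetical illustrations of the mechanism, not the states or events identified in the dissertation.

```python
# Hypothetical state variables and thresholds: each region of the state space
# is a named adoption state.
def classify(awareness, intention):
    """Map state-variable levels onto a region of the adoption state space."""
    if awareness < 0.5:
        return "unaware"
    if intention < 0.5:
        return "aware"
    return "adopting"

# Each event alters the level of one state variable (illustrative deltas).
events = {
    "saw_demo": ("awareness", +0.6),
    "peer_recommendation": ("intention", +0.7),
}

state_vars = {"awareness": 0.0, "intention": 0.0}
trajectory = [classify(**state_vars)]
for event in ["saw_demo", "peer_recommendation"]:
    var, delta = events[event]
    state_vars[var] += delta
    trajectory.append(classify(**state_vars))
```

The recorded trajectory is what makes the status and history of a specific adoption process assessable: the individual's path through the state space, not just the final adopt/reject outcome.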
Abstract:
Limited literature regarding parameter estimation of dynamic systems has been identified as the central-most reason for not having parametric bounds in chaotic time series. However, the literature suggests that a chaotic system displays a sensitive dependence on initial conditions, and our study reveals that the behavior of a chaotic system is also sensitive to changes in parameter values. Therefore, a parameter estimation technique could make it possible to establish parametric bounds on a nonlinear dynamic system underlying a given time series, which in turn can improve predictability. By extracting the relationship between parametric bounds and predictability, we implemented chaos-based models for improving prediction in time series. This study describes work done to establish bounds on a set of unknown parameters. Our research results reveal that by establishing parametric bounds, it is possible to improve the predictability of any time series, even though the dynamics or the mathematical model of that series is not known a priori. In our attempt to improve the predictability of various time series, we have established the bounds for a set of unknown parameters. These are: (i) the embedding dimension to unfold a set of observations in the phase space, (ii) the time delay to use for a series, (iii) the number of neighborhood points to use to avoid detecting false neighbors, and (iv) the local polynomial to build numerical interpolation functions from one region to another. Using these bounds, we are able to get better predictability in chaotic time series than previously reported. In addition, the developments of this dissertation establish a theoretical framework to investigate predictability in time series from the system-dynamics point of view. In closing, our procedure significantly reduces computer resource usage, as the search method is refined and efficient. Finally, the uniqueness of our method lies in its ability to extract the chaotic dynamics inherent in a non-linear time series by observing its values.
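Three of the parameters above (embedding dimension, time delay, and number of neighbors) come together in the standard delay-embedding forecaster sketched below. The logistic-map data and the particular parameter values are illustrative assumptions, not the dissertation's bounds or method.

```python
def embed(series, dim, tau):
    """Delay-embed a scalar series into dim-dimensional phase-space vectors."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]

def predict_next(series, dim, tau, k):
    """Average the successors of the k nearest delay vectors to the current one."""
    vecs = embed(series, dim, tau)
    target = vecs[-1]
    candidates = []
    for i, v in enumerate(vecs[:-1]):
        successor = i + (dim - 1) * tau + 1  # index of the value following vector i
        if successor < len(series):
            dist = sum((a - b) ** 2 for a, b in zip(v, target))
            candidates.append((dist, series[successor]))
    candidates.sort()
    return sum(value for _, value in candidates[:k]) / k

# Chaotic test series: the logistic map x_{n+1} = 4 x_n (1 - x_n).
series = [0.2]
for _ in range(200):
    series.append(4.0 * series[-1] * (1.0 - series[-1]))

history, truth = series[:-1], series[-1]
prediction = predict_next(history, dim=2, tau=1, k=2)
```

Poor choices of `dim`, `tau`, or `k` (e.g. too few neighbors, admitting false neighbors) degrade such forecasts, which is why bounding these parameters improves predictability.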
Abstract:
Leadership is a socially constructed concept shaped by the context, values and experiences of society (Klenke, 1996); the historical context of gender and ethnicity in society affects views about leadership and who merits a leadership role. Therefore, developing an understanding of Hispanic women students’ leadership identity development is critical in broadening how we define leadership and develop leadership education. The purpose of this qualitative case study was to explore and describe the leadership identity development of a select group of women leaders at a Hispanic Serving Institution (HSI) in the southeast. A psychosocial approach to the study was utilized. In-depth interviews and focus groups were conducted with 11 self-identified Hispanic women students of sophomore, junior or senior standing with varying degrees of involvement in leadership activities at Florida International University. Participants were asked questions related to four topics: (a) leadership, (b) gender, (c) ethnic identity, and (d) influences that contributed to their understanding of self as leader. Five themes emerged from the data presented by the participants: (a) encouraging relationships, (b) meaningful experiences, (c) self-development, (d) the role of gender, and (e) impact of ethnicity. These themes contributed to the leadership identity development of the participants. Findings indicate that leadership identity development for Hispanic women college students at this HSI is complex. The concept of leadership identity development presented in the literature was challenged, as findings indicate that the participants’ experiences living and attending school in a majority-minority city influenced their development of a leadership identity. The data indicate that leadership is not gender or ethnicity neutral, as differences exist in expectations of men and women in leadership roles. Gender expectations posed particular challenges for these women student leaders.
The prescriptive nature of stage-based models was also problematic, as findings indicated that leadership identity development is a complicated and continuing process influenced strongly by relationships and experiences. This study enhances knowledge of the ways that Hispanic women students become leaders and the influences that shape their leadership experiences, which can assist higher education professionals in developing leadership programs and courses that address gender, multiculturalism and awareness of self as leader.
Abstract:
The spatial and temporal distribution of planktonic, sediment-associated and epiphytic diatoms among 58 sites in Biscayne Bay, Florida was examined in order to identify diatom taxa indicative of different salinity and water quality conditions, geographic locations and habitat types. Assessments were made in contrasting wet and dry seasons in order to develop robust assessment models for salinity and water quality for this region. We found that diatom assemblages differed between nearshore and offshore locations, especially during the wet season when salinity and nutrient gradients were steepest. In the dry season, habitat structure was the primary determinant of diatom assemblage composition. Among a suite of physicochemical variables, water depth and sediment total phosphorus (STP) were most strongly associated with diatom assemblage composition in the dry season, while salinity and water total phosphorus (TP) were more important in the wet season. We used indicator species analysis (ISA) to identify taxa that were most abundant and frequent at nearshore and offshore locations, in planktonic, epiphytic and benthic habitats, and in contrasting salinity and water quality regimes. Because surface water concentrations of salts, total phosphorus, nitrogen (TN) and organic carbon (TOC) are partly controlled by water management in this region, diatom-based models were produced to infer these variables in modern and retrospective assessments of management-driven changes. Weighted averaging (WA) and weighted averaging partial least squares (WA-PLS) regressions produced reliable estimates of salinity, TP, TN and TOC from diatoms (r2 = 0.92, 0.77, 0.77 and 0.71, respectively). Because of their sensitivity to salinity, nutrient and TOC concentrations, diatom assemblages should be useful in developing protective nutrient criteria for estuaries and coastal waters of Florida.
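Weighted averaging (WA) inference has a compact core: each taxon's optimum is the abundance-weighted mean of the environmental variable across training sites, and a site's inferred value is the abundance-weighted mean of the optima of the taxa present. The toy counts and salinities below are illustrative, not the Biscayne Bay data.

```python
def wa_optima(site_abundances, env_values):
    """Optimum of each taxon = abundance-weighted mean of the environmental variable."""
    totals, weighted = {}, {}
    for site, env in zip(site_abundances, env_values):
        for taxon, count in site.items():
            totals[taxon] = totals.get(taxon, 0.0) + count
            weighted[taxon] = weighted.get(taxon, 0.0) + count * env
    return {t: weighted[t] / totals[t] for t in totals if totals[t] > 0}

def wa_infer(site, optima):
    """Inferred value for a site = abundance-weighted mean of taxon optima."""
    total = sum(site.values())
    return sum(count * optima[t] for t, count in site.items() if count) / total

# Toy training set: diatom counts at three sites with known salinity (psu).
sites = [{"marine_sp": 10, "fresh_sp": 0},
         {"marine_sp": 0, "fresh_sp": 10},
         {"marine_sp": 5, "fresh_sp": 5}]
salinity = [35.0, 0.0, 18.0]

optima = wa_optima(sites, salinity)
```

Real WA transfer functions add a deshrinking step (and WA-PLS adds further components) before the model is applied retrospectively to fossil or historical assemblages.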
Abstract:
We evaluated metacommunity hypotheses of landscape arrangement (indicative of dispersal limitation) and environmental gradients (hydroperiod and nutrients) in structuring macroinvertebrate and fish communities in the southern Everglades. We used samples collected at sites from the eastern boundary of the southern Everglades and from Shark River Slough to evaluate the role of these factors in metacommunity structure. We used eigenfunction spatial analysis to model community structure among sites and distance-based redundancy analysis to partition the variability in communities between spatial and environmental filters. For most animal communities, hydrological parameters had a greater influence on structure than nutrient enrichment; however, both had large effects. The influence of spatial effects indicative of dispersal limitation was weak, and only periphyton infauna appeared to be limited by regional dispersal. At the landscape scale, communities were well mixed but strongly influenced by hydrology. Local-scale species dominance was influenced by water permanence and nutrient enrichment. Nutrient enrichment is limited to water inflow points associated with canals, which may explain its impact in this data set. Hydroperiod and nutrient enrichment are controlled by water managers; our analysis indicates that the decisions they make have strong effects on the communities at the base of the Everglades food web.
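The variance-partitioning logic used here can be sketched in miniature: fit the community response against environmental and spatial predictors separately and jointly, then compare the explained fractions. The sketch uses ordinary least squares on centred toy data with one predictor per set, a deliberate simplification of distance-based RDA with eigenfunction spatial predictors.

```python
def r2_parts(env, space, y):
    """R^2 for env alone, space alone, and both together (centred OLS)."""
    n = len(y)
    def center(v):
        m = sum(v) / n
        return [x - m for x in v]
    e, s, yc = center(env), center(space), center(y)
    see = sum(a * a for a in e)
    sss = sum(a * a for a in s)
    ses = sum(a * b for a, b in zip(e, s))
    sey = sum(a * b for a, b in zip(e, yc))
    ssy = sum(a * b for a, b in zip(s, yc))
    syy = sum(a * a for a in yc)
    r2_env = sey * sey / (see * syy)
    r2_space = ssy * ssy / (sss * syy)
    # Two-predictor OLS coefficients via Cramer's rule on the normal equations.
    det = see * sss - ses * ses
    b_e = (sey * sss - ses * ssy) / det
    b_s = (see * ssy - ses * sey) / det
    r2_both = (b_e * sey + b_s * ssy) / syy
    return r2_env, r2_space, r2_both

# Toy data: community score driven mostly by hydrology (env), weakly by space.
env = [1.0, 2.0, 3.0, 4.0]
space = [1.0, 0.0, 1.0, 0.0]
y = [3.0, 4.0, 7.0, 8.0]  # equals 2*env + space exactly

r2_env, r2_space, r2_both = r2_parts(env, space, y)
unique_env = r2_both - r2_space  # variance explained by env after removing space
```

The difference `r2_both - r2_space` isolates the purely environmental fraction, which is the quantity that supports the conclusion that hydrology, not dispersal limitation, dominates community structure.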