540 results for "refining"
Abstract:
Designers of self-adaptive systems often formulate adaptive design decisions based on unrealistic or myopic assumptions about the system's requirements and environment. The decisions taken during this formulation are crucial for satisfying requirements. In environments characterized by uncertainty and dynamism, deviation from these assumptions is the norm and may trigger 'surprises'. Our method allows designers to make explicit links between the possible emergence of surprises, risks and design trade-offs. The method can be used to explore the design decisions for self-adaptive systems and choose among decisions that better fulfil (or rather partially fulfil) non-functional requirements and address their trade-offs. The analysis can also provide designers with valuable input for refining the adaptation decisions to balance, for example, resilience (i.e., satisfiability of non-functional requirements and their trade-offs) and stability (i.e., minimizing the frequency of adaptation). The objective is to provide designers of self-adaptive systems with a basis for multi-dimensional what-if analysis to revise and improve their understanding of the environment and its effect on non-functional requirements, and thereafter decision-making. We have applied the method to a wireless sensor network for flood prediction. The application shows that the method gives rise to questions that were not explicitly asked before at design time and assists designers in the process of risk-aware, what-if and trade-off analysis.
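The multi-dimensional what-if analysis described above can be sketched, very loosely, as scoring each candidate design decision against the non-functional requirements under several environment scenarios and preferring the decision with the best worst-case outcome. All names, scenarios and weights below are illustrative assumptions, not the authors' actual method.

```python
# Satisfaction levels (0..1) of each non-functional requirement (NFR)
# per design decision, per environment scenario -- toy numbers.
scores = {
    "decision_A": {"normal": {"resilience": 0.9, "stability": 0.6},
                   "flood":  {"resilience": 0.7, "stability": 0.5}},
    "decision_B": {"normal": {"resilience": 0.6, "stability": 0.9},
                   "flood":  {"resilience": 0.4, "stability": 0.8}},
}
weights = {"resilience": 0.6, "stability": 0.4}  # assumed trade-off weights

def worst_case_score(decision):
    """Aggregate weighted NFR satisfaction per scenario, then take the worst
    scenario (a risk-aware, pessimistic aggregation)."""
    per_scenario = [sum(weights[n] * v for n, v in nfrs.items())
                    for nfrs in scores[decision].values()]
    return min(per_scenario)

# Pick the decision whose worst-case weighted satisfaction is highest.
best = max(scores, key=worst_case_score)
```

The same scaffolding supports what-if exploration: changing a scenario's NFR estimates and re-running shows immediately whether the preferred decision flips.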
Abstract:
Based on current investigations into reflection in mathematics education, this article presents some contemporary ideas about refining the methodology of mastering knowledge and skills for solving mathematical problems. The thesis is developed that, for general logical methods and certain particular mathematical methods to become means of solving mathematical problems, they must first themselves be an objective of the instruction.
Abstract:
The main focus of this paper is on mathematical theory and methods which have a direct bearing on problems involving multiscale phenomena. Modern technology is refining measurement and data collection to spatio-temporal scales on which observed geophysical phenomena are displayed as intrinsically highly variable and intermittent hierarchical structures, e.g., rainfall, turbulence, etc. The hierarchical structure is reflected in the occurrence of a natural separation of scales which collectively manifest at some basic unit scale. Thus proper data analysis and inference require a mathematical framework which couples the variability over multiple decades of scale, in which basic theoretical benchmarks can be identified and calculated. This continues the main theme of the research in this area of applied probability over the past twenty years.
Abstract:
2000 Mathematics Subject Classification: 62J05, 62J10, 62F35, 62H12, 62P30.
Abstract:
This study forms part of a recently completed TÁMOP research project. The research aims to provide a comprehensive view of innovation, to clarify innovation-related concepts, to present innovation trends and, based on an empirical survey, to give an account of how these appear under the specific conditions of Hungary. Clarifying the relationship between learning and innovation is part of refining the concept of innovation. Although countless works on the topic appear in the management literature, the author, in line with his own disciplinary commitment, approaches the problem from an economic point of view. Finally, he attempts to place Hungary on the international map of learning.
Abstract:
Modern software systems are often large and complicated. To better understand, develop, and manage large software systems, researchers have studied software architectures, which provide the top-level overall structural design of software systems, for the last decade. One major research focus in software architectures is formal architecture description languages, but most existing research concentrates primarily on descriptive capability and puts less emphasis on software architecture design methods and formal analysis techniques, which are necessary to develop correct software architecture designs. Refinement is a general approach of adding detail to a software design. A formal refinement method can further ensure certain design properties. This dissertation proposes refinement methods, including a set of formal refinement patterns and complementary verification techniques, for software architecture design using the Software Architecture Model (SAM), which was developed at Florida International University. First, a general guideline for software architecture design in SAM is proposed. Second, specification construction through property-preserving refinement patterns is discussed. The refinement patterns are categorized into connector refinement, component refinement and high-level Petri net refinement. These three levels of refinement patterns are applicable to overall system interaction, architectural components, and the underlying formal language, respectively. Third, verification after modeling is discussed as a complementary technique to specification refinement. Two formal verification tools, the Stanford Temporal Prover (STeP) and the Simple Promela Interpreter (SPIN), are adopted into SAM to develop the initial models. Fourth, formalization and refinement of security issues are studied. A method for security enforcement in SAM is proposed. The Role-Based Access Control model is formalized using predicate transition nets and Z notation. Patterns for enforcing access control and auditing are proposed. Finally, modeling and refining a life insurance system is used to demonstrate how to apply the refinement patterns for software architecture design using SAM and how to integrate the access control model. The results of this dissertation demonstrate that a refinement method is an effective way to develop a high-assurance system. The method developed in this dissertation extends existing work on modeling software architectures using SAM and makes SAM a more usable and valuable formal tool for software architecture design.
Abstract:
I conducted this study to provide insights toward a deeper understanding of the association between culture and writing by building, assessing, and refining a conceptual model of second language writing. To do this, I examined culture and coherence, as well as the relationship between them, through a mixed methods research design. Coherence has been an important and complex concept in ESL/EFL writing. I studied the concept of coherence in the research context of contrastive rhetoric, comparing the coherence quality of argumentative essays written by undergraduates in Mainland China and their U.S. peers. In order to analyze the complex concept of coherence, I synthesized five linguistic theories of coherence: Halliday and Hasan's cohesion theory, Carroll's theory of coherence, Enkvist's theory of coherence, Topical Structure Analysis, and Toulmin's Model. Based upon this synthesis, 16 variables were generated. Across these 16 variables, a Hotelling t-test statistical analysis was conducted to predict differences in argumentative coherence between essays written by the two groups of participants. To complement the statistical analysis, I conducted 30 interviews of the writers in the studies. Participants' responses were analyzed with open and axial coding. By analyzing the empirical data, I refined the conceptual model by adding more categories and establishing associations among them. The study found that U.S. students made use of more pronominal reference, while Chinese students adopted more lexical devices of reiteration and extended paralleling progression. The interview data implied that the difference may be associated with differences in linguistic features and rhetorical conventions between Chinese and English. As far as Toulmin's Model is concerned, Chinese students scored higher on data than their U.S. peers. According to the interview data, this may be because Toulmin's Model, modified as three elements of argument, has long and widely been taught in Chinese writing instruction, while U.S. interview participants said that they were not taught to write essays according to Toulmin's Model. Implications were generated from the process of textual data analysis and the formulation of a structural model defining coherence. These implications were aimed at informing writing instruction, assessment, peer review, and self-revision.
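The statistical core of the study, a Hotelling t-test across coherence variables, can be illustrated with a minimal two-sample Hotelling's T² computation. The two variables and the toy data below are assumptions for illustration only; the study's actual 16 variables come from the five coherence theories named above.

```python
import numpy as np

def hotelling_t2(X, Y):
    """Two-sample Hotelling's T^2 statistic; rows are observations (essays),
    columns are the measured coherence variables."""
    n1, n2 = len(X), len(Y)
    d = X.mean(axis=0) - Y.mean(axis=0)            # mean-difference vector
    # Pooled sample covariance matrix of the two groups.
    S = ((n1 - 1) * np.cov(X, rowvar=False) +
         (n2 - 1) * np.cov(Y, rowvar=False)) / (n1 + n2 - 2)
    return float(n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(20, 2))   # e.g. one group of essays
Y = rng.normal(1.0, 1.0, size=(20, 2))   # e.g. the other group, shifted mean
t2 = hotelling_t2(X, Y)
```

A large T² relative to its F-distributed reference value indicates that the groups differ on at least one variable jointly, which is why a single multivariate test is preferred over 16 separate t-tests.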
Abstract:
Modern IT infrastructures are constructed from large-scale computing systems and administered by IT service providers. Manually maintaining such large computing systems is costly and inefficient. Service providers therefore seek automatic or semi-automatic methodologies for detecting and resolving system issues to improve their service quality and efficiency. This dissertation investigates several data-driven approaches for assisting service providers in achieving this goal. The problems studied by these approaches fall into three aspects of the service workflow: 1) preprocessing raw textual system logs into structured events; 2) refining monitoring configurations to eliminate false positives and false negatives; and 3) improving the efficiency of system diagnosis on detected alerts. Solving these problems usually requires a huge amount of domain knowledge about the particular computing systems. The approaches investigated in this dissertation are built on event mining algorithms, which can automatically derive part of that knowledge from historical system logs, events and tickets. In particular, two textual clustering algorithms are developed for converting raw textual logs into system events. For refining the monitoring configuration, a rule-based alert prediction algorithm is proposed for eliminating false alerts (false positives) without losing any real alert, and a textual classification method is applied to identify missing alerts (false negatives) from manual incident tickets. For system diagnosis, this dissertation presents an efficient algorithm for discovering the temporal dependencies between system events, with their corresponding time lags, which can help administrators determine redundancies among deployed monitoring situations and dependencies among system components. To improve the efficiency of incident ticket resolution, several KNN-based algorithms that recommend relevant historical tickets, with their resolutions, for incoming tickets are investigated. Finally, this dissertation offers a novel algorithm for searching similar textual event segments over large system logs, which assists administrators in locating similar system behaviors in the logs. Extensive empirical evaluation on system logs, events and tickets from real IT infrastructures demonstrates the effectiveness and efficiency of the proposed approaches.
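The KNN-style ticket recommendation mentioned above can be sketched as nearest-neighbour retrieval by cosine similarity over bag-of-words vectors: the resolutions of the k most similar resolved tickets are proposed for an incoming ticket. The ticket texts, field names and resolutions below are invented toy data; the dissertation's actual algorithms are more sophisticated.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, incoming, k=2):
    """Return the resolutions of the k historical tickets most similar
    to the incoming ticket's text."""
    inc = Counter(incoming.lower().split())
    ranked = sorted(history,
                    key=lambda t: cosine(Counter(t["text"].lower().split()), inc),
                    reverse=True)
    return [t["resolution"] for t in ranked[:k]]

history = [
    {"text": "disk usage exceeded threshold on db server", "resolution": "clean logs"},
    {"text": "cpu usage high on web server",               "resolution": "restart service"},
    {"text": "disk full on backup server",                 "resolution": "expand volume"},
]
recs = recommend(history, "disk usage threshold alert on server", k=2)
```

In practice TF-IDF weighting and domain-specific tokenization of the ticket text would replace the raw word counts used here.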
Abstract:
This thesis extended previous research on critical decision making and problem solving by refining and validating a measure designed to assess the use of critical thinking and critical discussion in sociomoral dilemmas. The purpose of this thesis was twofold: 1) to refine the administration of the Critical Thinking Subscale of the CDP so as to elicit more adequate responses and to refine the coding and scoring procedures for the total measure, and 2) to collect preliminary data on the initial reliabilities of the measure. Subjects were 40 undergraduate students at Florida International University. Results indicate that the use of longer probes on the Critical Thinking Subscale was more effective in eliciting the adequate responses necessary for coding and evaluating the subjects' performance. Analyses of the psychometric properties of the measure consisted of test-retest reliability and inter-rater reliability.
Abstract:
This thesis extends previous research on critical decision making and problem-solving by refining and validating a self-report measure designed to assess the use of critical decision making and problem solving in making life choices. The analysis was conducted by performing two studies, and therefore collecting two sets of data on the psychometric properties of the measure. Psychometric analyses included: item analysis, internal consistency reliability, interrater reliability, and an exploratory factor analysis. This study also included regression analysis with the Wonderlic, an established measure of general intelligence, to provide preliminary evidence for the construct validity of the measure.
Abstract:
A review of the literature reveals that little research has attempted to demonstrate whether a relationship exists between the type of teacher training a science teacher has received and the perceived attitudes of his/her students. Considering that a great deal of time and energy has been devoted by university colleges, school districts, and educators to refining the teacher education process, it would be more efficient for all parties involved if research were available that could discern whether certain pathways to achieving that education promote the tendency toward certain teacher behaviors occurring in the classroom, while other pathways lead toward different behaviors. The teacher preparation factors examined in this study include the college major chosen by the science teacher, the highest degree earned, the number of years of teaching experience, the type of science course taught, and the grade level taught. This study examined how these factors could influence the behaviors characteristic of the teacher, and how those behaviors could be reflected in the classroom environment experienced by the students. The instrument used in the study was the Classroom Environment Scale (CES), Real Form. The measured classroom environment was broken down into three separate dimensions, with three components within each dimension of the CES. Multiple regression statistical analyses examined how components of the teachers' education influenced the dimensions of the classroom environment as perceived by the students. The study took place in Miami-Dade County, Florida, with a predominantly urban high school student population. There were 40 secondary science teachers involved, each with an average of 30 students, for a total sample of 1200 students. The teachers who participated in the study taught the entire range of secondary science courses offered at this large school district. All teachers were selected by the researcher so that the sample was balanced between teachers who were education majors and those who were science majors. Additionally, the researcher selected teachers so that a balance occurred with regard to the different levels of college degrees earned among those involved in the study. Several research questions sought to determine whether there was a significant difference between the type of educational background obtained by secondary science teachers and the students' perception of the classroom environment. Other research questions sought to determine whether there were significant differences in the students' perceptions of the classroom environment for secondary science teachers who taught biological versus non-biological content sciences. An additional research question evaluated whether the grade level taught would affect the students' perception of the classroom environment. Multiple regression analyses were run for each of four scores from the CES, Real Form. For Score 1, involvement of students, the results showed that teachers with the highest number of years of experience, with master's or master's-plus degrees, who were education majors, and who taught twelfth-grade students had greater numbers of students being attentive and interested in class activities, participating in discussions, and doing additional work on their own, compared with teachers who had less experience, a bachelor's degree, were science majors, and taught a grade lower than twelfth. For Score 2, task orientation, which emphasized completing the required activities and staying on task, the results showed that teachers with the highest and intermediate experience, a science major, and the highest college degree showed higher scores compared with teachers with less experience, an education major and a bachelor's degree. For Score 3, competition, which indicated how difficult it was to achieve high grades in the class, the results showed that teachers who taught non-biology content subjects had the greatest effect on the regression. Teachers with a master's degree, low levels of experience, and who taught twelfth-grade students were also factored into the regression equation. For Score 4, innovation, which indicated the extent to which teachers used new and innovative techniques to encourage diverse and creative thinking, teachers with an education major entered the regression equation first. Teachers with the least experience (0 to 3 years), and teachers who taught twelfth- and eleventh-grade students, were also included in the regression equation.
Abstract:
During the oil refining process a huge volume of discard water is produced, carrying contaminants from the process. One class of contaminant compounds resulting from the petrochemical industry is the polyaromatic hydrocarbons (PAHs). To evaluate the biodegradation of dibenzothiophene (DBT) in refinery water, a synthetic wastewater was prepared and treated using activated sludge. For this, a 2³ central composite design (plus 3 central points and six axial points) was carried out. The design had as independent variables (factors) the initial concentration of DBT, pH and biodegradation time. Biodegradation of DBT was assayed by following the parameters COD, pH, temperature, SS, VSS, FVS and SVI. Regarding the chromatographic conditions, a methodology was validated to verify the presence of DBT and its metabolite, 2-HBF, in the final wastewater treated by the activated sludge system, using liquid-liquid extraction coupled to HPLC/UV analysis. The parameters used for validation were DL, QL, linearity, recovery and repeatability. As for optimization, the results indicated that the studied methodology can be used in monitoring the degradation of DBT and 2-HBF by activated sludge, as it showed excellent linearity values and coefficients of variation, as well as satisfactory recovery percentages. COD reduction efficiency tests showed an average percentage of 64.4%. The increasing trend in the results of the TSS and VSS tests showed that the activated sludge was well adapted. The best operating conditions for the reduction of COD were observed when operating with median concentrations of DBT, a longer biodegradation time, and pH in both the acidic and the basic range. The biodegradability of DBT was confirmed by determining the presence of 2-HBF. The highest concentrations of 2-HBF were obtained at extreme concentrations of DBT and pH, and longer biodegradation times.
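The run matrix of a 2³ central composite design of the kind described (8 factorial points, six axial points, 3 central points) can be generated in coded units as follows. The axial distance `alpha` is the conventional rotatable-design value for three factors and is an assumption here, not taken from the study.

```python
from itertools import product

def central_composite(n_factors=3, n_center=3, alpha=1.682):
    """Generate a central composite design in coded units:
    2^k factorial corners, 2k axial (star) points, plus center replicates."""
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[i] = a                 # one factor at +/- alpha, rest at center
            axial.append(pt)
    center = [[0.0] * n_factors for _ in range(n_center)]
    return factorial + axial + center

# Three factors: coded initial DBT concentration, pH, biodegradation time.
runs = central_composite()
```

Each coded run is then mapped back to real factor levels before the biodegradation assays, and the 17 responses fitted with a quadratic response-surface model.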
Development of the base cell of periodic composite microstructures under topology optimization
Abstract:
This thesis develops a new technique for designing composite microstructures through topology optimization, in order to maximize stiffness, making use of the strain energy method and an h-adaptive refinement scheme to better define the topological contours of the microstructure. This is done by distributing material optimally within a pre-established design region called the base cell. The finite element method is used to describe the field and to solve the governing equations. The mesh is refined iteratively so that finite element refinement is performed on all elements representing solid material and on all void elements containing at least one node in a solid-material region. The finite element chosen for the model is the three-node linear triangle. The constrained nonlinear programming problem is solved with the Augmented Lagrangian method and a minimization algorithm based on quasi-Newton search directions with Armijo-Wolfe line-search conditions assisting the descent process. The base cell that represents the composite is found from the equivalence between a fictitious material and a prescribed material distributed optimally over the design region. The use of the strain energy method is justified by its lower computational cost, owing to a simpler formulation than the traditional homogenization method. Results are presented for varying prescribed displacements, volume constraints, and initial values of the relative densities.
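The Armijo sufficient-decrease backtracking used within the line search mentioned above can be illustrated on a one-dimensional quadratic. The constants and the objective are illustrative assumptions; the full method additionally enforces Wolfe curvature conditions and uses quasi-Newton search directions.

```python
def armijo_step(f, grad, x, direction, c1=1e-4, shrink=0.5, alpha=1.0):
    """Shrink the step length alpha until f(x + alpha*d) satisfies the
    Armijo sufficient-decrease condition f(x+a*d) <= f(x) + c1*a*g(x)*d."""
    fx, gx = f(x), grad(x)
    while f(x + alpha * direction) > fx + c1 * alpha * gx * direction:
        alpha *= shrink
    return alpha

f = lambda x: (x - 2.0) ** 2       # simple convex objective, minimum at x = 2
grad = lambda x: 2.0 * (x - 2.0)
x = 0.0
d = -grad(x)                       # steepest-descent direction at x = 0
alpha = armijo_step(f, grad, x, d)
x_new = x + alpha * d
```

The same accept/shrink loop sits inside each iteration of the quasi-Newton solver, guaranteeing that every accepted step actually reduces the objective.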
Abstract:
Funded by the UK's Biotechnology and Biological Sciences Research Council (BBSRC) and the Department for Environment, Food and Rural Affairs (DEFRA), Grant Number LK0863; BBSRC strategic programme grant on Energy Grasses & Bio-refining, Grant Number BBS/E/W/10963A01; OPTIMISC, Grant Number FP7-289159; WATBIO, Grant Number FP7-311929; Innovate UK/BBSRC 'MUST', Grant Number BB/N016149/1.
Abstract:
The realization of an energy future based on safe, clean, sustainable, and economically viable technologies is one of the grand challenges facing modern society. Electrochemical energy technologies underpin the potential success of this effort to divert energy sources away from fossil fuels, whether one considers alternative energy conversion strategies through photoelectrochemical (PEC) production of chemical fuels or fuel cells run with sustainable hydrogen, or energy storage strategies, such as in batteries and supercapacitors. This dissertation builds on recent advances in nanomaterials design, synthesis, and characterization to develop novel electrodes that can electrochemically convert and store energy.
Chapter 2 of this dissertation focuses on refining the properties of TiO2-based PEC water-splitting photoanodes used for the direct electrochemical conversion of solar energy into hydrogen fuel. The approach utilized atomic layer deposition (ALD), a growth process uniquely suited to the conformal and uniform deposition of thin films with angstrom-level thickness precision. ALD's thickness control enabled a better understanding of how the effects of nitrogen doping via NH3 annealing treatments, used to reduce TiO2's bandgap, can depend strongly on TiO2's thickness and crystalline quality. In addition, it was found that some of the negative effects on PEC performance typically associated with N-doped TiO2 could be mitigated if the NH3 annealing was directly preceded by an air-annealing step, especially for ultrathin (i.e., < 10 nm) TiO2 films. ALD was also used to conformally coat an ultraporous conductive fluorine-doped tin oxide nanoparticle (nanoFTO) scaffold with an ultrathin layer of TiO2. The integration of these ultrathin films and the oxide nanoparticles resulted in a heteronanostructure design with excellent PEC water oxidation photocurrents (0.7 mA/cm2 at 0 V vs. Ag/AgCl) and charge transfer efficiency.
In Chapter 3, two innovative nanoarchitectures were engineered to enhance the pseudocapacitive energy storage of next-generation supercapacitor electrodes. The morphology and quantity of MnO2 electrodeposits were controlled by adjusting the density of graphene foliates on a novel graphenated carbon nanotube (g-CNT) scaffold. This control enabled the nanocomposite supercapacitor electrode to reach a capacitance of 640 F/g under MnO2 specific mass loading conditions (2.3 mg/cm2) that are higher than previously reported. In the second engineered nanoarchitecture, the electrochemical energy storage properties of a transparent electrode based on a network of solution-processed Cu/Ni core/shell nanowires (NWs) were activated by electrochemically converting the Ni metal shell into Ni(OH)2. Furthermore, adjusting the molar percentage of Ni plated onto the Cu NWs was found to result in a tradeoff between capacitance, transmittance, and stability of the resulting nickel hydroxide-based electrode. The nominal area capacitance and power performance results obtained for this Cu/Ni(OH)2 transparent electrode demonstrate that it has significant potential as a hybrid supercapacitor electrode for integration into cutting-edge flexible and transparent electronic devices.
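The gravimetric capacitance and mass loading quoted above translate into an areal capacitance via the elementary relation C_areal = C_specific × mass loading. This is a sketch of the conversion only; the areal figure is computed here, not reported in the abstract.

```python
# Convert the quoted gravimetric capacitance of the MnO2/g-CNT electrode
# into an areal capacitance using its MnO2 mass loading.
c_specific = 640.0            # F/g, from the abstract
mass_loading = 2.3e-3         # g/cm^2 (2.3 mg/cm^2 MnO2 loading)
c_areal = c_specific * mass_loading   # F/cm^2
```

This kind of conversion is why high mass loading matters: a high F/g value at negligible loading can still yield a poor per-area device capacitance.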