927 results for THRESHOLD SELECTION METHOD


Relevance: 20.00%

Abstract:

A series of layered double hydroxide (LDH) based composites were synthesized using the induced hydrolysis silylation method (IHS), a surfactant precursor method, an in-situ coprecipitation method, and a direct silylation method. Their structures, morphologies, bonding modes and thermal stabilities can be readily adjusted by changing the parameters used during preparation and drying of the LDHs. The characterization results show that the direct silylation reaction cannot occur between dried LDHs and 3-aminopropyltriethoxysilane (APS) in an ethanol medium; however, the condensation reaction between adsorbed APS and LDH plates can proceed on heating. When wet-state substrates are used, with or without surfactant, and with ethanol as the solvent, silylation can be induced by hydrolysis of APS on the surface of the LDH plates. Surfactants improve the hydrophobicity of the LDHs during nucleation and crystallization, resulting in fluffy-shaped crystals; meanwhile, they occupy the surface –OH positions and leave less “free –OH” available for the silylation reaction, favoring the formation of silylated products with a higher population of the hydrolyzed bidentate (T2) and tridentate (T3) bonding forms. These bonding characteristics lead to spherical aggregates and tightly bonded particles. All silylated products show higher thermal stability than the pristine LDHs.

Relevance: 20.00%

Abstract:

We present a mass-conservative vertex-centred finite volume method for efficiently solving the mixed form of Richards’ equation in heterogeneous porous media. The spatial discretisation is particularly well suited to heterogeneous media because it produces consistent flux approximations at quadrature points where material properties are continuous. Combined with the method of lines, the spatial discretisation yields a set of differential-algebraic equations amenable to solution using higher-order implicit solvers. We investigate the solution of the mixed form using a Jacobian-free inexact Newton solver, which, compared with the pressure-head form, requires solving for one extra variable per mesh node. By exploiting the structure of the Jacobian for the mixed form, the size of the preconditioner is reduced to that of the pressure-head form, so the computational overhead of solving the mixed form is minimal. The proposed formulation is tested on two challenging test problems. The solutions from the new formulation conserve mass at least one order of magnitude more accurately than those of a pressure-head formulation, and the higher-order temporal integration significantly improves both the mass balance and the computational efficiency of the solution.
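
As a rough illustration of the Jacobian-free approach described above, the sketch below applies SciPy's Newton-Krylov solver, which approximates Jacobian-vector products by finite differences, to a toy steady 1D nonlinear diffusion problem. The grid, the exponential conductivity K(h) and all parameter values are illustrative assumptions, not the paper's transient mixed-form discretisation.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy 1D nonlinear diffusion BVP: d/dx( K(h) dh/dx ) = 0 on [0, 1],
# with Dirichlet boundaries, solved by Jacobian-free Newton-Krylov.
# K(h) = exp(alpha * h) is a hypothetical conductivity law.

N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
alpha = 2.0

def K(h):
    return np.exp(alpha * h)

def residual(h):
    r = np.empty_like(h)
    r[0] = h[0] - 1.0            # Dirichlet at x = 0
    r[-1] = h[-1] - 0.0          # Dirichlet at x = 1
    Kf = 0.5 * (K(h[1:]) + K(h[:-1]))      # conductivity at cell faces
    flux = Kf * (h[1:] - h[:-1]) / dx      # fluxes at faces
    r[1:-1] = (flux[1:] - flux[:-1]) / dx  # discrete divergence
    return r

h0 = np.linspace(1.0, 0.0, N)    # initial guess
h = newton_krylov(residual, h0, method="lgmres")
print("max residual:", np.max(np.abs(residual(h))))
```

The solver never forms the Jacobian explicitly; each Krylov iteration only needs residual evaluations, which is the property that keeps the cost of the mixed form close to that of the pressure-head form.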

Relevance: 20.00%

Abstract:

Compared with viewing videos on PCs or TVs, mobile users have a different experience when viewing videos on a mobile phone, owing to device features such as screen size and to distinct usage contexts. To understand how mobile users’ viewing experience is affected, we conducted a field user study with 42 participants in two typical usage contexts, using a custom-designed iPhone application. Taking users’ acceptance of mobile video quality as the index, the study addresses four aspects that influence user experience: context, content type, encoding parameters and user profile. Alongside the quantitative method (acceptance assessment), we used a qualitative interview method to gain a deeper understanding of users’ assessment criteria and to support the quantitative results from the users’ perspective. Based on the results of the data analysis, we advocate two user-driven strategies: one to adaptively provide an acceptable quality and one to predict a good user experience. This paper makes two main contributions. Firstly, the field user study brings a wider range of influencing factors into research on the user experience of mobile video, and these influences are further substantiated by users’ opinions. Secondly, the proposed strategies, user-driven acceptance threshold adaptation and user experience prediction, will be valuable for optimizing user experience in mobile video delivery.
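
To make the first strategy concrete, here is a hypothetical sketch of threshold-based adaptation: given a usage context and content type, select the lowest encoding bitrate that still meets the relevant acceptance threshold. All names, contexts and threshold values below are invented for illustration; the study's actual thresholds are not reproduced here.

```python
# Hypothetical acceptance thresholds: (context, content_type) -> minimum
# acceptable bitrate in kbps. Values are illustrative only.
ACCEPTANCE_THRESHOLDS_KBPS = {
    ("commuting", "news"): 300,
    ("commuting", "sport"): 600,
    ("home", "news"): 400,
    ("home", "sport"): 900,
}

AVAILABLE_BITRATES_KBPS = [200, 300, 450, 600, 900, 1200]

def select_bitrate(context: str, content_type: str) -> int:
    """Pick the lowest ladder rung at or above the acceptance threshold."""
    threshold = ACCEPTANCE_THRESHOLDS_KBPS[(context, content_type)]
    return min((b for b in AVAILABLE_BITRATES_KBPS if b >= threshold),
               default=AVAILABLE_BITRATES_KBPS[-1])

print(select_bitrate("commuting", "sport"))  # -> 600
```

The design point is that the threshold table is indexed by exactly the influencing factors the study identifies (context and content type), so a per-user-profile table would implement the "user-driven" part of the strategy.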

Relevance: 20.00%

Abstract:

Some of my most powerful spiritual experiences have come from the splendorous and sublime hymns performed by choir and church organ at the traditional Anglican church I have attended since I was very young. Later in life, my pursuit of an education in engineering brought me to Australia, where I regularly attended a contemporary evangelical church and subsequently became a music director in that faith community. This environmental and cultural shift altered my perception and musical experience of Christian music and led me to enquire into the relationship between Christian liturgy and church music. Throughout history, church musicians and composers have synthesised the theological, congregational, cultural and musical aspects of church liturgy. Many great composers have taken into account the conditions surrounding the composition and arrangement of sacred music to enhance the experience of religious ecstasy; they sought resonances with Christian values and beliefs to draw congregational participation into the light of praising and glorifying God. As a music director in an evangelical church, I have come to share this aspiration. I hope to identify and define the qualities of these resonances that have proved successful and apply them to my own practice.

Introduction and Structure of the Thesis

In this study I examine four purposively selected excerpts of Christian church vocal music, combining theomusicological and semiotic analysis to help identify guidelines that might be useful in my practice as a church music director. The four musical excerpts have been selected for their sustained musical and theological impact over time and their ability to elicit ecstatic responses from congregations. This thesis documents a personal journey through the analysis of music, drawing upon ethnomusicological, theological and semiotic tools that lead to a preliminary framework and principles which can then be applied to the identified qualities of resonance in church music today. The thesis comprises four parts. Part 1 presents a literature study on the relationship between sacred music, the effects of religious ecstasy and the Christian church; multiple lenses on this phenomenon are drawn from the viewpoints of prominent western church historians, Biblical theologians and philosophers. The literature study continues in Part 2, where the role of embodiment is examined from the current perspective of cognitive learning environments. This study offers a platform for critical reflection on two distinctive musical liturgical systems that have treated the notion of embodied understanding differently amidst a shifting church paradigm, allowing an in-depth theological and philosophical understanding of the liturgical conditions around sacred music-making as they relate to the monistic and dualistic body/mind. Part 3 undertakes a theomusicological methodology using creative case studies of the four purposively selected spiritual pieces. A semiotic study focuses on specific sections of sacred vocal works that express the notions of ‘praise’ and ‘glorification’, particularly in relation to these effects, combining an analysis of theological perspectives on religious ecstasy with particular spiritual themes.
Part 4 presents the critiques and findings gathered from the study, incorporating theoretical and technological means to analyse the purposively selected musical artefacts, particularly the sonic narratives expressing the notions of ‘Praise’ and ‘Glory’. The musical findings are further discussed in relation to the notion of resonance, and a conceptual framework for the role of the contemporary music director is then proposed. The musical and Christian terminologies used in the thesis are explained in the glossary, and the appendices include tables illustrating the musical findings, the surveys conducted, written musical analyses and audio examples of selected sacred pieces, available on the enclosed compact disc.

Relevance: 20.00%

Abstract:

Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure while simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision-making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify the procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnership (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM, to state a hypothesis (overarching proposition), and to develop a research method to test the new procurement model. Competition reflects both production costs and benefits (the absolute level of competition) and transaction costs (the level of realised competition), and so is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is: when the actual procurement mode matches the predicted (theoretical) procurement mode informed by the new procurement model, actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method developed in this paper combines survey and case study approaches. More specifically, the paper outlines data collection instruments for surveys of actual procurement, actual competition and potential competition. Finally, plans for analysing the survey data are briefly mentioned, along with the planned use of analytical pattern matching in deploying the new procurement model to develop the predicted (theoretical) procurement mode.

Relevance: 20.00%

Abstract:

The emergence of Twenty20 cricket at the elite level has been marketed on the excitement of the big hitter, where winning seems to be a result of the muscular batter hitting boundaries at will. This version of the game has captured the imagination of many young players, who all want to score runs with “big hits”. However, in junior cricket, boundary hitting is often more difficult because of the size limitations of children and because games are played on outfields where the ball does not travel quickly. As a result, winning is often achieved via a less spectacular route: by scoring more singles than your opponents. However, most standard coaching texts only describe how to play boundary-scoring shots (e.g. the drives, pulls, cuts and sweeps) and defensive shots to protect the wicket. Learning to bat appears to have been reduced to extremes of force production, i.e. maximal force production to hit boundaries or minimal force production to stop the ball from hitting the wicket. Initially, this is not a problem, because the typical innings of a young player (<12 years) is based on the concept of “block” or “bash”: they “block” the good balls and “bash” the short balls. This approach works because there are many opportunities to hit boundaries off the numerous inaccurate deliveries of novice bowlers. Most runs are scored behind the wicket by using the pace of the bowler’s delivery to redirect the ball, because the intrinsic dynamics (i.e. lack of strength) of most children mean that they can only create sufficient power by playing shots in which the whole body contributes to force production. This method works well until the novice player comes up against more accurate bowling, at which point they find they have no way of scoring runs. Once batters begin to face “good” bowlers, they have to learn to score runs via singles. In cricket coaching manuals (e.g. ECB, n.d.), running between the wickets is treated as a task separate from batting, and the “basics” of running, such as how to “back up”, carry the bat, call, turn and slide the bat into the crease, are “drilled” into players. This task-decomposition strategy focussing on techniques is a common approach to skill acquisition in many highly traditional sports, typified in cricket by activities where players hit balls off tees and receive “throw-downs” from coaches. However, the relative usefulness of these approaches in the acquisition of sporting skills is increasingly being questioned (Pinder, Renshaw & Davids, 2009). We discuss why this is the case in the next section.

Relevance: 20.00%

Abstract:

Business practices vary from one company to another, and they often need to change in response to changing business environments. To accommodate different business practices, enterprise systems need to be customized; to keep up with ongoing changes in business practice, they need to be adapted. Because of their rigidity and complexity, the customization and adaptation of enterprise systems often take excessive time, with potential failures and budget shortfalls. Moreover, enterprise systems often hold business back because they cannot be adapted rapidly enough to support changes in business practice. Extensive literature has addressed this issue by identifying success or failure factors, implementation approaches, and project management strategies; those efforts aim to learn lessons from post-implementation experience to help future projects. This research looks at the issue from a different angle: it attempts to deliver a systematic method for developing flexible enterprise systems which can be easily tailored to different business practices or rapidly adapted when business practices change. First, this research examines the role of system models in the context of enterprise system development, and the relationship of system models to software programs in the contexts of computer-aided software engineering (CASE), model driven architecture (MDA) and workflow management systems (WfMS). Then, by applying the analogical reasoning method, this research introduces the concept of model-driven enterprise systems. Their novelty is that system models are extracted from software programs and remain independent of them. In the paradigm of model-driven enterprise systems, system models act as instructors that guide and control the behavior of software programs, and software programs function by interpreting the instructions in system models. This mechanism opens the opportunity to tailor such a system by changing its system models. To make this possible, system models should be represented in a language which can be easily understood by human beings and can also be effectively interpreted by computers; in this research, various semantic representations are investigated to support model-driven enterprise systems. The significance of this research is 1) the transplantation to enterprise systems of the structure that gives modern machines and WfMS their flexibility; and 2) the advancement of MDA by extending the role of system models from guiding system development to controlling system behavior. This research contributes to the area of enterprise systems from three perspectives: 1) a new paradigm of enterprise systems, in which an enterprise system consists of two essential, loosely coupled elements that can exist independently: system models and software programs; 2) semantic representations, which can effectively represent business entities, entity relationships, business logic and information-processing logic in a semantic manner, and which are the key enabling techniques of model-driven enterprise systems; and 3) a new role for system models: traditionally, system models guide developers in writing system source code; this research promotes them to controlling the behavior of enterprise systems.
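
The following toy sketch illustrates the core mechanism as described above: behaviour lives in a declarative system model that a generic program interprets at run time, so tailoring the system means editing the model rather than the code. The process, tasks and semantic representation are invented for illustration and are not the research's actual artefacts.

```python
# A declarative "system model": an ordered list of task names per process.
# Changing this model changes system behaviour without touching the engine.
process_model = {
    "approve_order": ["check_stock", "check_credit", "ship"],
}

def check_stock(order):
    order["stock_ok"] = True
    return order

def check_credit(order):
    order["credit_ok"] = True
    return order

def ship(order):
    order["shipped"] = True
    return order

# Registry mapping model vocabulary to executable behaviour.
TASKS = {"check_stock": check_stock, "check_credit": check_credit, "ship": ship}

def run(process: str, order: dict) -> dict:
    """Generic engine: interprets whatever instructions the model contains."""
    for step in process_model[process]:
        order = TASKS[step](order)
    return order

print(run("approve_order", {"id": 42}))
```

Reordering or removing steps in `process_model` retailors the system at the model level, which is the loose coupling between system models and software programs that the paradigm argues for.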

Relevance: 20.00%

Abstract:

In this paper, I show clear links between the theoretical underpinnings of SFL and those of specific sociological, anthropological, and communication research traditions. My purpose in doing so is to argue that SFL is an excellent interdisciplinary research method for the social sciences, especially considering the emergent form of political economy being touted by new media enthusiasts: the so-called knowledge (or information) economy. To demonstrate the flexibility and salience of SFL in diverse traditions of social research, and as evidence of its ability to be deployed as a flexible research method across formerly impermeable disciplinary and social boundaries, I use analyses from my doctoral research, relating these, theoretically speaking, to specific research traditions in sociology, communication, and anthropology.

Relevance: 20.00%

Abstract:

This paper describes an effective method for signal authentication and spoofing detection in civilian GNSS receivers using the GPS L1 C/A signal and the Galileo E1-B Safety of Life service. The paper discusses various spoofing attack profiles and how the proposed method detects these attacks. The method is relatively low-cost and suitable for numerous mass-market applications. This paper is the subject of a pending patent.

Relevance: 20.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the resulting estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the error on the second half, as well as the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
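
For reference, here is a standard formulation of the quantities described above, in my own notation rather than the paper's, with 0-1 loss:

```latex
% Penalized empirical risk minimization over a class F (standard
% definitions; notation is mine, not quoted from the paper):
\[
  \hat f \;=\; \arg\min_{f \in \mathcal{F}}
      \Big( \hat L_n(f) + \mathrm{pen}(f) \Big),
  \qquad
  \hat L_n(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{ f(X_i) \neq Y_i \}.
\]
% The maximal-discrepancy penalty compares the errors on the two halves
% of the training sample:
\[
  \mathrm{pen}_{\mathrm{MD}}
  \;=\; \max_{f \in \mathcal{F}}
  \left( \frac{2}{n} \sum_{i=1}^{n/2} \mathbf{1}\{ f(X_i) \neq Y_i \}
       \;-\; \frac{2}{n} \sum_{i=n/2+1}^{n} \mathbf{1}\{ f(X_i) \neq Y_i \} \right).
\]
```

Maximising this difference is equivalent to minimising the empirical risk on a copy of the training set in which the labels of one half are flipped, which is why the penalty is computable by standard empirical risk minimisation, as the abstract notes.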

Relevance: 20.00%

Abstract:

Kernel-based learning algorithms work by embedding the data into a Euclidean space and then searching for linear relations among the embedded data points. The embedding is performed implicitly, by specifying the inner products between each pair of points in the embedding space. This information is contained in the so-called kernel matrix, a symmetric positive semidefinite matrix that encodes the relative positions of all points. Specifying this matrix amounts to specifying the geometry of the embedding space and inducing a notion of similarity in the input space: classical model selection problems in machine learning. In this paper we show how the kernel matrix can be learned from data via semidefinite programming (SDP) techniques. When applied to a kernel matrix associated with both training and test data, this gives a powerful transductive algorithm: using the labeled part of the data, one can learn an embedding for the unlabeled part as well, with the similarity between test points inferred from the training points and their labels. Importantly, these learning problems are convex, so we obtain a method for learning both the model class and the function without local minima. Furthermore, this approach leads directly to a convex method for learning the 2-norm soft-margin parameter in support vector machines, solving an important open problem.
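
As a toy illustration of learning a kernel matrix by semidefinite programming, the sketch below (using the cvxpy modelling library) searches over linear combinations of fixed base kernels for the one that best aligns with the label matrix y yᵀ, subject to the combined matrix being positive semidefinite with fixed trace. This is a simplified relative of the approach described above, not the paper's exact program; the data, base kernels and constraint choices are my assumptions.

```python
import numpy as np
import cvxpy as cp

# Toy data: 20 points, labels determined (noisily) by the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=20))

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    return 0.5 * (K + K.T)   # symmetrise against floating-point noise

bases = [rbf(X, g) for g in (0.1, 1.0, 10.0)]
mu = cp.Variable(len(bases))               # mixture weights, may be negative
K = sum(mu[i] * bases[i] for i in range(len(bases)))
target = np.outer(y, y)

# Maximise alignment <K, y y^T> over the PSD cone with fixed trace.
objective = cp.Maximize(cp.sum(cp.multiply(K, target)))
constraints = [K >> 0, cp.trace(K) == len(y)]
cp.Problem(objective, constraints).solve()
print("kernel weights:", mu.value)
```

Because the objective is linear in the weights and the constraints define a convex (spectrahedral) feasible set, the problem has no local minima, which is the convexity property the abstract emphasises.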

Relevance: 20.00%

Abstract:

There are many applications in aeronautics with strong couplings between disciplines. One practical example is Unmanned Aerial Vehicle (UAV) automation, where there is strong coupling between operational constraints, aerodynamics, vehicle dynamics, and mission and path planning. UAV path planning can be done either online or offline. Online path-planning optimisation on UAVs is not yet at the level of its ground-based offline counterpart, mainly because of the volume, power and weight limitations of the UAV: some small UAVs do not have the computational power needed for certain optimisation and path-planning tasks. In this paper, we describe an optimisation method that can be applied to multidisciplinary design optimisation problems and UAV path-planning problems. Hardware-based design optimisation techniques are used: the power and physical limitations of the UAV, which may not be a problem in PC-based solutions, are addressed by utilising a Field Programmable Gate Array (FPGA) as an algorithm accelerator. The latency inherent in the iterative process of an Evolutionary Algorithm (EA) is concealed by exploiting the parallelism within the dataflow paradigm of the EA on an FPGA architecture. Results compare software PC-based solutions with the hardware-based solutions for benchmark mathematical problems as well as a simple real-world engineering problem, and indicate the practicality of the method, which can be used for more complex single- and multi-objective coupled problems in aeronautical applications.
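
For readers unfamiliar with the algorithmic side, here is a minimal software sketch of the evolutionary loop that such hardware accelerates, run on the sphere benchmark function. The point is that every individual's fitness evaluation is independent, which is exactly the parallelism an FPGA dataflow implementation exploits. Population size, mutation scale and the benchmark are illustrative choices, not the paper's configuration.

```python
import numpy as np

# Minimal (mu + lambda)-style evolutionary loop on the sphere benchmark.
# Each row of `pop` is an individual; the fitness of every individual is
# independent, so the evaluation step is the natural candidate for
# parallel hardware.

rng = np.random.default_rng(1)
POP, DIM, GENS, SIGMA = 64, 4, 200, 0.1

def fitness(pop):
    return (pop ** 2).sum(axis=1)   # sphere function, minimum 0 at the origin

pop = rng.uniform(-5, 5, size=(POP, DIM))
for _ in range(GENS):
    f = fitness(pop)                         # evaluated in parallel on an FPGA
    parents = pop[np.argsort(f)[:POP // 2]]  # truncation selection
    children = parents + SIGMA * rng.normal(size=parents.shape)  # mutation
    pop = np.vstack([parents, children])     # elitist replacement

print("best fitness:", fitness(pop).min())
```

In a software loop these steps run sequentially each generation; mapping them onto an FPGA pipeline lets selection, mutation and evaluation of different individuals overlap, which is how the iteration latency the paper mentions is concealed.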