933 results for Set of Weak Stationary Dynamic Actions
Abstract:
Integrated interpretation of multi-beam bathymetric, sediment-penetrating acoustic (PARASOUND) and seismic data shows a multiple slope failure on the northern European continental margin, north of Spitsbergen. The first slide event occurred during MIS 3 around 30 cal. ka BP and was characterised by highly dynamic and rapid evacuation of ca. 1250 km³ of sediment from the lower to the upper part of the continental slope. During this event, headwalls up to 1600 m high were created, and ca. 1150 km³ of material from hemi-pelagic sediments and from the lower pre-existing trough mouth fan was entrained and transported into the semi-enclosed Sophia Basin. This megaslide event was followed by a secondary evacuation of material to the Nansen Basin through funnelling of the debris through the channel between the Polarstern Seamount and the adjacent continental slope. The main slide debris is overlain by a set of fining-upward sequences, evidence for the associated suspension cloud and subsequent minor failure events. Subsequent adjustment of the eastern headwalls led to failure of relatively soft sediments and the creation of smaller debris flows that followed the surficial topography of the main slide. Discharge of the Hinlopen ice stream during the Last Glacial Maximum and the following deglaciation draped the central headwalls and created a fan deposit of glacigenic debris flows.
Abstract:
This paper presents a novel method for determining the temperature of a radiating body. The experimental method requires only very common instrumentation. It is based on the measurement of the stationary temperature of an object placed at different distances from the body and on the application of the energy balance equation in a stationary state. The method allows one to obtain the temperature of an inaccessible radiating body when radiation measurements are not available. The method has been applied to the determination of the filament temperature of incandescent lamps of different powers.
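As an illustration of the stationary energy balance the method relies on (a grey-body sketch with assumed symbols, not the paper's exact formulation): at steady state, the radiation the probe object absorbs from the source and from its surroundings equals what it emits, so measuring the stationary temperature T_st at several distances d allows one to solve for the source temperature T_s.

```latex
% Stationary energy balance for a small probe at distance d from the source
% (illustrative grey-body form; all symbols are assumptions, not from the paper)
\varepsilon\,\sigma\,T_{\mathrm{st}}^{4}(d)
  \;=\;
  \alpha\,\frac{\varepsilon_{s}\,\sigma\,A_{s}\,T_{s}^{4}}{4\pi d^{2}}
  \;+\;
  \alpha\,\sigma\,T_{\mathrm{env}}^{4}
```

Here ε and α denote the probe's emissivity and absorptivity, ε_s and A_s the source's emissivity and radiating area, T_env the ambient temperature, and σ the Stefan-Boltzmann constant.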
Abstract:
Recent research into resting-state functional magnetic resonance imaging (fMRI) has shown that the brain is very active during rest. This thesis uses blood oxygenation level dependent (BOLD) signals to investigate the spatial and temporal functional network information found within resting-state data, assessing the feasibility of extracting functional connectivity networks with different methods as well as the dynamic variability within some of those methods. Furthermore, this work examines whether valid networks can be produced from a sparsely sampled subset of the original data.
In this work we utilize four main methods: independent component analysis (ICA), principal component analysis (PCA), correlation, and a point-processing technique. Each method comes with its own assumptions, as well as strengths and limitations, for exploring how resting-state components interact in space and time.
Correlation is perhaps the simplest technique. Using this technique, resting-state patterns can be identified based on how similar each voxel's time profile is to a seed region's time profile. However, this method requires a seed region and can only identify one resting-state network at a time. This simple correlation technique is able to reproduce the resting-state network using data from a single subject's scan session as well as from 16 subjects.
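A minimal sketch of the seed-based correlation approach described above (array names and shapes are assumptions, not the thesis code):

```python
import numpy as np

def seed_correlation_map(bold, seed_mask):
    """Correlate every voxel's time course with the mean time course of a seed region.

    bold      : array of shape (n_voxels, n_timepoints), BOLD time series
    seed_mask : boolean array of shape (n_voxels,) selecting the seed voxels
    returns   : array of shape (n_voxels,) of Pearson correlations with the seed
    """
    seed_ts = bold[seed_mask].mean(axis=0)              # average seed time course
    bold_c = bold - bold.mean(axis=1, keepdims=True)    # demean each voxel
    seed_c = seed_ts - seed_ts.mean()                   # demean the seed
    num = bold_c @ seed_c
    den = np.sqrt((bold_c ** 2).sum(axis=1) * (seed_c ** 2).sum())
    return num / den
```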
Independent component analysis, the second technique, is supported by established software packages that can be used to implement it. ICA can extract multiple components from a data set in a single analysis. The disadvantage is that the resting-state networks it produces are all independent of each other, under the assumption that the spatial pattern of functional connectivity is the same across all time points. ICA successfully reproduces resting-state connectivity patterns for both a single subject and a 16-subject concatenated data set.
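A minimal sketch of spatial ICA using scikit-learn's FastICA, as a stand-in for the established fMRI ICA packages the thesis refers to (the data layout and component count are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FastICA

def spatial_ica(bold, n_components=20):
    """Decompose resting-state data into spatially independent components.

    bold : array of shape (n_timepoints, n_voxels)
    returns spatial maps (n_components, n_voxels) and time courses (n_timepoints, n_components)
    """
    ica = FastICA(n_components=n_components, max_iter=1000, random_state=0)
    # Fit on the transposed data so that independence is imposed over space,
    # as is standard for spatial ICA of resting-state fMRI.
    maps = ica.fit_transform(bold.T).T          # (n_components, n_voxels)
    time_courses = ica.mixing_                  # (n_timepoints, n_components)
    return maps, time_courses
```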
Using principal component analysis, the dimensionality of the data is reduced to find the directions in which the variance of the data is greatest. This method relies on the same basic matrix algebra as ICA, with a few important differences that are outlined later in this text. Using this method, different functional connectivity patterns are sometimes identifiable, but with a large amount of noise and variability.
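For comparison, a corresponding PCA sketch under the same assumed data layout (the number of components is illustrative):

```python
from sklearn.decomposition import PCA

def pca_components(bold, n_components=20):
    """Project resting-state data onto the directions of greatest variance.

    bold : array of shape (n_timepoints, n_voxels)
    returns spatial maps (n_components, n_voxels) and the variance explained by each
    """
    pca = PCA(n_components=n_components)
    pca.fit(bold)                               # samples = time points, features = voxels
    return pca.components_, pca.explained_variance_ratio_
```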
To begin investigating the dynamics of functional connectivity, the correlation technique is used to compare the first and second halves of a scan session. Minor differences are discernible between the correlation results of the two halves. Further, a sliding-window technique is implemented to study how the correlation coefficients vary over time for different window sizes. From this technique it is apparent that the correlation level with the seed region is not static throughout the scan.
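A minimal sketch of the sliding-window correlation described above (the window length and step are illustrative assumptions, not the thesis's settings):

```python
import numpy as np

def sliding_window_correlation(seed_ts, voxel_ts, window=60, step=5):
    """Correlation between a seed and a voxel time course inside a moving window.

    seed_ts, voxel_ts : 1-D arrays of equal length (time points)
    window, step      : window length and stride, in time points (illustrative values)
    returns           : list of (window start index, Pearson r) pairs
    """
    out = []
    for start in range(0, len(seed_ts) - window + 1, step):
        s = seed_ts[start:start + window]
        v = voxel_ts[start:start + window]
        out.append((start, np.corrcoef(s, v)[0, 1]))
    return out
```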
The last method introduced, a point-process method, is one of the more novel techniques because it does not require analysis of the continuous time series. Here, network information is extracted based on brief occurrences of high- or low-amplitude signals within a seed region. Because point processing uses fewer time points from the data, the statistical power of the results is lower, and there are larger variations in DMN patterns between subjects. In addition to improved computational efficiency, the benefit of a point-process method is that the patterns produced for different seed regions do not have to be independent of one another.
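A minimal sketch of a point-process style analysis: the frames at which the seed signal exceeds a threshold are averaged into a map (the threshold and array layout are assumptions, not the thesis's choices):

```python
import numpy as np

def point_process_map(bold, seed_ts, threshold=1.0):
    """Average the BOLD frames at which the z-scored seed signal crosses a threshold.

    bold      : array of shape (n_voxels, n_timepoints)
    seed_ts   : 1-D seed time course of length n_timepoints
    threshold : z-score threshold defining the 'events' (illustrative value)
    """
    z = (seed_ts - seed_ts.mean()) / seed_ts.std()
    events = np.where(z > threshold)[0]          # time points of high-amplitude seed activity
    return bold[:, events].mean(axis=1)          # event-triggered average map
```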
This work compares four distinct methods of identifying functional connectivity patterns. ICA is a technique currently used by many scientists studying functional connectivity patterns. The PCA technique is not optimal for the level of noise and the distribution of these data sets. The correlation technique is simple and obtains good results; however, a seed region is needed, and the method assumes that the DMN regions are correlated throughout the entire scan. When the more dynamic aspects of correlation were examined, changing patterns of correlation were evident. The final, point-process method produces promising results, identifying functional connectivity networks using only low- and high-amplitude BOLD signals.
Abstract:
The focus of this work is to develop and employ numerical methods that provide characterization of granular microstructures, dynamic fragmentation of brittle materials, and dynamic fracture of three-dimensional bodies.
We first propose the fabric tensor formalism to describe the structure and evolution of lithium-ion electrode microstructure during the calendering process. Fabric tensors are directional measures of particulate assemblies based on inter-particle connectivity, relating to the structural and transport properties of the electrode. Applying this technique to X-ray computed tomography of cathode microstructure, we show that fabric tensors capture the evolution of the inter-particle contact distribution and are therefore good measures of the internal state of, and electronic transport within, the electrode.
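As an illustration of the fabric tensor formalism, a minimal sketch of the standard second-order contact fabric tensor computed from inter-particle contact normals (variable names are assumptions, not the authors' code):

```python
import numpy as np

def fabric_tensor(contact_normals):
    """Second-order fabric tensor from a set of contact normals.

    contact_normals : array of shape (n_contacts, 3), each row a contact normal vector
    returns         : 3x3 tensor F = (1/Nc) * sum over contacts of the outer product n (x) n
    """
    n = np.asarray(contact_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)   # ensure unit normals
    return n.T @ n / len(n)
```

The deviatoric part of this tensor is a common measure of contact anisotropy, which is why such tensors track changes in the contact distribution during compaction.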
We then shift focus to the development and analysis of fracture models within finite element simulations. A difficult problem to characterize in the realm of fracture modeling is that of fragmentation, wherein brittle materials subjected to a uniform tensile loading break apart into a large number of smaller pieces. We explore the effect of numerical precision on the results of dynamic fragmentation simulations using the cohesive element approach on a one-dimensional domain. By introducing random and non-random field variations, we discern that round-off error plays a significant role in establishing a mesh-convergent solution for uniform fragmentation problems. Further, by using differing magnitudes of randomized material properties and mesh discretizations, we find that employing randomness can improve convergence behavior and provide computational savings.
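A small illustrative sketch, under assumed parameter names and not the authors' implementation, of applying a randomized perturbation to the cohesive strength field on a one-dimensional mesh:

```python
import numpy as np

def perturbed_strengths(n_elements, mean_strength, relative_magnitude, seed=0):
    """Assign cohesive strengths with a small uniform random perturbation.

    n_elements         : number of potential failure sites along the 1-D bar
    mean_strength      : nominal cohesive strength
    relative_magnitude : perturbation amplitude as a fraction of the mean (e.g. 1e-3)
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-relative_magnitude, relative_magnitude, size=n_elements)
    return mean_strength * (1.0 + noise)
```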
The Thick Level-Set model is implemented to describe brittle media undergoing dynamic fragmentation as an alternative to the cohesive element approach. This non-local damage model features a level-set function that defines the extent and severity of degradation and uses a length scale to limit the damage gradient. In terms of energy dissipated by fracture and mean fragment size, we find that the proposed model reproduces the rate-dependent observations of analytical approaches, cohesive element simulations, and experimental studies.
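For orientation, one common form of the Thick Level-Set relationship between the level-set field, the characteristic length and damage is sketched below; the exact damage profile used in this work may differ. With the level-set field a signed distance to the damage front, its gradient has unit norm, which bounds the damage gradient:

```latex
% One common Thick Level-Set damage profile (a sketch; the exact profile may differ)
d(\phi) =
\begin{cases}
  0,            & \phi \le 0, \\
  \phi / l_{c}, & 0 < \phi < l_{c}, \\
  1,            & \phi \ge l_{c},
\end{cases}
\qquad \lVert \nabla \phi \rVert = 1 .
```

Under this profile the damage gradient is bounded by 1/l_c, which is the regularization that the length scale provides.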
Lastly, the Thick Level-Set model is implemented in three dimensions to describe the dynamic failure of brittle media, such as the active material particles in the battery cathode during manufacturing. The proposed model matches expected behavior from physical experiments, analytical approaches, and numerical models, and mesh convergence is established. We find that the use of an asymmetrical damage model to represent tensile damage is important to producing the expected results for brittle fracture problems.
The impact of this work is that designers of lithium-ion battery components can employ the numerical methods presented herein to analyze the evolving electrode microstructure during manufacturing, operational, and extraordinary loadings. This allows for enhanced designs and manufacturing methods that advance the state of battery technology. Further, these numerical tools have applicability in a broad range of fields, from geotechnical analysis to ice-sheet modeling to armor design to hydraulic fracturing.
Abstract:
This report documents the development of the initial dynamic policy mixes that were developed for assessment in the DYNAMIX project. The policy mixes were designed within three different policy areas: overarching policy, land-use and food, and metals and other materials. The policy areas were selected to address absolute decoupling in general and, specifically, the DYNAMIX targets related to the use of virgin metals, the use of arable land and freshwater, the input of the nutrients nitrogen and phosphorus, and emissions of greenhouse gases. Each policy mix was developed by a separate author team, using a common methodological framework that utilizes previous findings in the project. Specific drivers and barriers for resource use and resource efficiency are discussed in each policy area. Specific policy objectives and targets are also discussed before the actual policy mix is presented. Each policy mix includes a set of key instruments, which can be embedded in a wider set of supporting and complementary policy instruments. All key instruments are described in the report through responses to a set of predefined questions. The overarching mix includes a broad variety of key instruments. The land-use policy mix emphasizes five instruments to improve food production through, for example, revisions of already existing policy documents. It also includes three instruments to influence food consumption and food waste. The policy mix on metals and other materials primarily aims at reducing the use of virgin metals through increased recycling, increased material efficiency and environmentally justified material substitution. To avoid simply shifting burdens, it includes several instruments of an overarching character.
Abstract:
Concept maps are a technique used to obtain a visual representation of a person's ideas about a concept or a set of related concepts. Specifically, in this paper, through a qualitative methodology, we analyze the concept maps proposed by 52 groups of teacher training students in order to find out the characteristics of the maps and the degree of adequacy of their contents with regard to the teaching of human nutrition in the 3rd cycle of primary education. The participants were enrolled in the Teacher Training Degree majoring in Primary Education, and the data collection was carried out through a training activity under the theme 'What to teach about science in primary school?'. The results show that the maps are a useful tool for working in teacher education, as they allow organizing, synthesizing, and communicating what students know. Moreover, through this work, it has been possible to see that future teachers have acceptable skills for representing concepts/ideas in a concept map, although the level of adequacy of the concepts/ideas about human nutrition and their relations is usually medium or low. These results are a wake-up call for teacher training, both initial and ongoing, because they show an inability to change priorities as far as the selection of content is concerned.
Abstract:
This paper examines what types of actions undertaken by patent holders have been considered abusive in the framework of French and Belgian patent litigation. Particular attention is given to the principle of the prohibition of “abuse of rights” (AoR). In the jurisdictions under scrutiny, the principle of AoR is essentially a jurisprudential construction in cases where judges faced a particular set of circumstances for which no codified rules were available. To investigate how judges deal with the prohibition of AoR in patent litigation, and taking into account the jurisprudential nature of the principle, an in-depth comparative case law analysis has been conducted. Although the number of cases in which patent holders have been sanctioned for such abuses is not large, those cases do provide sufficient indication of what Belgian and French courts understand to constitute an abuse of patent rights. From this comparative analysis, useful lessons can be learned for the interpretation of the ambiguous notion of ‘abuse’ from a broader perspective.
Abstract:
Academic literature has increasingly recognized the value of non-traditional higher education learning environments that emphasize action-orientated experiential learning for the study of entrepreneurship (Gibb, 2002; Jones & English, 2004). Many entrepreneurship educators have accordingly adopted approaches based on Kolb’s (1984) experiential learning cycle to develop a dynamic, holistic model of an experience-based learning process. Jones and Iredale (2010) suggested that entrepreneurship education requires experiential learning styles and creative problem solving to effectively engage students. Support has also been expressed for learning-by-doing activities in group or network contexts (Rasmussen and Sorheim, 2006), and for student-led approaches (Fiet, 2001). This study builds on previous work by exploring the use of experiential learning in an applied setting to develop entrepreneurial attitudes and traits in students. Based on the above literature, a British higher education institution (HEI) implemented a new, entrepreneurially focused curriculum during the 2013/14 academic year designed to support and develop students’ entrepreneurial attitudes and intentions. The approach actively involved students in small-scale entrepreneurship activities by providing scaffolded opportunities for students to design and enact their own entrepreneurial concepts. Students were provided with the necessary resources and training to run small entrepreneurial ventures in three different working environments. During the course of the year, three applied entrepreneurial opportunities were provided for students, increasing in complexity, length, and profitability as the year progressed. For the first undertaking, the class was divided into small groups, and each group was given a time slot and venue to run a pop-up shop in a busy commercial shopping centre. Each group of students was supported by lectures and dedicated class time for group work, while receiving a set of objectives and recommended resources. For the second venture, groups of students were given the opportunity to utilize an on-campus bar/club for an evening and were asked to organize and run a profitable event, acting as an outside promoter. Students were supported with lectures and seminars, and groups were given a £250 budget to develop, plan, and market their unique event. The final event was optional and required initiative on the part of the students. Students were given the opportunity to develop and put forward business plans to be judged by the HEI and the supporting organizations, which selected the winning plan. The authors of the winning business plan received a £2000 budget and a six-week lease on a commercial retail unit within a shopping centre to run their business. Students received additional academic support upon request from the instructor, and one of the supporting organizations provided a training course offering advice on creating a budget and a business plan. Data from students taking part in each of the events were collected in order to ascertain the learning benefits of the experiential learning, along with the successes and difficulties the students faced. These responses have been collected and analyzed and will be presented at the conference, along with the instructor’s conclusions and recommendations for the use of such programs in higher education.
Abstract:
This thesis evaluates the rheological behaviour of asphalt mixtures and of the binders extracted from mixtures containing different amounts of Reclaimed Asphalt (RA). Generally, the use of RA is limited to certain amounts. The studied materials are Stone Mastic Asphalts, including a control sample with 0% RA and samples with RA rates of 30%, 60% and 100%. Another set of studied mixtures are Asphalt Concrete (AC) types, again with a control mix containing 0% RA and mixture designs containing 30%, 60% and 90% reclaimed asphalt, which also contain additives. In addition to the bitumen samples extracted from the asphalt mixes, there are bitumen samples extracted directly from the original RA. To characterize the viscoelastic behaviour of the binders, Dynamic Shear Rheometer (DSR) tests were conducted on the bitumen specimens. The influence of the RA content on the bituminous binders is illustrated through master curves, black diagrams and Cole-Cole plots, obtained by regressing the experimental data with the analogical 2S2P1D model and the analytical CA model. The advantage of the CA model lies in its limited number of parameters, which makes it a simple model to use. The 2S2P1D model is an analogical rheological model for predicting the linear viscoelastic properties of both asphalt binders and mixtures. In order to study the influence of RA on the mixtures, the Indirect Tensile Test (ITT) was conducted. The master curves of the different mixture samples are obtained by regressing the test data points to a sigmoidal function, and by comparing the master curves the influence of the RA materials is studied. The thesis also focuses on the applicability of, and differences between, the CA and 2S2P1D models for the bitumen samples and the sigmoidal function for the mixtures, and presents the influence of the RA rate on the investigated model parameters.
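A minimal sketch of the sigmoidal regression step, assuming the commonly used form log|E*| = delta + alpha / (1 + exp(beta + gamma * log fr)) and synthetic data standing in for the ITT-derived stiffness points (parameter names and values are illustrative, not the thesis code):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_sigmoid(log_fr, delta, alpha, beta, gamma):
    """log10|E*| as a sigmoidal function of log10 reduced frequency."""
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_fr))

# Synthetic 'measurements' standing in for stiffness data from the mixture tests.
log_fr = np.linspace(-4, 4, 25)                       # log10 reduced frequency
log_e = log_sigmoid(log_fr, 1.0, 3.2, -0.8, -0.6)     # noiseless sigmoid
log_e += np.random.default_rng(0).normal(0, 0.02, log_e.size)

popt, _ = curve_fit(log_sigmoid, log_fr, log_e, p0=[1.0, 3.0, -1.0, -0.5])
print(popt)   # fitted delta, alpha, beta, gamma
```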
Abstract:
Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the ‘‘plastic‐CML’’ as a paradigm to extract (segment) objects in an image. Here, the image is applied as a set of forces to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our ‘‘plastic‐CML’’ in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and present examples of it at work. We conclude that the plastic‐CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for a very-large-scale integration (VLSI) implementation.
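For reference, a generic one-dimensional diffusively coupled map lattice update (the standard CML form; the plastic-CML's specific local map and coupling are not reproduced here):

```python
import numpy as np

def cml_step(x, f, eps=0.3):
    """One update of a 1-D diffusively coupled map lattice with periodic boundaries.

    x   : lattice state, 1-D array
    f   : local map applied element-wise (e.g. the logistic map)
    eps : coupling strength between neighbouring sites
    """
    fx = f(x)
    return (1.0 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))

# Example: iterate a logistic-map CML from a random initial state
x = np.random.default_rng(0).random(256)
for _ in range(100):
    x = cml_step(x, lambda u: 4.0 * u * (1.0 - u))
```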
Abstract:
A fully coupled, non-linear, effective-stress finite difference (FD) model is built to examine the counter-intuitive recent findings on the dependence of the pore water pressure ratio on foundation contact pressure. Two alternative design scenarios for a benchmark problem are explored and contrasted in the light of construction emission rates using the EFFC-DFI methodology. A strain-hardening effective stress plasticity model is adopted to simulate the dynamic loading. A combination of input motions, contact pressures, initial vertical total pressures and distances to the foundation centreline are employed as model variables to further investigate the control of permanent and variable actions on the residual pore pressure ratio. The model is verified against the Ghosh and Madabhushi high acceleration field test database. The outputs of this work are aimed at improving current computer-aided seismic foundation design, which relies on the ground’s packing state and consistency. The results confirm that, on seismic excitation of shallow foundations, the likelihood of effective stress loss is greater at larger depths and across the free field. For the benchmark problem, adopting a shallow foundation system instead of a piled foundation resulted in a 75% lower emission rate, a marked proportion of which is owed to reduced materials and haulage carbon cost.
Abstract:
Deployment of low power basestations within cellular networks can potentially increase both capacity and coverage. However, such deployments require efficient resource allocation schemes for managing interference from the low power and macro basestations that are located within each other’s transmission range. In this dissertation, we propose novel and efficient dynamic resource allocation algorithms in the frequency, time and space domains. We show that the proposed algorithms perform better than the current state-of-the-art resource management algorithms. In the first part of the dissertation, we propose an interference management solution in the frequency domain. We introduce a distributed frequency allocation scheme that shares frequencies between macro and low power pico basestations, and guarantees a minimum average throughput to users. The scheme seeks to minimize the total number of frequencies needed to honor the minimum throughput requirements. We evaluate our scheme using detailed simulations and show that it performs on par with the centralized optimum allocation. Moreover, our proposed scheme outperforms a static frequency reuse scheme and the centralized optimal partitioning between the macro and pico basestations. In the second part of the dissertation, we propose a time domain solution to the interference problem. We consider the problem of maximizing the alpha-fairness utility over heterogeneous wireless networks (HetNets) by jointly optimizing user association, wherein each user is associated with any one transmission point (TP) in the network, and the activation fractions of all TPs. The activation fraction of a TP is the fraction of the frame duration for which it is active, and together these fractions influence the interference seen in the network. To address this joint optimization problem, which we show is NP-hard, we propose an alternating optimization based approach wherein the activation fractions and the user association are optimized in an alternating manner. The subproblem of determining the optimal activation fractions is solved using a provably convergent auxiliary function method, while the subproblem of determining the user association is solved via a simple combinatorial algorithm. Meaningful performance guarantees are derived in each case. Simulation results over a practical HetNet topology reveal the superior performance of the proposed algorithms and underscore the significant benefits of the joint optimization. In the final part of the dissertation, we propose a space domain solution to the interference problem. We consider the problem of maximizing system utility by optimizing over the set of user and TP pairs in each subframe, where each user can be served by multiple TPs. To address this optimization problem, which is NP-hard, we propose a solution scheme based on a difference of submodular functions optimization approach. We evaluate our scheme using detailed simulations and show that it performs on par with a much more computationally demanding difference of convex functions optimization scheme. Moreover, the proposed scheme performs within a reasonable percentage of the optimal solution. We further demonstrate the advantage of the proposed scheme by studying its performance with variation in different network topology parameters.
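For reference, the alpha-fairness utility mentioned above has the standard form, with x a user's long-term rate and alpha >= 0 the fairness parameter; alpha = 0 recovers sum-rate maximization and large alpha approaches max-min fairness:

```latex
% Standard alpha-fair utility of a rate x > 0
U_{\alpha}(x) =
\begin{cases}
  \dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \ge 0,\ \alpha \neq 1, \\[1ex]
  \log x,                         & \alpha = 1.
\end{cases}
```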
Abstract:
In today's fast-paced and interconnected digital world, the data generated by an increasing number of applications is being modeled as dynamic graphs. The graph structure encodes relationships among data items, while the structural changes to the graphs as well as the continuous stream of information produced by the entities in these graphs make them dynamic in nature. Examples include social networks where users post status updates, images, videos, etc.; phone call networks where nodes may send text messages or place phone calls; road traffic networks where the traffic behavior of the road segments changes constantly; and so on. There is tremendous value in storing, managing, and analyzing such dynamic graphs and deriving meaningful insights in real-time. However, a majority of the work in graph analytics assumes a static setting, and there is a lack of systematic study of the various dynamic scenarios, the complexity they impose on the analysis tasks, and the challenges in building efficient systems that can support such tasks at a large scale. In this dissertation, I design a unified streaming graph data management framework, and develop prototype systems to support increasingly complex tasks on dynamic graphs. In the first part, I focus on the management and querying of distributed graph data. I develop a hybrid replication policy that monitors the read-write frequencies of the nodes to decide dynamically what data to replicate, and whether to do eager or lazy replication in order to minimize network communication and support low-latency querying. In the second part, I study parallel execution of continuous neighborhood-driven aggregates, where each node aggregates the information generated in its neighborhoods. I build my system around the notion of an aggregation overlay graph, a pre-compiled data structure that enables sharing of partial aggregates across different queries, and also allows partial pre-computation of the aggregates to minimize the query latencies and increase throughput. Finally, I extend the framework to support continuous detection and analysis of activity-based subgraphs, where subgraphs could be specified using both graph structure as well as activity conditions on the nodes. The query specification tasks in my system are expressed using a set of active structural primitives, which allows the query evaluator to use a set of novel optimization techniques, thereby achieving high throughput. Overall, in this dissertation, I define and investigate a set of novel tasks on dynamic graphs, design scalable optimization techniques, build prototype systems, and show the effectiveness of the proposed techniques through extensive evaluation using large-scale real and synthetic datasets.
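A small illustrative sketch, not the dissertation's implementation, of the kind of per-node read/write-frequency test a hybrid replication policy can apply to choose between eager replication, lazy replication, and no replication (the thresholds are invented):

```python
def replication_decision(reads, writes, eager_threshold=5.0, replicate_threshold=1.0):
    """Choose a replication mode for a graph node from its observed access mix.

    reads, writes : read and write counts observed for the node over a monitoring window
    thresholds    : illustrative read/write-ratio cut-offs, not values from the dissertation
    """
    ratio = reads / max(writes, 1)
    if ratio >= eager_threshold:       # read-heavy: keep remote replicas synchronously updated
        return "eager"
    if ratio >= replicate_threshold:   # mixed workload: replicate, but push updates lazily
        return "lazy"
    return "none"                      # write-heavy: cheaper to fetch remotely on demand
```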
Abstract:
Authentication plays an important role in how we interact with computers, mobile devices, the web, etc. The idea of authentication is to uniquely identify a user before granting access to system privileges. For example, in recent years more corporate information and applications have become accessible via the Internet and intranets. Many employees work from remote locations and need access to secure corporate files. During this time, it is possible for malicious or unauthorized users to gain access to the system. For this reason, it is logical to have some mechanism in place to detect whether the logged-in user is the same user in control of the session. Therefore, highly secure authentication methods must be used. We posit that each of us is unique in our use of computer systems. It is this uniqueness that is leveraged to "continuously authenticate users" while they use web software. To monitor user behavior, n-gram models are used to capture user interactions with web-based software. This statistical language model essentially captures sequences and sub-sequences of user actions, their orderings, and temporal relationships that make them unique, providing a model of how each user typically behaves. Users are then continuously monitored during software operation. Large deviations from "normal behavior" can indicate malicious or unintended behavior. This approach is implemented in a system called Intruder Detector (ID) that models user actions as embodied in web logs generated in response to a user's actions. User identification through web logs is cost-effective and non-intrusive. We perform experiments on a large fielded system with web logs of approximately 4000 users. For these experiments, we use two classification techniques: binary and multi-class classification. We evaluate model-specific differences in user behavior based on coarse-grain (i.e., role) and fine-grain (i.e., individual) analysis. A specific set of metrics is used to provide valuable insight into how each model performs. Intruder Detector achieves accurate results when identifying legitimate users and user types. This tool is also able to detect outliers in role-based user behavior with optimal performance. In addition to web applications, this continuous monitoring technique can be used with other user-based systems such as mobile devices and the analysis of network traffic.
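A minimal sketch, with invented action labels and an add-alpha smoothing choice that is an assumption rather than the thesis's method, of building an n-gram model over logged user actions and scoring how typical a new session is:

```python
from collections import Counter
import math

def train_ngrams(sessions, n=3):
    """Count n-grams of user actions over a user's historical sessions."""
    counts = Counter()
    for actions in sessions:                       # each session is a list of action strings
        for i in range(len(actions) - n + 1):
            counts[tuple(actions[i:i + n])] += 1
    return counts

def session_score(actions, counts, n=3, alpha=1.0):
    """Average smoothed log-probability of a session's n-grams under the trained counts."""
    total = sum(counts.values())
    vocab = max(len(counts), 1)
    score, m = 0.0, 0
    for i in range(len(actions) - n + 1):
        c = counts[tuple(actions[i:i + n])]
        score += math.log((c + alpha) / (total + alpha * vocab))
        m += 1
    return score / max(m, 1)                        # low values suggest atypical behaviour

# Illustrative usage with invented action labels
model = train_ngrams([["login", "view_report", "export", "logout"]] * 20)
print(session_score(["login", "delete_user", "export", "logout"], model))
```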
Abstract:
Value and reasons for action are often cited by rationalists and moral realists as providing a desire-independent foundation for normativity. Those maintaining instead that normativity is dependent upon motivation often deny that anything called 'value' or 'reasons' exists. According to the interest-relational theory, something has value relative to some perspective of desire just in case it satisfies those desires, and a consideration is a reason for some action just in case it indicates that something of value will be accomplished by that action. Value judgements therefore describe real properties of objects and actions, but have no normative significance independent of desires. It is argued that only the interest-relational theory can account for the practical significance of value and reasons for action. Against the Kantian hypothesis of prescriptive rational norms, I attack the alleged instrumental norm or hypothetical imperative, showing that the normative force for taking the means to our ends is explicable in terms of our desire for the end, and not as a command of reason. This analysis also provides a solution to the puzzle concerning the connection between value judgement and motivation. While it is possible to hold value judgements without motivation, the connection is more than accidental. This is because value judgements are usually, but not always, made from the perspective of desires that actually motivate the speaker. In the normal case, judgement entails motivation. But often we conversationally borrow external perspectives of desire, and subsequent judgements do not entail motivation. This analysis drives a critique of a common practice as a misuse of normative language. The "absolutist" attempts to use and, as a philosopher, analyze normative language in such a way as to justify the imposition of certain interests over others. But these uses and analyses are incoherent: in denying relativity to particular desires, they conflict with the actual meaning of these utterances, which is always indexed to some particular set of desires.