833 results for Many-To-One Matching Market


Relevance:

100.00%

Publisher:

Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed in part to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail are used, and computer-intensive, as many interactions are required between agents, which can learn and pursue a goal. With the growing availability of data and the increase in computing power, these concerns are, however, fading. Nonetheless, updating or extending the model as more information becomes available can become problematic because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into (a) assets, which describe the entities' physical characteristics, and (b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics remain the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on the simulation to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels), or to describe the assets and their relation to one another (e.g. the network assets). Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets.
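The asset/agent separation described above can be sketched in a few lines. This is a minimal illustration, not MODAM's actual API: the class and method names, and the two battery behaviours, are invented for the example.

```python
from dataclasses import dataclass

# Illustrative sketch of separating an asset (physical characteristics only)
# from the agents that drive it. Two physically identical batteries can
# behave differently depending on which agent controls them.

@dataclass
class BatteryAsset:
    capacity_kwh: float        # physical characteristics only
    depth_of_discharge: float  # usable fraction of capacity
    charge_kwh: float = 0.0

class PeakShavingAgent:
    """Discharges the battery when demand exceeds a threshold."""
    def __init__(self, asset: BatteryAsset, threshold_kw: float):
        self.asset, self.threshold_kw = asset, threshold_kw

    def step(self, demand_kw: float) -> float:
        if demand_kw > self.threshold_kw and self.asset.charge_kwh > 0:
            delivered = min(demand_kw - self.threshold_kw, self.asset.charge_kwh)
            self.asset.charge_kwh -= delivered
            return delivered
        return 0.0

class SolarShiftAgent:
    """Stores surplus solar generation in the same kind of asset."""
    def __init__(self, asset: BatteryAsset):
        self.asset = asset

    def step(self, surplus_kw: float) -> float:
        usable = self.asset.capacity_kwh * self.asset.depth_of_discharge
        stored = max(min(surplus_kw, usable - self.asset.charge_kwh), 0.0)
        self.asset.charge_kwh += stored
        return stored

# Two identical batteries, two different behaviours:
a1 = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8, charge_kwh=5.0)
a2 = BatteryAsset(capacity_kwh=10.0, depth_of_discharge=0.8)
peak = PeakShavingAgent(a1, threshold_kw=3.0)
solar = SolarShiftAgent(a2)
delivered = peak.step(demand_kw=5.0)   # battery 1 shaves 2 kW off the peak
stored = solar.step(surplus_kw=4.0)    # battery 2 absorbs 4 kWh of surplus
```

Swapping the agent while reusing the asset is the composability point made above: a simulation can be reconfigured by pairing the same asset data with a different behaviour plugin.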
Simulations have been run to understand the potential impact of changes to the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is planned as future work with the addition of electric vehicles.

Relevance:

100.00%

Publisher:

Abstract:

Safety has long been a problem in the construction industry. The repair, maintenance, alteration and addition (RMAA) sector has emerged to play an important role in the construction industry, accounting for 53% of the total construction market in Hong Kong in 2007. The safety performance of RMAA works has been alarming: statistics indicate that the percentage of fatal industrial accidents arising from RMAA work in Hong Kong was over 56% in 2006, while the remaining 44% was from new works. Effective safety measures to address the safety problems and improve the safety performance of the RMAA sector are urgently needed. Unsafe behaviour has been identified as one of the major causes of accidents, and traditional cost-benefit analysis of workers' safety behaviour seems inadequate. This paper proposes a game-theoretic approach to analyse the safety behaviour of RMAA workers. Game theory is concerned with decision-making in situations where outcomes depend upon choices made by one or more players. A game-theoretic model between contractor and worker is presented, a mathematical analysis of the model is carried out, and the implications of the analysis are discussed.
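A contractor-worker safety game of the kind described above can be sketched as a 2x2 normal-form game and searched for pure-strategy Nash equilibria. The payoff numbers below are invented for the illustration (they are not from the paper); they are chosen so that the mutually unsafe outcome is the unique equilibrium even though mutual safety pays both players more, which is the kind of incentive problem such models expose.

```python
# Hypothetical payoffs: contractor chooses to invest / not invest in safety
# measures; worker chooses to comply / not comply with safe procedures.
payoffs = {
    # (contractor_action, worker_action): (contractor_payoff, worker_payoff)
    ("invest", "comply"):         (2, 2),
    ("invest", "not_comply"):     (-1, 3),
    ("not_invest", "comply"):     (3, -1),
    ("not_invest", "not_comply"): (0, 0),
}

def pure_nash(payoffs):
    """Return all pure-strategy Nash equilibria of a 2-player game."""
    c_actions = {c for c, _ in payoffs}
    w_actions = {w for _, w in payoffs}
    equilibria = []
    for c in c_actions:
        for w in w_actions:
            uc, uw = payoffs[(c, w)]
            # neither player can gain by deviating unilaterally
            c_best = all(payoffs[(c2, w)][0] <= uc for c2 in c_actions)
            w_best = all(payoffs[(c, w2)][1] <= uw for w2 in w_actions)
            if c_best and w_best:
                equilibria.append((c, w))
    return equilibria

eq = pure_nash(payoffs)   # -> [("not_invest", "not_comply")]
```

With these payoffs the game has a prisoner's-dilemma structure: (invest, comply) gives both players 2, yet each can gain by deviating, so the only equilibrium is the unsafe (not_invest, not_comply) outcome.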

Relevance:

100.00%

Publisher:

Abstract:

Human saliva mirrors the body's health and well-being, and many of the biomolecules present in blood or urine can also be found in salivary secretions. However, biomolecular concentrations in saliva are usually one tenth to one thousandth of the levels in blood (Pfaffe et al., 2011). Sensitive detection technology platforms are therefore required to detect biomolecules in saliva. Another roadblock to the advancement of salivary diagnostics is the lack of information on healthy-state versus diseased saliva, baseline levels, reference ranges and diurnal variations. Saliva has numerous advantages over blood or urine as a diagnostic fluid: (a) the non-invasive nature of sample collection and the simple, safe, painless and cost-effective methods to collect it; (b) unskilled personnel can collect saliva samples at multiple time points; and (c) the total protein concentration is approximately a quarter of that present in plasma, which makes it easier to investigate low-abundance proteins (Pfaffe et al., 2011). Currently, saliva assays are routinely used to detect diseases such as HIV; to screen for drugs and substances of abuse, providing information on exposure and qualitative information on the type of illicit drug used (Kintz et al., 2009); to measure cortisol levels for diagnosing Cushing's syndrome (Doi et al., 2008); for biomonitoring of exposure to chemicals (Caporossi et al., 2010); and to measure hormones (Gröschl, 2009)...

Relevance:

100.00%

Publisher:

Abstract:

In fisheries managed using individual transferable quotas (ITQs), it is generally assumed that quota markets are well-functioning, allowing quota to flow, on either a temporary or permanent basis, to those able to make best use of it. However, despite an increasing number of fisheries being managed under ITQs, empirical assessments of the quota markets that have actually evolved in these fisheries remain scarce. The Queensland Coral Reef Fin-Fish Fishery (CRFFF) on the Great Barrier Reef has been managed under a system of ITQs since 2004. Data on individual quota holdings and trades for the period 2004-2012 were used to assess the CRFFF quota market and its evolution through time. Network analysis was applied to assess market structure and the nature of lease-trading relationships. An assessment of market participants' abilities to balance their quota accounts, i.e., gap analysis, provided insights into market functionality and how it may have changed over the period observed. Trends in ownership and trade were determined, and market participants were identified as belonging to one of seven generalized types. The emergence of groups such as investors and lease-dependent fishers is clear. In 2011-2012, 41% of coral trout quota was owned by participants who did not fish it, and 64% of total coral trout landings were made by fishers who owned only 10% of the quota. Quota brokers emerged whose influence on the market varied with the bioeconomic conditions of the fishery. Throughout the study period some quota remained inactive, implying potential market inefficiencies. Contribution to this inactivity appeared asymmetrical, with most residing in the hands of smaller quota holders. The importance of transaction costs in the operation of the quota market, and the inequalities that may result, are discussed in light of these findings.
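The gap analysis described above can be sketched as simple ledger arithmetic: a participant's gap is the catch they landed minus the quota they held after permanent ownership and temporary lease trades are netted out. The participants and quantities below are invented for the example; the paper's actual accounting rules may differ.

```python
# Hypothetical quota ledger (tonnes). "A" is an investor-type owner who
# leases most quota out; "B" and "C" are lease-dependent fishers.
owned    = {"A": 100.0, "B": 10.0, "C": 0.0}    # permanently owned quota
landings = {"A": 10.0,  "B": 60.0, "C": 30.0}   # catch actually taken
leases   = [("A", "B", 50.0), ("A", "C", 30.0)]  # (from, to, tonnes)

def quota_gaps(owned, landings, leases):
    """Landings minus post-trade holdings, per participant."""
    held = dict(owned)
    for src, dst, qty in leases:            # temporary transfers move holdings
        held[src] -= qty
        held[dst] = held.get(dst, 0.0) + qty
    # positive gap: landed more than held; negative: quota left unfished
    return {p: landings.get(p, 0.0) - held.get(p, 0.0) for p in held}

gaps = quota_gaps(owned, landings, leases)
# A holds 20 t after leasing out 80 t but lands only 10 t: 10 t inactive.
```

A negative gap flags the inactive quota the abstract mentions; summing negative gaps across participants gives a crude measure of unused quota in the market.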

Relevance:

100.00%

Publisher:

Abstract:

Urban planning policies in Australia presuppose apartments as the new dominant housing type, but much of what the market has delivered is criticised as over-development, and as generic, poorly designed, environmentally unsustainable and unaffordable. Policy responses to this problem typically focus on planning regulation and construction costs as the primary issues to be addressed in order to increase the supply of quality, affordable apartment housing. In contrast, this paper uses Ball's (1983) 'structures of provision' approach to outline the key processes informing apartment development and identifies a substantial gap in critical understanding of how apartments are developed in Australia. This reveals economic problems not typically considered by policymakers. Using mainstream economic analysis to review the market itself, the authors found that high search costs, demand risk, problems with exchange, and lack of competition present key barriers to achieving greater affordability, and limit the extent to which 'speculative' developers can respond to the preferences of would-be owner-occupiers of apartments. The existing development model, which relies on capturing uplift in site value, suits investors seeking rental yields in the first instance and capital gains in the second, and actively encourages housing price inflation. This is exacerbated by a lack of density restrictions, such as has existed in inner Melbourne for many years, which permits greater yields on redevelopment sites. The price of land in the vicinity of such redevelopment sites is pushed up as landholders' expectations of future yield are raised. All too frequently, existing redevelopment sites go back onto the market as vendors seek to capture the uplift in site value and exit the project in a risk-free manner...

Relevance:

100.00%

Publisher:

Abstract:

Conventional thinking holds that increased energy consumption is a prerequisite for economic and social development. This belief, together with the prospect of dwindling global petroleum supplies and the generally high costs of expanding energy supply, leads many to believe that it is not feasible to substantially improve living standards in the developing countries. But by shifting to high-quality energy carriers and by exploiting cost-effective opportunities for more efficient energy use, it would be possible to satisfy basic human needs and to provide considerable further improvements in living standards without significantly increasing per-capita energy use above present levels.

Relevance:

100.00%

Publisher:

Abstract:

Eggplant was identified as another fruit fly host commodity for which recent changes to interstate market access requirements are causing problems for industry. The proposed research aims to develop a systems approach to meet these requirements.

Relevance:

100.00%

Publisher:

Abstract:

Non-Technical Summary

Seafood CRC Project 2009/774: Harvest strategy evaluations and co-management for the Moreton Bay Trawl Fishery

Principal Investigator: Dr Tony Courtney, Principal Fisheries Biologist, Fisheries and Aquaculture, Agri-Science Queensland, Department of Agriculture, Fisheries and Forestry, Level B1, Ecosciences Precinct, Joe Baker St, Dutton Park, Queensland 4102. Email: tony.courtney@daff.qld.gov.au

Project objectives:
1. Review the literature and data (i.e., economic, biological and logbook) relevant to the Moreton Bay trawl fishery.
2. Identify and prioritise management objectives for the Moreton Bay trawl fishery, as identified by the trawl fishers.
3. Undertake an economic analysis of the Moreton Bay trawl fishery.
4. Quantify long-term changes to fishing power for the Moreton Bay trawl fishery.
5. Assess priority harvest strategies identified in 2 (above). Present results to, and discuss results with, the Moreton Bay Seafood Industry Association (MBSIA), fishers and Fisheries Queensland.

Note: Additional, specific objectives for 2 (above) were developed by fishers and the MBSIA after commencement of the project. These are presented in detail in section 5 (below).

The project was an initiative of the MBSIA, primarily in response to falling profitability in the Moreton Bay prawn trawl fishery. The analyses were undertaken by a consortium of DAFF, CSIRO and University of Queensland researchers. This report adopted the Australian Standard Fish Names (http://www.fishnames.com.au/).

Trends in catch and effort

The Moreton Bay otter trawl fishery is a multispecies fishery, with the majority of the catch composed of Greasyback Prawns (Metapenaeus bennettae), Brown Tiger Prawns (Penaeus esculentus), Eastern King Prawns (Melicertus plebejus), squid (Uroteuthis spp., Sepioteuthis spp.), Banana Prawns (Fenneropenaeus merguiensis), Endeavour Prawns (Metapenaeus ensis, Metapenaeus endeavouri) and Moreton Bay bugs (Thenus parindicus).
Other commercially important byproduct includes blue swimmer crabs (Portunus armatus), three-spot crabs (Portunus sanguinolentus), cuttlefish (Sepia spp.) and mantis shrimp (Oratosquilla spp.). Logbook catch and effort data show that total annual reported catch of prawns from the Moreton Bay otter trawl fishery has declined from a maximum of 901 t in 1990 to 315 t in 2008. The number of active licensed vessels participating in the fishery has also declined, from 207 in 1991 to 57 in 2010. Similarly, fishing effort has fallen from a peak of 13,312 boat-days in 1999 to 3817 boat-days in 2008 – a 71% reduction. The declines in catch and effort are largely attributed to reduced profitability in the fishery due to increased operational costs and depressed prawn prices. The low prawn prices appear to be attributable to Australian aquacultured prawns and imported aquacultured vannamei prawns displacing the markets for trawl-caught prawns, especially small species such as Greasyback Prawns, which traditionally dominated landings in Moreton Bay. In recent years, the relatively high Australian dollar has reduced exports of Australian wild-caught prawns. This has increased supply on the domestic market, which has further suppressed prices. Since 2002, Brown Tiger Prawns have dominated annual reported landings in the Moreton Bay fishery. While total catch and effort in the bay have declined to historically low levels, the annual catch and catch rates of Brown Tiger Prawns have been at record highs in recent years. This appears to be at least partially attributable to the tiger prawn stock having recovered from excessive effort in previous decades. The total annual value of the Moreton Bay trawl fishery catch, including byproduct, is about $5 million, of which Brown Tiger Prawns account for about $2 million.
Eastern King Prawns make up about 10% of the catch and are mainly caught in the bay from October to December as they migrate to offshore waters outside the bay, where they contribute to a large mono-specific trawl fishery. Some of the Eastern King Prawns harvested in Moreton Bay may be growth overfished (i.e., caught below the size required to maximise yield or value), although the optimum size-at-capture was not determined in this study. Banana Prawns typically make up about 5% of the catch, but can exceed 20%, particularly following heavy rainfall.

Economic analysis of the fishery

From the economic survey, cash profits were, on average, positive for both fleet segments in both years of the survey. However, after the opportunity cost of capital and depreciation were taken into account, the residual owner-operator income was relatively low, and substantially lower than the average share of revenue paid to employed skippers. Consequently, owner-operators were earning less than the opportunity cost of their labour, suggesting that the fleets were economically unviable in the longer term. The M2 licensed fleet were, on average, earning boat cash profits similar to the T1/M1 fleet, although after the higher capital costs were accounted for, the T1/M1 boats were earning substantially lower returns to owner-operator labour. The mean technical efficiency for the fleet as a whole was estimated to be 0.67. That is, on average, the boats were catching only 67 per cent of what was possible given their level of inputs (hours fished and hull units). Almost one-quarter of observations had efficiency scores above 0.8, suggesting that a substantial proportion of the fleet is relatively efficient, but some vessels are also relatively inefficient. Both fleets had similar efficiency distributions, with median technical efficiency scores of 0.71 and 0.67 for the M2 and T1/M1 boats respectively.
These scores are reasonably consistent with other studies of prawn trawl fleets in Australia, although higher average efficiency scores were found in the NSW prawn trawl fleet. From the inefficiency model, several factors were found to significantly influence vessel efficiency. These included the number of years of experience as skipper, the number of generations that the skipper's family had been fishing, and the number of years of schooling. Skippers with more schooling were significantly more efficient than skippers with less schooling, consistent with other studies. Skippers who had been fishing longer were, in fact, less efficient than newer skippers. However, this was mitigated in the case of skippers whose family had been involved in fishing for several generations, consistent with other studies and suggesting that skill is passed down within families over successive generations. Both the linear and log-linear regression models of total fishing effort against the marginal profit per hour performed reasonably well, explaining between 70 and 84 per cent of the variation in fishing effort. As the models had different dependent variables (one logged and the other not logged), this is not a good basis for model choice. A better comparator is the square root of the mean square error (SMSE) expressed as a percentage of the mean total effort. On this criterion, both models performed very similarly. The linear model suggests that each additional dollar of average profit per hour in the fishery increases total effort by around 26 hours each month. From the log-linear model, each percentage increase in profit per hour increases total fishing effort by 0.13 per cent. Both models indicate that economic performance is a key driver of fishing effort in the fishery.
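The two effort-response models described above can be sketched with ordinary least squares. The data below are synthetic (the real analysis used the fishery's monthly records), so the fitted coefficients only illustrate the two functional forms, not the report's estimates.

```python
import numpy as np

# Synthetic monthly observations: profit per hour ($/h) and total effort
# (hours/month), generated so the true linear slope is 26 hours per dollar.
rng = np.random.default_rng(0)
profit_per_hour = rng.uniform(10, 60, size=48)
effort = 500 + 26 * profit_per_hour + rng.normal(0, 50, 48)

# Linear model: effort = a + b * profit.
# b is the extra hours of monthly effort per extra dollar of profit/hour.
b_lin, a_lin = np.polyfit(profit_per_hour, effort, 1)

# Log-linear model: log(effort) = a + e * log(profit).
# e is an elasticity: the % change in effort per 1% change in profit/hour.
e_log, a_log = np.polyfit(np.log(profit_per_hour), np.log(effort), 1)
```

As the report notes, R-squared values from the two forms are not directly comparable because the dependent variables differ; comparing root-mean-square error as a percentage of mean effort, as done above, is the fairer test.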
The effect of removing the boat-replacement policy is to increase individual vessel profitability, catch and effort, but the overall increase in catch is less than that removed by the boats that must exit the fishery. That is, the smaller fleet (in terms of boat numbers) is more profitable, but the overall catch is not expected to be greater than before. This assumes, however, that active boats are removed, and that these were taking an average level of catch. If inactive boats are removed, then the catch of the remaining group as a whole could increase by between 14 and 17 per cent, depending on the degree to which costs are reduced with the new boats. This is still substantially lower than historical levels of catch by the fleet.

Fishing power analyses

An analysis of logbook data from 1988 to 2010, and survey information on fishing gear, was performed to estimate the long-term variation in the fleet's ability to catch prawns (known as fishing power) and to derive abundance estimates of the three most commercially important prawn species (i.e., Brown Tiger, Eastern King and Greasyback Prawns). Generalised linear models were used to explain the variation in catch as a function of effort (i.e., hours fished per day), vessel and gear characteristics, onboard technologies, population abundance and environmental factors. This analysis estimated that fishing power associated with Brown Tiger and Eastern King Prawns increased over the past 20 years by 10–30%, and declined by approximately 10% for Greasyback Prawns. The density of tiger prawns was estimated to have almost tripled, from around 0.5 kg per hectare in 1988 to 1.5 kg/ha in 2010. The density of Eastern King Prawns was estimated to have fluctuated between 1 and 2 kg per hectare over this time period, without any noticeable overall trend, while Greasyback Prawn densities were estimated to have fluctuated between 2 and 6 kg per hectare, also without any distinctive trend.
A model of tiger prawn catches was developed to evaluate the impact of fishing on prawn survival rates in Moreton Bay. The model was fitted to logbook data using the maximum-likelihood method to provide estimates of the natural mortality rate (between 0.038 and 0.062 per week) and catchability (which can be defined as the proportion of the fished population removed by one unit of effort; in this case, estimated to be (2.5 ± 0.4) × 10⁻⁴ per boat-day). This approach provided a method for industry and scientists to jointly develop a realistic model of the dynamics of the fishery. Several aspects need to be developed further to make this model acceptable to industry. Firstly, there is considerable evidence to suggest that temperature influences prawn catchability. This ecological effect should be incorporated before developing meaningful harvest strategies. Secondly, total effort has to be allocated between the species. Such allocation of effort could be included in the model by estimating several catchability coefficients. Nevertheless, the work presented in this report is a stepping stone towards estimating essential fishery parameters and developing the representative mathematical models required to evaluate harvest strategies. Developing a method that allowed an effective discussion between industry, management and scientists took longer than anticipated. As a result, harvest strategy evaluations were preliminary and only included the most valuable species in the fishery, Brown Tiger Prawns. Additional analyses and data collection, including information on catch composition from field sampling, migration rates and recruitment, would improve the modelling.

Harvest strategy evaluations

As the harvest strategy evaluations are preliminary, the following results should not be adopted for management purposes until more thorough evaluations are performed.
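The fitted dynamics described above (natural mortality plus fishing mortality proportional to effort) can be sketched as a weekly depletion model with the Baranov catch equation. The mortality and catchability values are taken from the report's estimated ranges; the initial abundance and the effort series are invented for the illustration.

```python
import math

M = 0.05            # natural mortality per week (within the reported 0.038–0.062)
q = 2.5e-4          # catchability: proportion of stock removed per boat-day
N = 1_000_000       # relative abundance at the start of the season (invented)
effort = [400, 350, 300, 250, 200, 150]  # boat-days fished each week (invented)

catches = []
for E in effort:
    F = q * E                                # fishing mortality this week
    Z = M + F                                # total mortality
    # Baranov catch equation: fishing's share of the individuals that die
    catch = N * (F / Z) * (1 - math.exp(-Z))
    catches.append(catch)
    N *= math.exp(-Z)                        # survivors carry over to next week
```

Running the loop for a season shows why the report's temporal closures matter: removing a high-effort week early in the season leaves more survivors to be caught (at larger sizes) in later weeks.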
The effects of closing the fishery for one calendar month on the annual catch and value of Brown Tiger Prawns were investigated. Each of the 12 months (i.e., January to December) was evaluated. The results were compared against historical records to determine the magnitude of gain or loss associated with the closure. Uncertainty regarding trawl selectivity was addressed using two selectivity curves: one with a weight at 50% selection (S50%) of 7 g, based on research data, and a second with an S50% of 14 g, put forward by industry. In both cases, it was concluded that any monthly closure after February would not be beneficial to the industry. The magnitude of the benefit of closing the fishery in either January or February was sensitive to which mesh selectivity curve was assumed, with greater benefit achieved under the smaller selectivity curve (i.e., S50% = 7 g). Using the smaller selectivity (S50% = 7 g), the expected increase in catch value was 10–20%, which equates to $200,000 to $400,000 annually, while the larger selectivity curve (S50% = 14 g) suggested catch value would be improved by 5–10%, or $100,000 to $200,000. The harvest strategy evaluations showed that greater benefits, in the order of 30–60% increases in the annual tiger prawn catch value, could have been obtained by closing the fishery early in the year when annual effort levels were high (i.e., > 10,000 boat-days). In recent years, as effort levels have declined (i.e., ~4000 boat-days annually), the expected benefits from such closures are more modest. In essence, temporal closures offer greater benefit when fishing mortality rates are high. A spatial analysis of Brown Tiger Prawn catch and effort was also undertaken to obtain a better understanding of the prawn population dynamics. This indicated that, to improve profitability of the fishery, fishers could consider closing the fishery in the period from June to October, which is already a period of low profitability.
This would protect the Brown Tiger Prawn spawning stock, increase catch rates of all species in the lucrative pre-Christmas period (November–December), and provide fishers with time to do vessel maintenance, arrange markets for the next season's harvest and, if they wish, work at other jobs. The analysis found that the instantaneous rate of total mortality (Z) for the March–June period did not vary significantly over the last two decades. As the Brown Tiger Prawn population in Moreton Bay has clearly increased over this time period, an interesting conclusion is that the instantaneous rate of natural mortality (M) must have increased, suggesting that tiger prawn natural mortality may be density-dependent at this time of year. Mortality rates of tiger prawns for June–October were found to have decreased over the last two decades, which has probably had a positive effect on spawning stocks in the October–November spawning period.

Abiotic effects on the prawns

The influences of air temperature, rainfall, freshwater flow, the Southern Oscillation Index (SOI) and lunar phase on the catch rates of the four main prawn species were investigated. The analyses were based on over 200,000 daily logbook catch records spanning 23 years (i.e., 1988–2010). Freshwater flow was more influential than rainfall and the SOI, and of the various sources of flow, the Brisbane River has the greatest volume and influence on Moreton Bay prawn catches. A number of time-lags were also considered. Flow in the month preceding catch (i.e., 30 days prior, Logflow1_30) and two months prior (31–60 days prior, Logflow31_60) had strong positive effects on Banana Prawn catch rates. Average air temperature in the preceding 4–6 months (Temp121_180) also had a large positive effect on Banana Prawn catch rates. Flow in the month immediately preceding catch (Logflow1_30) had a strong positive influence on Greasyback Prawn catch rates.
Air temperature in the two months preceding catch (Temp1_60) had a large positive effect on Brown Tiger Prawn catch rates. No obvious or marked effects were detected for Eastern King Prawns, although, interestingly, catch rates declined with increasing air temperature 4–6 months prior to catch. As most Eastern King Prawn catches in Moreton Bay occur from October to December, the results suggest catch rates decline with increasing winter temperatures. In most cases, prawn catch rates declined with the waxing lunar phase (high luminance/full moon) and increased with the waning moon (low luminance/new moon). The SOI explained little additional variation in prawn catch rates (<2%), although its influence was higher for Banana Prawns. Extrapolating the findings of these analyses to long-term climate change effects should be done with caution. That said, the results are consistent with likely increases in abundance in the region for the two tropical species, Banana Prawns and Brown Tiger Prawns, as coastal temperatures rise. Conversely, declines in abundance could be expected for the two temperate species, Greasyback and Eastern King Prawns.

Corporate management structures

An examination of alternative governance systems was requested by the industry at one of the early meetings, particularly systems that may give fishers greater autonomy in decision making as well as help improve the marketing of their product. Consequently, a review of alternative management systems was undertaken, with a particular focus on the potential for self-management of small fisheries (small in terms of number of participants) and corporate management. The review examines systems that have been implemented or proposed for other small fisheries internationally, with a particular focus on self-management as well as the potential benefits and challenges of corporate management. The review also highlighted particular opportunities for the Moreton Bay prawn fishery.
Corporate management differs from other co-management and even self-management arrangements in that 'ownership' of the fishery is devolved to a company in which fishers and government are shareholders. The company manages the fishery and coordinates marketing to ensure that the best prices are received and that the catch taken meets the demands of the market. Coordinated harvesting should also result in increased profits, which are returned to fishers in the form of dividends. Corporate management offers many of the potential benefits of an individual quota system without formally implementing such a system. A corporate management model offers an advantage over a self-management model in that it can coordinate both marketing and management to exploit the fishery's geographic position. For such a system to be successful, the fishery needs to be relatively small (in terms of number of operators) and self-contained. The Moreton Bay prawn fishery satisfies these key conditions for successful self-management and, potentially, corporate management. The fishery is small both in number of participants and in geography. Unlike other fisheries that have progressed down the self-management route, the key market for the product of the Moreton Bay fishery is right at its doorstep. Corporate management also presents a number of challenges. First, it will require changes in the way fishers operate. In particular, the decisions on when to fish and what to catch will be taken away from the individual and made by the collective. Problems will develop if individuals do not join the corporation but continue to fish and market their own product separately. While this may seem an attractive option to fishers who believe they can do better independently, it is likely to be only a short-term advantage with an overall long-run cost to themselves as well as the rest of the industry.
There are also a number of other areas that need further consideration, particularly in relation to the allocation of shares, including who should be allocated shares (e.g. just boat owners, or also some employed skippers), and how harvesting activity is to be allocated by the corporation to the fishers. These issues cannot be resolved without substantial consultation with those likely to be affected, and these groups cannot give them serious consideration until they are likely to become a reality. Given the current structure and complexity of the fishery, it is unlikely that such a management structure will be feasible in the short term. However, the fishery is a prime candidate for such a model, and development of such a management structure should be considered as an option for the longer term.

Resumo:

Event-based systems are seen as good candidates for supporting distributed applications in dynamic and ubiquitous environments because they support decoupled and asynchronous many-to-many information dissemination. Event systems are widely used, because asynchronous messaging provides a flexible alternative to RPC (Remote Procedure Call). They are typically implemented using an overlay network of routers. A content-based router forwards event messages based on filters that are installed by subscribers and other routers. The filters are organized into a routing table in order to forward incoming events to proper subscribers and neighbouring routers. This thesis addresses the optimization of content-based routing tables organized using the covering relation and presents novel data structures and configurations for improving local and distributed operation. Data structures are needed for organizing filters into a routing table that supports efficient matching and runtime operation. We present novel results on dynamic filter merging and the integration of filter merging with content-based routing tables. In addition, the thesis examines the cost of client mobility using different protocols and routing topologies. We also present a new matching technique called temporal subspace matching. The technique combines two new features. The first feature, temporal operation, supports notifications, or content profiles, that persist in time. The second feature, subspace matching, allows more expressive semantics, because notifications may contain intervals and be defined as subspaces of the content space. We also present an application of temporal subspace matching pertaining to metadata-based continuous collection and object tracking.
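The covering relation described above can be illustrated with a small sketch. The Python below (names and data structures are our own, not the thesis's actual implementation) models filters as attribute-to-interval maps and builds a routing table that keeps only the most general filters, since forwarding a filter that an existing filter already covers is redundant:

```python
# Minimal sketch of a content-based routing table organized by the covering
# relation. A filter maps attributes to closed intervals; filter `a` covers
# filter `b` if every event matched by `b` is also matched by `a`.

def covers(a, b):
    """True if filter `a` covers filter `b`."""
    for attr, (lo, hi) in a.items():
        if attr not in b:
            return False  # b is unconstrained on attr, so b matches more
        blo, bhi = b[attr]
        if not (lo <= blo and bhi <= hi):
            return False
    return True

def matches(f, event):
    """True if `event` satisfies every interval constraint in filter `f`."""
    return all(attr in event and lo <= event[attr] <= hi
               for attr, (lo, hi) in f.items())

class RoutingTable:
    """Keeps only the most general filters: covered filters are redundant."""
    def __init__(self):
        self.filters = []

    def add(self, f):
        if any(covers(g, f) for g in self.filters):
            return False          # f is already covered; nothing to forward
        # f subsumes any existing filters it covers, so drop them.
        self.filters = [g for g in self.filters if not covers(f, g)]
        self.filters.append(f)
        return True

    def route(self, event):
        return [f for f in self.filters if matches(f, event)]
```

Keeping only non-covered filters is what shrinks the routing table; dynamic filter merging, as studied in the thesis, goes further by replacing several filters with a single covering filter.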

Resumo:

The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements, like gene expression, protein expression, and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach for exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as matching metabolites between humans and mice and matching sentences between documents in two languages.
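As a toy illustration of the multi-view matching idea (this is not the thesis's algorithm, only the problem it solves): given two views whose sample order is unknown, infer a one-to-one correspondence in a data-driven way. The sketch below standardizes each view and exhaustively picks the permutation minimizing the total squared distance, which is feasible only for tiny sample counts:

```python
# Toy multi-view matching: find the one-to-one correspondence between
# samples of two views X and Y by brute force over permutations.
from itertools import permutations

def zscore(rows):
    """Standardize each feature (column) of a view to mean 0, sd 1."""
    cols = list(zip(*rows))
    means = [sum(c) / len(c) for c in cols]
    stds = [(sum((v - m) ** 2 for v in c) / len(c)) ** 0.5 or 1.0
            for c, m in zip(cols, means)]
    return [[(v - m) / s for v, m, s in zip(row, means, stds)]
            for row in rows]

def match_views(X, Y):
    """Return permutation p minimizing sum_i ||X[i] - Y[p[i]]||^2."""
    Xs, Ys = zscore(X), zscore(Y)
    def cost(p):
        return sum(sum((a - b) ** 2 for a, b in zip(Xs[i], Ys[j]))
                   for i, j in enumerate(p))
    return min(permutations(range(len(Y))), key=cost)
```

A realistic matcher replaces the exhaustive search with an assignment-problem solver and the Euclidean cost with a model-based similarity, but the data-driven principle is the same.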

Resumo:

We consider a variant of the popular matching problem here. The input instance is a bipartite graph $G=(\mathcal{A}\cup\mathcal{P},E)$, where vertices in $\mathcal{A}$ are called applicants and vertices in $\mathcal{P}$ are called posts. Each applicant ranks a subset of posts in an order of preference, possibly involving ties. A matching $M$ is popular if there is no other matching $M'$ such that the number of applicants who prefer their partners in $M'$ to $M$ exceeds the number of applicants who prefer their partners in $M$ to $M'$. However, the “more popular than” relation is not transitive; hence this relation is not a partial order, and thus there need not be a maximal element here. Indeed, there are simple instances that do not admit popular matchings. The questions of whether an input instance $G$ admits a popular matching and how to compute one if it exists were studied earlier by Abraham et al. Here we study reachability questions among matchings in $G$, assuming that $G=(\mathcal{A}\cup\mathcal{P},E)$ admits a popular matching. A matching $M_k$ is reachable from $M_0$ if there is a sequence of matchings $\langle M_0,M_1,\dots,M_k\rangle$ such that each matching is more popular than its predecessor. Such a sequence is called a length-$k$ voting path from $M_0$ to $M_k$. We show an interesting property of reachability among matchings in $G$: there is always a voting path of length at most 2 from any matching to some popular matching. Given a bipartite graph $G=(\mathcal{A}\cup\mathcal{P},E)$ with $n$ vertices and $m$ edges and any matching $M_0$ in $G$, we give an $O(m\sqrt{n})$ algorithm to compute a shortest-length voting path from $M_0$ to a popular matching; when preference lists are strictly ordered, we have an $O(m+n)$ algorithm. This problem has applications in dynamic matching markets, where applicants and posts can enter and leave the market, and applicants can also change their preferences arbitrarily. 
After any change, the current matching may no longer be popular, in which case we are required to update it. However, our model demands that we switch from one matching to another only if there is consensus among the applicants to agree to the switch. Hence we need to update via a voting path that ends in a popular matching. Thus our algorithm has applications here.
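The "more popular than" comparison at the heart of this problem is easy to state in code. The sketch below (illustrative names only) counts, for each applicant, whether their partner in one matching outranks their partner in the other; an unmatched applicant ranks any post above being unmatched:

```python
# Vote-counting comparison between two matchings M1 and M2.
# prefs[a] maps each acceptable post to its rank (lower is better).
INF = float("inf")

def rank(prefs, applicant, post):
    # Being unmatched (post is None) is worse than any ranked post.
    return prefs[applicant].get(post, INF) if post is not None else INF

def more_popular(M1, M2, prefs):
    """True if strictly more applicants prefer M1 to M2 than vice versa."""
    for_m1 = sum(rank(prefs, a, M1.get(a)) < rank(prefs, a, M2.get(a))
                 for a in prefs)
    for_m2 = sum(rank(prefs, a, M2.get(a)) < rank(prefs, a, M1.get(a))
                 for a in prefs)
    return for_m1 > for_m2
```

Note that, exactly as the abstract warns, this relation is not transitive, so iterating it need not terminate at a maximal element; the voting-path result shows a popular matching is nonetheless always at most two such steps away.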

Resumo:

Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck (2007)). It seeks to understand the sources of value and reasons for trade in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study. These include continuous markets, auctions, limit order books, and dealer markets, as well as combinations of these operating as a hybrid market. Microstructure also has to allow for the possibility of multiple prices. At any given time an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed for the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in trading and pricing securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are simultaneously traded in more than one market. Third, I make a contribution to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout my thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just the daily closing prices as in the traditional asset pricing literature. This thesis consists of four separate essays. In the first essay I study price discovery for European companies cross-listed in the United States.
I also study explanatory variables for differences in price discovery. In my second essay I contribute to earlier research on two issues of broad interest in market microstructure: market transparency and informed trading. I examine the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. I broaden my focus slightly in the third essay, to include releases of macroeconomic data in the United States. I analyze the effect of these releases on European cross-listed stocks. The fourth and last essay examines the uses of standard methodologies of price discovery analysis in a novel way. Specifically, I study price discovery within one market, between local and foreign traders.
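The "multitude of different prices" point is easy to make concrete with a toy limit order book (the class and method names below are illustrative, not taken from the thesis): the price a trader faces depends on the side of the trade and on the quantity, because a large order walks through several price levels.

```python
# Toy limit order book: different effective prices for buying vs selling
# and for different quantities.
import bisect

class OrderBook:
    def __init__(self):
        self.asks = []  # (price, qty), ascending: best ask first
        self.bids = []  # (-price, qty): ascending sort puts best bid first

    def add_ask(self, price, qty):
        bisect.insort(self.asks, (price, qty))

    def add_bid(self, price, qty):
        bisect.insort(self.bids, (-price, qty))

    def buy_cost(self, qty):
        """Total cost to buy `qty`, walking up the ask side."""
        cost, need = 0.0, qty
        for price, avail in self.asks:
            take = min(need, avail)
            cost += take * price
            need -= take
            if need == 0:
                return cost
        raise ValueError("not enough depth")

    def sell_proceeds(self, qty):
        """Total proceeds from selling `qty`, walking down the bid side."""
        proceeds, need = 0.0, qty
        for neg_price, avail in self.bids:
            take = min(need, avail)
            proceeds += take * -neg_price
            need -= take
            if need == 0:
                return proceeds
        raise ValueError("not enough depth")
```

With asks at 10.0 (5 units) and 10.5 (10 units), buying 5 units costs 10.0 each, but buying 10 units averages 10.25: the marginal price rises with quantity, and the sell side gives yet another set of prices.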

Resumo:

During the last few decades there has been far-reaching financial market deregulation, technical development, advances in information technology, and standardization of legislation between countries. As a result, one can expect that financial markets have grown more interlinked. A proper understanding of cross-market linkages has implications for investment and risk management, diversification, asset pricing, and regulation. The purpose of this research is to assess the degree of price, return, and volatility linkages both between geographic markets and between asset categories within one country, Finland. Another purpose is to analyze risk asymmetries, i.e., the tendency of equity risk to be higher after negative events than after positive events of equal magnitude. The analysis is conducted with respect to both total risk (volatility) and systematic risk (beta). The thesis consists of an introductory part and four essays. The first essay studies to what extent international stock prices comove. The degree of comovement is low, indicating benefits from international diversification. The second essay examines the degree to which the Finnish market is linked to the “world market”. Total risk is divided into two parts, one relating to world factors and one relating to domestic factors. The impact of world factors has increased over time. After 1993, when foreign investors were allowed to invest freely in Finnish assets, the risk level has been higher than previously; this was also the case during the economic recession at the beginning of the 1990s. The third essay focuses on the stock, bond, and money markets in Finland. According to a trading model, the volatility linkages should be strong. However, the results contradict this: the linkages are surprisingly weak, even negative. The stock market is the most independent, while the money market is affected by events on the two other markets.
The fourth essay concentrates on volatility and beta asymmetries. Contrary to many international studies, there are only a few cases of risk asymmetries. When they occur, they tend to be driven by the market-wide component rather than the portfolio-specific element.
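A beta asymmetry of the kind tested in the fourth essay can be sketched in a few lines: estimate the asset's beta separately on up-market and down-market days and compare. The data below is synthetic and constructed so the asset is deliberately riskier in down markets (the thesis, of course, estimates this on real Finnish data with formal tests):

```python
# Conditional beta estimation: beta = cov(asset, market) / var(market),
# computed separately on up- and down-market observations.

def beta(asset, market):
    n = len(market)
    ma, mm = sum(asset) / n, sum(market) / n
    cov = sum((a - ma) * (m - mm) for a, m in zip(asset, market)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    return cov / var

market = [0.010, -0.020, 0.030, -0.010, 0.020, -0.030]
# Synthetic asset: defensive (beta 0.5) when the market rises,
# risky (beta 1.5) when it falls.
asset = [0.5 * r if r > 0 else 1.5 * r for r in market]

up = [(a, m) for a, m in zip(asset, market) if m > 0]
down = [(a, m) for a, m in zip(asset, market) if m < 0]
beta_up = beta([a for a, _ in up], [m for _, m in up])
beta_down = beta([a for a, _ in down], [m for _, m in down])
```

Here beta_down exceeds beta_up, which is the signature of the risk asymmetry the essay looks for.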

Resumo:

The aim of this dissertation is to model economic variables with a mixture autoregressive (MAR) model. The MAR model is a generalization of the linear autoregressive (AR) model and consists of K linear autoregressive components. At any given point in time, one of these components is randomly selected to generate a new observation for the time series. The mixture probability can be constant over time or a direct function of some observable variable. Many economic time series have properties that cannot be described by linear and stationary time series models, and a nonlinear autoregressive model such as the MAR model can be a plausible alternative for such series. In this dissertation the MAR model is used to model stock market bubbles and the relationship between inflation and the interest rate. In the case of the inflation rate we arrive at a MAR model in which the inflation process is less mean-reverting under high inflation than under normal inflation, while the interest rate moves one-for-one with expected inflation. We use data from the Livingston survey as a proxy for inflation expectations and find that survey inflation expectations are not perfectly rational. According to our results, information stickiness plays an important role in expectation formation. We also find that survey participants tend to underestimate inflation. A MAR model is also used to model stock market bubbles and crashes. This model has two regimes: a bubble regime and an error-correction regime. In the error-correction regime the price depends on a fundamental factor, the price-dividend ratio, whereas in the bubble regime the price is independent of fundamentals. In this model a stock market crash is usually caused by a regime switch from the bubble regime to the error-correction regime. According to our empirical results, bubbles are related to low inflation.
Our model also implies that bubbles influence the investment return distribution in both the short and the long run.
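The mechanics of a MAR process are simple to simulate. The sketch below (parameter values are illustrative only, not the dissertation's estimates) draws, at each step, one of two AR(1) regimes in the spirit of the bubble model: a mean-reverting error-correction regime and an explosive bubble regime with an autoregressive coefficient above one:

```python
# Simulate a K-component mixture autoregressive (MAR) process:
# at each step one AR(1) regime is selected at random with fixed
# mixture probabilities, then generates the next observation.
import random

def simulate_mar(components, probs, n, y0=0.0, seed=7):
    """components: list of (intercept, ar_coef, noise_sd) AR(1) regimes."""
    rng = random.Random(seed)
    y = [y0]
    regimes = []
    for _ in range(n):
        k = rng.choices(range(len(components)), weights=probs)[0]
        c, phi, sigma = components[k]
        y.append(c + phi * y[-1] + rng.gauss(0.0, sigma))
        regimes.append(k)
    return y, regimes

# Regime 0: error correction (mean-reverting, phi < 1);
# regime 1: bubble (phi > 1, price drifts away from fundamentals).
path, regimes = simulate_mar([(0.0, 0.5, 0.1), (0.05, 1.03, 0.1)],
                             probs=[0.9, 0.1], n=200)
```

A time-varying mixture probability, e.g. one depending on the price-dividend ratio, would replace the fixed `probs` with a function of the current state.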

Resumo:

In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) charges prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput and in terms of profit to the ISP.
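A crude discretization of the selfish dynamics (our own simplification, not the paper's fluid model or pricing scheme) already shows the flavor of the efficiency result: if traffic arrives in small units and each unit greedily joins the AP offering it the highest per-unit throughput under equal sharing, the loads converge to a capacity-proportional split, which here is also the throughput-maximizing allocation:

```python
# Greedy best-response traffic splitting across APs. AP j with capacity
# C_j shared equally among n_j units gives each unit C_j / n_j, so a new
# unit joins the AP maximizing C_j / (n_j + 1).

def selfish_split(capacities, units):
    counts = [0] * len(capacities)
    for _ in range(units):
        best = max(range(len(capacities)),
                   key=lambda j: capacities[j] / (counts[j] + 1))
        counts[best] += 1
    return counts
```

With capacities 30 and 60 and 90 units of traffic, the selfish split settles at 30 and 60 units: no unit can improve by moving, and total throughput is maximal. The paper's contribution is showing that suitable prices preserve this alignment of selfish and socially optimal behaviour in the full model.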