
The LMAX disruptor Architecture (reposted)

Published: 2025/4/5

Original article address:

LMAX is a new retail financial trading platform. As a result it has to process many trades with low latency. The system is built on the JVM platform and centers on a Business Logic Processor that can handle 6 million orders per second on a single thread. The Business Logic Processor runs entirely in-memory using event sourcing. The Business Logic Processor is surrounded by Disruptors - a concurrency component that implements a network of queues that operate without needing locks. During the design process the team concluded that recent directions in high-performance concurrency models using queues are fundamentally at odds with modern CPU design.

Over the last few years we keep hearing that "the free lunch is over"[1] - we can't expect increases in individual CPU speed. So to write fast code we need to explicitly use multiple processors with concurrent software. This is not good news - writing concurrent code is very hard. Locks and semaphores are hard to reason about and hard to test - meaning we are spending more time worrying about satisfying the computer than we are solving the domain problem. Various concurrency models, such as Actors and Software Transactional Memory, aim to make this easier - but there is still a burden that introduces bugs and complexity.

So I was fascinated to hear about a talk at QCon London in March last year from LMAX. LMAX is a new retail financial trading platform. Its business innovation is that it is a retail platform - allowing anyone to trade in a range of financial derivative products[2]. A trading platform like this needs very low latency - trades have to be processed quickly because the market is moving rapidly. A retail platform adds complexity because it has to do this for lots of people. So the result is more users, with lots of trades, all of which need to be processed quickly.[3]

Given the shift to multi-core thinking, this kind of demanding performance would naturally suggest an explicitly concurrent programming model - and indeed this was their starting point. But the thing that got people's attention at QCon was that this wasn't where they ended up. In fact they ended up by doing all the business logic for their platform: all trades, from all customers, in all markets - on a single thread. A thread that will process 6 million orders per second using commodity hardware.[4]

Processing lots of transactions with low-latency and none of the complexities of concurrent code - how can I resist digging into that? Fortunately another difference LMAX has to other financial companies is that they are quite happy to talk about their technological decisions. So now that LMAX has been in production for a while, it's time to explore their fascinating design.


Overall Structure

Figure 1: LMAX's architecture in three blobs

At a top level, the architecture has three parts:

  • business logic processor[5]
  • input disruptor
  • output disruptors

As its name implies, the business logic processor handles all the business logic in the application. As I indicated above, it does this as a single-threaded java program which reacts to method calls and produces output events. Consequently it's a simple java program that doesn't require any platform frameworks to run other than the JVM itself, which allows it to be easily run in test environments.

Although the Business Logic Processor can run in a simple environment for testing, there is rather more involved choreography to get it to run in a production setting. Input messages need to be taken off a network gateway and unmarshaled, replicated and journaled. Output messages need to be marshaled for the network. These tasks are handled by the input and output disruptors. Unlike the Business Logic Processor, these are concurrent components, since they involve IO operations which are both slow and independent. They were designed and built especially for LMAX, but they (like the overall architecture) are applicable elsewhere.


Business Logic Processor

Keeping it all in memory

The Business Logic Processor takes input messages sequentially (in the form of a method invocation), runs business logic on it, and emits output events. It operates entirely in-memory, there is no database or other persistent store. Keeping all data in-memory has two important benefits. Firstly it's fast - there's no database to provide slow IO to access, nor is there any transactional behavior to execute since all the processing is done sequentially. The second advantage is that it simplifies programming - there's no object/relational mapping to do. All the code can be written using Java's object model without having to make any compromises for the mapping to a database.

Using an in-memory structure has an important consequence - what happens if everything crashes? Even the most resilient systems are vulnerable to someone pulling the power. The heart of dealing with this is Event Sourcing - which means that the current state of the Business Logic Processor is entirely derivable by processing the input events. As long as the input event stream is kept in a durable store (which is one of the jobs of the input disruptor) you can always recreate the current state of the business logic engine by replaying the events.

A good way to understand this is to think of a version control system. Version control systems are a sequence of commits; at any time you can build a working copy by applying those commits. VCSs are more complicated than the Business Logic Processor because they must support branching, while the Business Logic Processor is a simple sequence.

So, in theory, you can always rebuild the state of the Business Logic Processor by reprocessing all the events. In practice, however, that would take too long should you need to spin one up. So, just as with version control systems, LMAX can make snapshots of the Business Logic Processor state and restore from the snapshots. They take a snapshot every night during periods of low activity. Restarting the Business Logic Processor is fast: a full restart - including restarting the JVM, loading a recent snapshot, and replaying a day's worth of journals - takes less than a minute.
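The snapshot-plus-replay recovery described above can be sketched in a few lines. This is a minimal illustration, not LMAX's code: the `EventSourcedCounter` class, its `{sequence, delta}` event encoding, and the snapshot format are all hypothetical.

```java
import java.util.List;

// Minimal event-sourcing sketch: state lives only in memory and is rebuilt
// by replaying journaled events, optionally starting from a snapshot.
// Events are encoded as {sequence, delta} long pairs purely for illustration.
public class EventSourcedCounter {
    private long balance;           // the in-memory state
    private long lastApplied = -1;  // sequence number of the last applied event

    public void apply(long seq, long delta) {
        balance += delta;
        lastApplied = seq;
    }

    // Restore from a snapshot (state + sequence), then replay the journal tail.
    public static EventSourcedCounter restore(long snapshotBalance, long snapshotSeq,
                                              List<long[]> journal) {
        EventSourcedCounter p = new EventSourcedCounter();
        p.balance = snapshotBalance;
        p.lastApplied = snapshotSeq;
        for (long[] e : journal) {
            if (e[0] > snapshotSeq) p.apply(e[0], e[1]); // skip events already in the snapshot
        }
        return p;
    }

    public long balance() { return balance; }
}
```

Replaying the full journal from an empty state and replaying only the tail after a snapshot must, by construction, arrive at the same state - which is exactly what makes the nightly snapshot a pure optimization.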

Snapshots make starting up a new Business Logic Processor faster, but not quickly enough should a Business Logic Processor crash at 2pm. As a result LMAX keeps multiple Business Logic Processors running all the time[6]. Each input event is processed by multiple processors, but all but one processor has its output ignored. Should the live processor fail, the system switches to another one. This ability to handle fail-over is another benefit of using Event Sourcing.

By event sourcing into replicas they can switch between processors in a matter of micro-seconds. As well as taking snapshots every night, they also restart the Business Logic Processors every night. The replication allows them to do this with no downtime, so they continue to process trades 24/7.

For more background on Event Sourcing, see the draft pattern on my site from a few years ago. The article is more focused on handling temporal relationships than on the benefits that LMAX uses, but it does explain the core idea.

Event Sourcing is valuable because it allows the processor to run entirely in-memory, but it has another considerable advantage for diagnostics. If some unexpected behavior occurs, the team copies the sequence of events to their development environment and replays them there. This allows them to examine what happened much more easily than is possible in most environments.

This diagnostic capability extends to business diagnostics. There are some business tasks, such as in risk management, that require significant computation that isn't needed for processing orders. An example is getting a list of the top 20 customers by risk profile based on their current trading positions. The team handles this by spinning up a replicate domain model and carrying out the computation there, where it won't interfere with the core order processing. These analysis domain models can have variant data models, keep different data sets in memory, and run on different machines.

Tuning performance

So far I've explained that the key to the speed of the Business Logic Processor is doing everything sequentially, in-memory. Just doing this (and nothing really stupid) allows developers to write code that can process 10K TPS[7]. They then found that concentrating on the simple elements of good code could bring this up into the 100K TPS range. This just needs well-factored code and small methods - essentially this allows Hotspot to do a better job of optimizing and for CPUs to be more efficient in caching the code as it's running.

It took a bit more cleverness to go up another order of magnitude. There are several things that the LMAX team found helpful to get there. One was to write custom implementations of the java collections that were designed to be cache-friendly and careful with garbage[8]. An example of this is using primitive java longs as hashmap keys with a specially written array-backed Map implementation (LongToObjectHashMap). In general they've found that the choice of data structures often makes a big difference. Most programmers just grab whatever List they used last time rather than thinking about which implementation is the right one for this context.[9]
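The article names a `LongToObjectHashMap` but doesn't show it. The following is a hypothetical sketch of the general idea - an open-addressed, array-backed map keyed by primitive `long`s, avoiding the `Long` boxing and per-entry node allocation of `java.util.HashMap`. It is not LMAX's implementation.

```java
// Hypothetical sketch of an array-backed map keyed by primitive longs.
// Open addressing keeps keys and values in flat arrays (cache-friendly, no
// per-entry node objects for the GC). No resizing: the fixed power-of-two
// capacity must stay comfortably above the number of entries.
public class LongToObjectHashMap<V> {
    private final long[] keys;
    private final Object[] values;
    private final boolean[] used;
    private final int mask;

    public LongToObjectHashMap(int capacityPowerOfTwo) {
        keys = new long[capacityPowerOfTwo];
        values = new Object[capacityPowerOfTwo];
        used = new boolean[capacityPowerOfTwo];
        mask = capacityPowerOfTwo - 1;
    }

    private static long mix(long k) { // spread the key's bits before masking
        k ^= k >>> 33;
        k *= 0xff51afd7ed558ccdL;
        return k ^ (k >>> 33);
    }

    private int slot(long key) { return (int) (mix(key) & mask); }

    public void put(long key, V value) {
        int i = slot(key);
        while (used[i] && keys[i] != key) i = (i + 1) & mask; // linear probing
        keys[i] = key;
        values[i] = value;
        used[i] = true;
    }

    @SuppressWarnings("unchecked")
    public V get(long key) {
        int i = slot(key);
        while (used[i]) {
            if (keys[i] == key) return (V) values[i];
            i = (i + 1) & mask;
        }
        return null;
    }
}
```

The flat parallel arrays are the point: a lookup touches a handful of adjacent array cells rather than chasing node pointers scattered across the heap.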

Another technique to reach that top level of performance is putting attention into performance testing. I've long noticed that people talk a lot about techniques to improve performance, but the one thing that really makes a difference is to test it. Even good programmers are very good at constructing performance arguments that end up being wrong, so the best programmers prefer profilers and test cases to speculation.[10] The LMAX team has also found that writing tests first is a very effective discipline for performance tests.

Programming Model

This style of processing does introduce some constraints into the way you write and organize the business logic. The first of these is that you have to tease out any interaction with external services. An external service call is going to be slow, and with a single thread will halt the entire order processing machine. As a result you can't make calls to external services within the business logic. Instead you need to finish that interaction with an output event, and wait for another input event to pick it back up again.

I'll use a simple non-LMAX example to illustrate. Imagine you are making an order for jelly beans by credit card. A simple retailing system would take your order information, use a credit card validation service to check your credit card number, and then confirm your order - all within a single operation. The thread processing your order would block while waiting for the credit card to be checked, but that block wouldn't be very long for the user, and the server can always run another thread on the processor while it's waiting.

In the LMAX architecture, you would split this operation into two. The first operation would capture the order information and finish by outputting an event (credit card validation requested) to the credit card company. The Business Logic Processor would then carry on processing events for other customers until it received a credit-card-validated event in its input event stream. On processing that event it would carry out the confirmation tasks for that order.
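The two-step flow above can be sketched as a pair of event handlers. `OrderProcessor`, its method names, and the event strings are illustrative assumptions, not LMAX's API.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the two-step, event-driven order flow described above.
// Class, method, and event names are illustrative, not LMAX's API.
public class OrderProcessor {
    enum Status { AWAITING_CARD_VALIDATION, CONFIRMED }
    private final Map<Long, Status> orders = new HashMap<>();

    // Step 1: capture the order, emit a validation-request event, move on.
    public String onNewOrder(long orderId) {
        orders.put(orderId, Status.AWAITING_CARD_VALIDATION);
        return "credit-card-validation-requested:" + orderId; // output event
    }

    // Step 2: the validated event later arrives in the input stream.
    public String onCardValidated(long orderId) {
        orders.put(orderId, Status.CONFIRMED);
        return "order-confirmed:" + orderId; // output event
    }

    public Status status(long orderId) { return orders.get(orderId); }
}
```

Between the two steps the single thread is free to process any number of other customers' events; the pending order is just a bit of in-memory state waiting for its follow-up event.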

Working in this kind of event-driven, asynchronous style is somewhat unusual - although using asynchrony to improve the responsiveness of an application is a familiar technique. It also helps the business process be more resilient, as you have to be more explicit in thinking about the different things that can happen with the remote application.

A second feature of the programming model lies in error handling. The traditional model of sessions and database transactions provides a helpful error handling capability. Should anything go wrong, it's easy to throw away everything that happened so far in the interaction. Session data is transient, and can be discarded, at the cost of some irritation to the user if in the middle of something complicated. If an error occurs on the database side you can rollback the transaction.

LMAX's in-memory structures are persistent across input events, so if there is an error it's important to not leave that memory in an inconsistent state. However there's no automated rollback facility. As a consequence the LMAX team puts a lot of attention into ensuring the input events are fully valid before doing any mutation of the in-memory persistent state. They have found that testing is a key tool in flushing out these kinds of problems before going into production.
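The validate-before-mutate discipline can be illustrated with a toy example; `Account` and `applyWithdrawal` are hypothetical names, not LMAX's domain model.

```java
// Toy illustration of validate-before-mutate: with no rollback facility,
// an event must be fully checked against current state before any change.
public class Account {
    private long balance;

    public Account(long openingBalance) { balance = openingBalance; }

    public boolean applyWithdrawal(long amount) {
        // Validate everything first; reject without touching state.
        if (amount <= 0 || amount > balance) return false;
        balance -= amount; // mutate only after the event is known to be valid
        return true;
    }

    public long balance() { return balance; }
}
```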


Input and Output Disruptors

Although the business logic occurs in a single thread, there are a number of tasks to be done before we can invoke a business object method. The original input for processing comes off the wire in the form of a message; this message needs to be unmarshaled into a form convenient for the Business Logic Processor to use. Event Sourcing relies on keeping a durable journal of all the input events, so each input message needs to be journaled onto a durable store. Finally the architecture relies on a cluster of Business Logic Processors, so we have to replicate the input messages across this cluster. Similarly on the output side, the output events need to be marshaled for transmission over the network.

Figure 2: The activities done by the input disruptor (using UML activity diagram notation)

The replicator and journaler involve IO and therefore are relatively slow. After all, the central idea of the Business Logic Processor is that it avoids doing any IO. Also these three tasks are relatively independent; all of them need to be done before the Business Logic Processor works on a message, but they can be done in any order. So unlike with the Business Logic Processor, where each trade changes the market for subsequent trades, there is a natural fit for concurrency.

To handle this concurrency the LMAX team developed a special concurrency component, which they call a Disruptor[11].

The LMAX team have released the source code for the Disruptor with an open source licence.

At a crude level you can think of a Disruptor as a multicast graph of queues where producers put objects on it that are sent to all the consumers for parallel consumption through separate downstream queues. When you look inside you see that this network of queues is really a single data structure - a ring buffer. Each producer and consumer has a sequence counter to indicate which slot in the buffer it's currently working on. Each producer/consumer writes its own sequence counter but can read the others' sequence counters. This way the producer can read the consumers' counters to ensure the slot it wants to write in is available without any locks on the counters. Similarly a consumer can ensure it only processes messages once another consumer is done with it by watching the counters.
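A greatly simplified sketch of the idea, assuming one producer and one consumer: each side advances only its own sequence counter and reads the other's to decide whether it may proceed. The real Disruptor supports multiple consumers and uses carefully placed memory barriers; this toy version leans on `volatile` and omits those details.

```java
// Greatly simplified single-producer/single-consumer ring buffer sketch.
// Each side advances only its own sequence counter and reads the other's;
// the real Disruptor adds memory barriers, batching, and multi-consumer
// coordination that this toy version omits.
public class SimpleRingBuffer {
    private final long[] slots;
    private final int mask;                  // size is a power of two
    private volatile long producerSeq = -1;  // last slot written
    private volatile long consumerSeq = -1;  // last slot read

    public SimpleRingBuffer(int sizePowerOfTwo) {
        slots = new long[sizePowerOfTwo];
        mask = sizePowerOfTwo - 1;
    }

    // Producer: claim the next slot only if it won't overwrite unread data.
    public boolean tryPublish(long value) {
        long next = producerSeq + 1;
        if (next - consumerSeq > slots.length) return false; // buffer full
        slots[(int) (next & mask)] = value;
        producerSeq = next; // publish by advancing our own counter
        return true;
    }

    // Consumer: read the next slot only once the producer has published it.
    public Long tryConsume() {
        long next = consumerSeq + 1;
        if (next > producerSeq) return null; // nothing new yet
        long value = slots[(int) (next & mask)];
        consumerSeq = next;
        return value;
    }
}
```

Note that neither method takes a lock: each counter has exactly one writer, and the other side only ever reads it.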

Figure 3: The input disruptor coordinates one producer and four consumers

Output disruptors are similar but they only have two sequential consumers for marshaling and output.[12] Output events are organized into several topics, so that messages can be sent to only the receivers who are interested in them. Each topic has its own disruptor.

The disruptors I've described are used in a style with one producer and multiple consumers, but this isn't a limitation of the design of the disruptor. The disruptor can work with multiple producers too, in this case it still doesn't need locks.[13]

A benefit of the disruptor design is that it makes it easier for consumers to catch up quickly if they run into a problem and fall behind. If the unmarshaler has a problem when processing on slot 15 and returns when the receiver is on slot 31, it can read data from slots 16-30 in one batch to catch up. This batch read of the data from the disruptor makes it easier for lagging consumers to catch up quickly, thus reducing overall latency.
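The batch catch-up amounts to one loop over every published-but-unprocessed slot. The `catchUp` helper below is a hypothetical sketch, not part of the Disruptor API.

```java
import java.util.function.LongConsumer;

// Hypothetical sketch of batch catch-up: a lagging consumer processes every
// slot the producer has published in a single pass, instead of a handoff
// per message. The ring length must be a power of two.
public class BatchConsumer {
    public static long catchUp(long[] ring, long consumerSeq, long producerSeq,
                               LongConsumer handler) {
        int mask = ring.length - 1;
        for (long s = consumerSeq + 1; s <= producerSeq; s++) {
            handler.accept(ring[(int) (s & mask)]); // e.g. slots 16-30 in one batch
        }
        return producerSeq; // the consumer's new sequence after the batch
    }
}
```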

I've described things here, with one each of the journaler, replicator, and unmarshaler - this indeed is what LMAX does. But the design would allow multiple of these components to run. If you ran two journalers then one would take the even slots and the other journaler would take the odd slots. This allows further concurrency of these IO operations should this become necessary.

The ring buffers are large: 20 million slots for the input buffer and 4 million slots for each of the output buffers. The sequence counters are 64-bit long integers that increase monotonically even as the ring slots wrap.[14] The buffer is set to a size that's a power of two so the compiler can do an efficient modulus operation to map from the sequence counter number to the slot number. Like the rest of the system, the disruptors are bounced overnight. This bounce is mainly done to wipe memory so that there is less chance of an expensive garbage collection event during trading. (I also think it's a good habit to regularly restart, so that you rehearse how to do it for emergencies.)
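The power-of-two trick means the modulus reduces to a single bit mask; a small sketch makes this concrete (the buffer sizes in the test are illustrative, not LMAX's actual configuration):

```java
// With a power-of-two ring size, mapping a monotonically increasing 64-bit
// sequence counter to a slot index reduces "sequence % size" to a bit mask.
public class SlotMapping {
    public static int slotFor(long sequence, int sizePowerOfTwo) {
        return (int) (sequence & (sizePowerOfTwo - 1)); // same as sequence % size
    }
}
```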

The journaler's job is to store all the events in a durable form, so that they can be replayed should anything go wrong. LMAX does not use a database for this, just the file system. They stream the events onto the disk. In modern terms, mechanical disks are horribly slow for random access, but very fast for streaming - hence the tag-line "disk is the new tape".[15]

Earlier on I mentioned that LMAX runs multiple copies of its system in a cluster to support rapid failover. The replicator keeps these nodes in sync. All communication in LMAX uses IP multicasting, so clients don't need to know which IP address is the master node. Only the master node listens directly to input events and runs a replicator. The replicator broadcasts the input events to the slave nodes. Should the master node go down, its lack of heartbeat will be noticed, another node becomes master, starts processing input events, and starts its replicator. Each node has its own input disruptor and thus has its own journal and does its own unmarshaling.

Even with IP multicasting, replication is still needed because IP messages can arrive in a different order on different nodes. The master node provides a deterministic sequence for the rest of the processing.

The unmarshaler turns the event data from the wire into a java object that can be used to invoke behavior on the Business Logic Processor. Therefore, unlike the other consumers, it needs to modify the data in the ring buffer so it can store this unmarshaled object. The rule here is that consumers are permitted to write to the ring buffer, but each writable field can only have one parallel consumer that's allowed to write to it. This preserves the principle of only having a single writer.[16]

Figure 4: The LMAX architecture with the disruptors expanded

The disruptor is a general purpose component that can be used outside of the LMAX system. Usually financial companies are very secretive about their systems, keeping quiet even about items that aren't germane to their business. Not only has LMAX been open about its overall architecture, they have also open-sourced the disruptor code - an act that makes me very happy. Not only will this allow other organizations to make use of the disruptor, it will also allow for more testing of its concurrency properties.


Queues and their lack of mechanical sympathy

The LMAX architecture caught people's attention because it's a very different way of approaching a high performance system to what most people are thinking about. So far I've talked about how it works, but haven't delved too much into why it was developed this way. This tale is interesting in itself, because this architecture didn't just appear. It took a long time of trying more conventional alternatives, and realizing where they were flawed, before the team settled on this one.

Most business systems these days have a core architecture that relies on multiple active sessions coordinated through a transactional database. The LMAX team were familiar with this approach, and confident that it wouldn't work for LMAX. This assessment was founded in the experiences of Betfair - the parent company who set up LMAX. Betfair is a betting site that allows people to bet on sporting events. It handles very high volumes of traffic with a lot of contention - sports bets tend to burst around particular events. To make this work they have one of the hottest database installations around and have had to do many unnatural acts in order to make it work. Based on this experience they knew how difficult it was to maintain Betfair's performance and were sure that this kind of architecture would not work for the very low latency that a trading site would require. As a result they had to find a different approach.

Their initial approach was to follow what so many are saying these days - that to get high performance you need to use explicit concurrency. For this scenario, this means allowing orders to be processed by multiple threads in parallel. However, as is often the case with concurrency, the difficulty comes because these threads have to communicate with each other. Processing an order changes market conditions and these conditions need to be communicated.

The approach they explored early on was the Actor model and its cousin SEDA. The Actor model relies on independent, active objects with their own thread that communicate with each other via queues. Many people find this kind of concurrency model much easier to deal with than trying to do something based on locking primitives.

The team built a prototype exchange using the actor model and did performance tests on it. What they found was that the processors spent more time managing queues than doing the real logic of the application. Queue access was a bottleneck.

When pushing performance like this, it starts to become important to take account of the way modern hardware is constructed. The phrase Martin Thompson likes to use is "mechanical sympathy". The term comes from race car driving and it reflects the driver having an innate feel for the car, so they are able to feel how to get the best out of it. Many programmers, and I confess I fall into this camp, don't have much mechanical sympathy for how programming interacts with hardware. What's worse is that many programmers think they have mechanical sympathy, but it's built on notions of how hardware used to work that are now many years out of date.

One of the dominant factors with modern CPUs that affects latency, is how the CPU interacts with memory. These days going to main memory is a very slow operation in CPU-terms. CPUs have multiple levels of cache, each of which is significantly faster. So to increase speed you want to get your code and data in those caches.

At one level, the actor model helps here. You can think of an actor as its own object that clusters code and data, which is a natural unit for caching. But actors need to communicate, which they do through queues - and the LMAX team observed that it's the queues that interfere with caching.

The explanation runs like this: in order to put some data on a queue, you need to write to that queue. Similarly, to take data off the queue, you need to write to the queue to perform the removal. This is write contention - more than one client may need to write to the same data structure. To deal with the write contention a queue often uses locks. But if a lock is used, that can cause a context switch to the kernel. When this happens the processor involved is likely to lose the data in its caches.

The conclusion they came to was that to get the best caching behavior, you need a design that has only one core writing to any memory location[17]. Multiple readers are fine; processors often use special high-speed links between their caches. But queues fail the one-writer principle.

This analysis led the LMAX team to a couple of conclusions. Firstly it led to the design of the disruptor, which determinedly follows the single-writer constraint. Secondly it led to the idea of exploring the single-threaded business logic approach, asking the question of how fast a single thread can go if it's freed of concurrency management.

The essence of working on a single thread, is to ensure that you have one thread running on one core, the caches warm up, and as much memory access as possible goes to the caches rather than to main memory. This means that both the code and the working set of data needs to be as consistently accessed as possible. Also keeping small objects with code and data together allows them to be swapped between the caches as a unit, simplifying the cache management and again improving performance.

An essential part of the path to the LMAX architecture was the use of performance testing. The consideration and abandonment of an actor-based approach came from building and performance testing a prototype. Similarly, many of the steps in improving the performance of the various components were enabled by performance tests. Mechanical sympathy is very valuable - it helps to form hypotheses about what improvements you can make, and guides you to forward steps rather than backward ones - but in the end it's the testing that gives you the convincing evidence.

Performance testing in this style, however, is not a well-understood topic. Regularly the LMAX team stresses that coming up with meaningful performance tests is often harder than developing the production code. Again mechanical sympathy is important to developing the right tests. Testing a low level concurrency component is meaningless unless you take into account the caching behavior of the CPU.

One particular lesson is the importance of writing tests against null components to ensure the performance test is fast enough to really measure what real components are doing. Writing fast test code is no easier than writing fast production code and it's too easy to get false results because the test isn't as fast as the component it's trying to measure.


Should you use this architecture?

At first glance, this architecture appears to be for a very small niche. After all, the driver that led to it was to be able to run lots of complex transactions with very low latency - most applications don't need to run at 6 million TPS.

But the thing that fascinates me about this application, is that they have ended up with a design which removes much of the programming complexity that plagues many software projects. The traditional model of concurrent sessions surrounding a transactional database isn't free of hassles. There's usually a non-trivial effort that goes into the relationship with the database. Object/relational mapping tools can help with much of the pain of dealing with a database, but they don't deal with it all. Most performance tuning of enterprise applications involves futzing around with SQL.

These days, you can get more main memory into your servers than us old guys could get as disk space. More and more applications are quite capable of putting all their working set in main memory - thus eliminating a source of both complexity and sluggishness. Event Sourcing provides a way to solve the durability problem for an in-memory system, running everything in a single thread solves the concurrency issue. The LMAX experience suggests that as long as you need less than a few million TPS, you'll have enough performance headroom.

There is a considerable overlap here with the growing interest in CQRS. An event sourced, in-memory processor is a natural choice for the command-side of a CQRS system. (Although the LMAX team does not currently use CQRS.)

So what indicates you shouldn't go down this path? This is always a tricky question for little-known techniques like this, since the profession needs more time to explore its boundaries. A starting point, however, is to think of the characteristics that encourage the architecture.

One characteristic is that this is a connected domain where processing one transaction always has the potential to change how following ones are processed. With transactions that are more independent of each other, there's less need to coordinate, so using separate processors running in parallel becomes more attractive.

LMAX concentrates on figuring the consequences of how events change the world. Many sites are more about taking an existing store of information and rendering various combinations of that information to as many eyeballs as they can find - eg think of any media site. Here the architectural challenge often centers on getting your caches right.

Another characteristic of LMAX is that this is a backend system, so it's reasonable to consider how applicable it would be for something acting in an interactive mode. Increasingly web applications are helping us get used to server systems that react to requests, an aspect that does fit in well with this architecture. Where this architecture goes further than most such systems is its absolute use of asynchronous communications, resulting in the changes to the programming model that I outlined earlier.

These changes will take some getting used to for most teams. Most people tend to think of programming in synchronous terms and are not used to dealing with asynchrony. Yet it's long been true that asynchronous communication is an essential tool for responsiveness. It will be interesting to see if the wider use of asynchronous communication in the javascript world, with AJAX and node.js, will encourage more people to investigate this style. The LMAX team found that while it took a bit of time to adjust to asynchronous style, it soon became natural and often easier. In particular error handling was much easier to deal with under this approach.

The LMAX team certainly feels that the days of the coordinating transactional database are numbered. The fact that you can write software more easily using this kind of architecture and that it runs more quickly removes much of the justification for the traditional central database.

For my part, I find this a very exciting story. Much of my goal is to concentrate on software that models complex domains. An architecture like this provides good separation of concerns, allowing people to focus on Domain-Driven Design and keeping much of the platform complexity well separated. The close coupling between domain objects and databases has always been an irritation - approaches like this suggest a way out.

Reposted from: https://www.cnblogs.com/davidwang456/p/4582445.html


在线精品一区二区 | 国产精品一区二区久久久 | 最新色站 | 在线免费观看视频一区二区三区 | 最近高清中文在线字幕在线观看 | 国产精品不卡av | 99在线热播精品免费99热 | 久久久久欧美精品999 | 91伊人| 日韩在线国产 | 欧美特一级 | 在线观看免费成人av | 色橹橹欧美在线观看视频高清 | 中文有码在线视频 | 免费在线国产精品 | 久青草视频在线观看 | 一区 二区电影免费在线观看 | 国产精品久久久 | 久久精品久久久久电影 | 久久一区二区三区日韩 | 在线导航福利 | 中文字幕国产精品一区二区 | 色av资源网 | www.夜夜爽| 国产 中文 日韩 欧美 | 欧美一级视频一区 | 天天插一插 | 成人免费看视频 | 午夜久久久久久久 | 久久精品欧美日韩精品 | 国产剧情亚洲 | bayu135国产精品视频 | 日韩午夜在线观看 | 日日夜夜天天 | 人人看97 | 亚洲.www| 在线观看免费成人av | 欧美日韩精品在线免费观看 | 国产成人av电影在线 | 国产精品久久久久久久久久尿 | 91九色蝌蚪国产 | 91传媒视频在线观看 | 美州a亚洲一视本频v色道 | 四虎免费在线观看 | 91免费日韩| 免费av网址大全 | 欧美另类视频 | 狠狠色噜噜狠狠 | 黄色综合| 国产黑丝一区二区三区 | 久久人人添人人爽添人人88v | 久久视频在线看 | 国产精品久久久久久久久久久久冷 | 国产手机在线视频 | 成人一级免费电影 | 国产黄a三级三级三级三级三级 | 亚洲综合射 | 亚洲精品在线视频 | 日韩精品一区二区三区电影 | 成人国产在线 | 天天射天天艹 | 国产黄色美女 | 中文字幕 国产视频 | 国产精品1区2区 | 91片在线观看 | 不卡精品视频 | 亚一亚二国产专区 | 日韩欧美在线影院 | 国产精品av电影 | 久久国色夜色精品国产 | 日韩在线视频线视频免费网站 | 美女久久久久久久久久 | 国产无限资源在线观看 | 91视频首页| 欧美色噜噜 | 久久综合精品国产一区二区三区 | 久久成人资源 | 91麻豆视频网站 | 看污网站 | 欧美福利片在线观看 | 欧美成人在线免费观看 | 日精品 | 天堂在线视频中文网 | 美女视频黄免费的 | 美女久久精品 | 成人午夜电影网 | 激情婷婷六月 | 天天插天天干天天操 | 国产精品成人一区二区三区吃奶 | 日本不卡123 | 97精品欧美91久久久久久 | 久久婷婷国产 | 久久久久国产成人免费精品免费 | 久久福利精品 | 色综合久久中文综合久久牛 | 久久久久久久免费 | 成人app在线免费观看 | 亚洲乱码精品久久久久 | 丝袜av网站 | 亚洲涩涩网| 国产免费黄视频在线观看 | 天天操天天干天天 | 91免费的视频在线播放 | 午夜av在线播放 | 国产精品99久久久久久久久久久久 | 国产99久久久精品 | 五月亚洲综合 | 六月丁香激情综合 | 色吊丝在线永久观看最新版本 | 波多野结衣综合网 | 亚洲一区精品人人爽人人躁 | 91av99| 亚洲精品国产高清 | 狠狠色丁香婷婷综合 | 高清精品视频 | 国产香蕉97碰碰碰视频在线观看 | 色综合激情网 | 天天操天天弄 | 国产成人精品久久亚洲高清不卡 | 久久久久久高潮国产精品视 | 91麻豆精品91久久久久同性 | 麻豆视频在线免费观看 | 香蕉成人在线视频 | 成人免费毛片aaaaaa片 | 激情视频一区二区三区 | 玖玖精品在线 | 成人免费观看大片 | 国产福利中文字幕 | 免费看日韩 | av色影院| 91免费黄视频 | 欧美激情综合色 | 国产人在线成免费视频 | 丁香视频全集免费观看 | 91麻豆精品国产91久久久久 | 免费日韩 精品中文字幕视频在线 | 天天人人综合 | 在线视频 国产 日韩 | 西西人体www444 | 中文在线8资源库 | 黄色视屏免费在线观看 | 91精品在线观看视频 | av大片免费 | 亚洲免费观看在线视频 | 少妇超碰在线 | 色视频在线免费观看 | 精品国产一区二区三区免费 | 日韩影视精品 | 在线视频99 | 久久看视频 | 久久综合精品国产一区二区三区 | 亚洲精品在线观看中文字幕 | 日韩在线精品 | 黄色片免费看 | 国产精品不卡在线观看 | 国产成人精品久久久久 | 日本三级不卡视频 | 免费网站在线观看人 | 天天干天天碰 | 在线看黄色的网站 | 视频一区二区精品 | 国产精品久久久久永久免费 | 日韩中文字幕91 | 精品国偷自产国产一区 | 国产午夜精品久久久久久久久久 | 亚洲日本黄色 | 日韩欧美精品一区二区三区经典 | 黄视频网站大全 | 欧美视屏一区二区 | 精品成人免费 
| 久久精品国产免费观看 | 国产又粗又猛又黄视频 | 亚洲精品国产精品国自产观看浪潮 | 国产精品99久久久久久大便 | 天天射天天干天天爽 | 欧美做受xxx | 乱男乱女www7788 | 一级片视频在线 | 日韩手机视频 | 欧美日韩精品在线观看视频 | 色婷婷丁香 | 国产福利在线不卡 | 色噜噜狠狠狠狠色综合 | 97天天干| 久久国产免 | 九草在线视频 | 九九精品视频在线观看 | 国产伦精品一区二区三区在线 | 看片一区二区三区 | 麻豆视频91| 欧洲精品视频一区二区 | 黄网站免费大全入口 | 丝袜美女在线 | 97av在线视频| 人人看人人草 | 婷婷在线播放 | 亚洲一区二区麻豆 | 精品色综合 | 少妇bbbb揉bbbb日本 | 久热免费在线观看 | 夜夜澡人模人人添人人看 | 久久国产精品色婷婷 | 亚洲综合在线五月 | 久久国产精品久久久 | 人人爽人人做 | 丁香影院在线 | 国产在线看一区 | 一级特黄av | 国产专区精品 | 国产无吗一区二区三区在线欢 | 91重口视频 | 在线观看成人国产 | 国产日韩欧美视频在线观看 | 69视频国产 | 国语精品免费视频 | 男女激情片在线观看 | 丰满少妇在线观看资源站 | 成年人免费在线观看网站 | 国产精品久久久精品 | 国产成人专区 | 久久av高清 | 久久黄色网页 | 日韩中文字幕免费视频 | 少妇18xxxx性xxxx片 | 成人羞羞视频在线观看免费 | 欧美久久久久久久久久久久久 | 欧美日韩国产综合网 | 亚洲精品免费播放 | 久久超级碰 | 天天夜夜亚洲 | 久久精品国产亚洲aⅴ | 亚洲 成人 欧美 | 九九国产精品视频 | 91精品在线观看视频 | 久热这里有精品 | 免费进去里的视频 | 成人av影院在线观看 | 欧美成人区 | 日本精品久久久一区二区三区 | 在线免费成人 | 99在线视频观看 | 日韩久久久 | 成人免费网站在线观看 | 免费看的黄色录像 | 一区二区视频在线播放 | 国产视频18 | 天天操天天干天天操天天干 | 国产韩国日本高清视频 | 伊人av综合 | 亚洲永久精品一区 | 日日夜夜综合网 | 97免费视频在线 | 亚洲精品美女久久 | 国产精品久久久视频 | 超碰免费av| 亚洲人成网站精品片在线观看 | 九九九九热精品免费视频点播观看 | 综合网色 | 日韩一区二区三区视频在线 | 91精品国自产在线观看 | 黄色aaa级片 | 日韩精品一区二区三区三炮视频 | 99精品视频精品精品视频 | 国产亚洲人成网站在线观看 | 日韩日韩日韩日韩 | 国产亚洲视频系列 | 91精品国自产在线偷拍蜜桃 | 成人一级影视 | 男女拍拍免费视频 | 欧美在线视频一区二区三区 | 欧美精品一二三 | 丰满少妇高潮在线观看 | 国产 日韩 欧美 中文 在线播放 | 91经典在线 | 久久久精品| 久久久官网 | 人人盈棋牌| 91成品人影院 | 日日天天干| 国产精品亚洲成人 | 日韩三级精品 | 精品久操 | 国产精品久久久久久麻豆一区 | 欧美日韩一区二区视频在线观看 | 最近乱久中文字幕 | 人成午夜视频 | 狠狠色免费| 99精品免费| 国产高清视频免费观看 | 久精品视频免费观看2 | 欧美性生爱 | 国产视频1区2区3区 久久夜视频 | 高清av免费看 | 六月丁香激情综合色啪小说 | 国产精品久久久999 国产91九色视频 | 国产高清免费 | 国产91在线 | 美洲 | 在线看成人 | 国产123区在线观看 国产精品麻豆91 | 成人av资源网 | 97人人爽人人 | 日韩大片免费在线观看 | 欧美a√大片 | 黄色网址国产 | 国产尤物在线观看 | 日韩欧美综合在线视频 | 亚洲女人天堂成人av在线 | 亚洲人人精品 | 2024av| 国产专区视频 | 日韩精品专区 | 久草在线视频首页 | 激情av综合| 国产亚洲精品久久久久久大师 | 特级西西444www大胆高清无视频 | 高清不卡免费视频 | 亚洲成a人片77777kkkk1在线观看 | 亚洲精品久久久久久久不卡四虎 | 精品久久久久久久久久岛国gif | 国产成人久久精品77777 | 久久伊人八月婷婷综合激情 | 日韩乱码中文字幕 | 日本精品视频在线 | 黄色大片视频网站 | 97视频在线观看成人 | 中文字幕视频网站 | 男女视频国产 | 免费a级毛片在线看 | 久久伊人国产精品 | 国产精品18久久久久vr手机版特色 | 国产a视频免费观看 | 在线91网| 91在线播放国产 | 黄色成人影视 | 2020天天干夜夜爽 | 色偷偷888欧美精品久久久 | 久久精品波多野结衣 | 亚洲高清在线精品 | 天天添夜夜操 | 
www国产精品com | 麻豆视频91 | 欧美日韩视频免费 | 女人18片毛片90分钟 | 91原创在线观看 | 国产美女在线免费观看 | 久久99亚洲精品久久久久 | 中文乱幕日产无线码1区 | 九九久久电影 | 日本黄色免费观看 | 亚洲一区不卡视频 | 午夜美女wwww | 成人小电影在线看 | 精品国产亚洲一区二区麻豆 | 夜夜婷婷 | 92中文资源在线 | 六月丁香伊人 | 免费成人在线视频网站 | 久久综合久久八八 | 久久9视频 | 久久精品日产第一区二区三区乱码 | 丁香激情综合国产 | 日韩av进入 | 亚洲美女在线一区 | 国产视频在线一区二区 | 国产亚洲视频系列 | 精品亚洲欧美无人区乱码 | 国产中文字幕在线视频 | 国产成人精品一区二区三区 | 久久激情视频 久久 | 999久久国产 | 福利视频区| 成人在线观看你懂的 | 日韩在线精品视频 | 久久久免费高清视频 | 色婷婷成人 | wwwwww国产 | 亚洲人成人天堂h久久 | 狠狠色香婷婷久久亚洲精品 | 欧美日韩国产高清视频 | 99久久久久久久久久 | 99在线精品观看 | 久久免费精品视频 | 91最新中文字幕 | 日韩网站在线播放 | 免费美女av| 久久久久电影网站 | 久久久精品亚洲 | 久久久久国产精品免费 | 亚洲成人av在线播放 | 欧美精品少妇xxxxx喷水 | av电影不卡 | 成人毛片一区 | 美女视频黄是免费的 | 91爱爱视频 | 伊人天天色 | 久久精品视频18 | 青春草视频| 99看视频在线观看 | 欧美男男激情videos | 中文字幕91视频 | 国产成人久久久久 | 超碰成人网 | 欧美做受高潮1 | av片一区二区 | 日韩一区二区免费视频 | 一二三区在线 | 午夜精品久久 | 日韩成人免费在线观看 | 免费在线观看午夜视频 | www.eeuss影院av撸 | 日日夜夜天天综合 | 久久99影院| 天天看天天操 | 国产精品激情在线观看 | 狠狠干狠狠插 | 亚洲欧美视频在线播放 | 国产一区视频在线观看免费 | 久久免费资源 | 天天草网站 | 久久久久久久久久免费 | 日韩手机在线 | 99精品成人 | 久久免费在线观看 | 少妇资源站 | av黄色亚洲| 欧美作爱视频 | 国产成人精品亚洲精品 | 午夜精品一区二区三区免费视频 | 久久久久免费精品视频 | 久久精品久久精品久久39 | 天天爽人人爽 | 久久久不卡影院 | 看片的网址 | 国产一区电影在线观看 | 少妇自拍av| 日日操网| 天天色天天射天天干 | 国产黄色片一级三级 | 国产一线二线三线在线观看 | 草久视频在线观看 | 最近中文字幕免费观看 | 99 视频 高清 | 久久五月精品 | 色九九视频 | 日韩av一区二区三区 | 久久久国产精品视频 | 日本激情视频中文字幕 | 日本一区二区三区视频在线播放 | 免费黄a大片| 欧美地下肉体性派对 | 日本中文字幕电影在线免费观看 | 91超碰免费在线 | 91视频这里只有精品 | 精品一区三区 | www.五月天婷婷 | 欧美日韩精品在线视频 | 国产在线日本 | 久热av| 成人久久精品 | 国产99久久久国产精品免费看 | 国产一级片久久 | 天天激情综合 | 黄色成品视频 | 在线播放日韩av | 亚洲日b视频 | 久久精品免视看 | 国产美女网站在线观看 | av中文字幕免费在线观看 | 超碰在线97观看 | 日韩免费小视频 | 黄色成人免费电影 | 爱爱av在线 | 超碰在线日本 | 国产一区欧美日韩 | 亚洲精品成人av在线 | 国产精品第一视频 | 亚洲欧美日韩精品一区二区 | 国产精品密入口果冻 | 亚洲日日日 | 五月婷婷深开心 | 免费看高清毛片 | 日韩一级精品 | 精品国产成人 | 中文字幕在线播放一区二区 | 国产三级香港三韩国三级 | 国产中文字幕一区二区 | 午夜视频在线观看欧美 | 国产91精品一区二区绿帽 | 狠狠色丁香久久婷婷综合五月 | 久久久久久久免费观看 | 99热在线看 | 亚洲精品美女 |