This repository has been archived by the owner on Feb 27, 2023. It is now read-only.

Proposal: a raw method to realize dynamical rate limiting #1427

Open
zhaxzhax opened this issue Jul 13, 2020 · 4 comments · May be fixed by #1455
Labels
kind/proposal any proposals for project

Comments

@zhaxzhax
Contributor

A raw method to realize dynamic rate limiting

This is just a rough first version of my planned implementation; it can still be optimized later.

My implementation plan

  1. Add {weight, rateLimiter} to the PeerMgr struct.
  2. Design a scheduling algorithm that prefers letting a PeerNode download from high-weight PeerNodes, and maintain the weight in the PeerMgr struct (as the picture below shows).
    [image]
  3. Write a program that listens to the host's total bandwidth (HTB Listener).
  4. Design an algorithm that dynamically adjusts the host rateLimiter and sends it to SuperNodePeer (the flow chart and the algorithm are below).
    [image]
    [image]
  5. Add a CLI option that determines whether the dynamic rateLimiter is enabled.
  6. Benchmark and optimize.
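Step 1 of the plan can be sketched with a minimal, stdlib-only token bucket. All names here (PeerMgr fields, rateLimiter) are stand-ins, not Dragonfly's actual types; a real implementation might use golang.org/x/time/rate instead of a hand-rolled bucket:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// rateLimiter is a minimal token-bucket limiter (bytes per second).
type rateLimiter struct {
	mu       sync.Mutex
	rate     int64 // tokens (bytes) refilled per second
	capacity int64
	tokens   int64
	last     time.Time
}

func newRateLimiter(rate int64) *rateLimiter {
	return &rateLimiter{rate: rate, capacity: rate, tokens: rate, last: time.Now()}
}

// Allow reports whether n bytes may be transferred now.
func (l *rateLimiter) Allow(n int64) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	now := time.Now()
	// refill tokens proportionally to the elapsed time
	l.tokens += int64(now.Sub(l.last).Seconds() * float64(l.rate))
	if l.tokens > l.capacity {
		l.tokens = l.capacity
	}
	l.last = now
	if l.tokens < n {
		return false
	}
	l.tokens -= n
	return true
}

// SetRate adjusts the limit at runtime, as step 4 of the plan requires.
func (l *rateLimiter) SetRate(rate int64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	l.rate = rate
	l.capacity = rate
}

// PeerMgr holds per-peer weight and limiter as sketched in step 1.
type PeerMgr struct {
	mu       sync.Mutex
	weight   map[string]int // peerID -> scheduling weight
	limiters map[string]*rateLimiter
}

func NewPeerMgr() *PeerMgr {
	return &PeerMgr{weight: map[string]int{}, limiters: map[string]*rateLimiter{}}
}

func (m *PeerMgr) Register(peerID string, weight int, rateBps int64) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.weight[peerID] = weight
	m.limiters[peerID] = newRateLimiter(rateBps)
}

func main() {
	mgr := NewPeerMgr()
	mgr.Register("peer-1", 10, 1<<20) // 1 MiB/s
	fmt.Println(mgr.limiters["peer-1"].Allow(512 << 10))
}
```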

Additional information

  • progress_manager already has ProducerLoad; maybe we should use ProducerLoad instead of weight and add a rateLimiter to each progress_manager.
  • We could use gopacket to listen to the host's local bandwidth.
  • The dynamic rate algorithm may be too rough; imitating a TCP congestion control algorithm could be a better approach.
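The last bullet can be illustrated with a small AIMD (additive-increase / multiplicative-decrease) adjuster, the core idea behind classic TCP congestion control: grow the limit slowly while the host has spare bandwidth, back off sharply on congestion. The type and constants here are illustrative, not part of Dragonfly:

```go
package main

import "fmt"

// aimdLimiter adjusts a rate limit using additive-increase /
// multiplicative-decrease, mimicking TCP congestion control.
type aimdLimiter struct {
	rate     float64 // current limit, bytes/s
	maxRate  float64
	addStep  float64 // additive increase per feedback round
	decrease float64 // multiplicative decrease factor on congestion
}

// Update applies one feedback round and returns the new limit.
func (a *aimdLimiter) Update(congested bool) float64 {
	if congested {
		a.rate *= a.decrease
	} else {
		a.rate += a.addStep
		if a.rate > a.maxRate {
			a.rate = a.maxRate
		}
	}
	return a.rate
}

func main() {
	l := &aimdLimiter{rate: 1000, maxRate: 10000, addStep: 100, decrease: 0.5}
	fmt.Println(l.Update(false)) // spare bandwidth: 1000 -> 1100
	fmt.Println(l.Update(true))  // congestion: 1100 -> 550
}
```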
@pouchrobot pouchrobot added the kind/proposal any proposals for project label Jul 13, 2020
@zhaxzhax zhaxzhax changed the title from "Proposal: a raw method to realize dynamical rate limiting #1344 #1357" to "Proposal: a raw method to realize dynamical rate limiting" Jul 13, 2020
@zhaxzhax
Contributor Author

This proposal is related to #1344 and #1357.

@zhaxzhax
Contributor Author

zhaxzhax commented Jul 23, 2020

[image]
It seems that in Dragonfly's scheduling algorithm, pieces are sorted only to ensure that the pieces sliced from a file get distributed across many peers, avoiding the situation where all peers download the same piece (resource contention).
But if a peer decides to download piece 3 because pieces 0, 1, and 2 have a bigger PieceCount than piece 3, the peer has no basis for choosing among the supernode and the peers serving piece 0, 1, or 2.
My aim is to add a DynamicRate field so that the peerIDs string slice can be sorted by DynamicRate/ProducerLoad.

  • the raw code of the sort
func (sm *Manager) getPeerPriorityMap(ctx context.Context, peerIDs []string) map[string]int {
	peerPriorityMap := make(map[string]int)
	for i := 0; i < len(peerIDs); i++ {
		// NOTE: should we return an error here or just log it?
		// If we fail to get the peerState, the peer's priority is set to the lowest.
		peerState, err := sm.progressMgr.GetPeerStateByPeerID(ctx, peerIDs[i])
		if err != nil {
			peerPriorityMap[peerIDs[i]] = 0
		} else {
			load := int64(peerState.ProducerLoad.Get())
			if load <= 0 {
				// guard against division by zero: treat an idle peer as load 1
				load = 1
			}
			peerPriorityMap[peerIDs[i]] = int(peerState.DynamicRate / load)
		}
	}
	return peerPriorityMap
}

// sortPeerExecutor sorts the peers by DynamicRate/ProducerLoad
func (sm *Manager) sortPeerExecutor(ctx context.Context, peerIDs []string, peerPriorityMap map[string]int) {
	if len(peerIDs) == 0 || len(peerPriorityMap) == 0 {
		return
	}
	sort.Slice(peerIDs, func(i, j int) bool {
		// sort in descending order of priority so that peers with
		// more spare bandwidth per unit of load come first
		return peerPriorityMap[peerIDs[i]] > peerPriorityMap[peerIDs[j]]
	})
}

  • Add two function calls in the tryGetPID function
	// sort the peerIDs by DynamicRate/ProducerLoad
	peerPriorityMap := sm.getPeerPriorityMap(ctx, peerIDs)
	sm.sortPeerExecutor(ctx, peerIDs, peerPriorityMap)
  • Add a DynamicRate field to the PeerState type
type PeerState struct {
	// PeerID identifies a peer uniquely.
	PeerID string

	// ProducerLoad is the load of the download services provided by the current node.
	ProducerLoad *atomiccount.AtomicInt

	// ClientErrorCount maintains the number of times that the PeerID failed to download from other peer nodes.
	ClientErrorCount *atomiccount.AtomicInt

	// ServiceErrorCount maintains the number of times that other peer nodes failed to download from the PeerID.
	ServiceErrorCount *atomiccount.AtomicInt

	// ServiceDownTime is the downtime of the peer service.
	ServiceDownTime int64

	// DynamicRate is the dynamic download rate limit of the peer node.
	DynamicRate int64
}
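A self-contained sketch of the DynamicRate/ProducerLoad priority sort described above, with peerState simplified to plain int64 fields (the real ProducerLoad is an AtomicInt) and a guard for the zero-load case:

```go
package main

import (
	"fmt"
	"sort"
)

// peerState is a simplified stand-in for the supernode's PeerState,
// keeping only the fields the sort relies on.
type peerState struct {
	DynamicRate  int64
	ProducerLoad int64
}

// priority computes DynamicRate/ProducerLoad, guarding the zero-load
// case that a naive division would panic on.
func priority(s peerState) int {
	if s.ProducerLoad == 0 {
		return int(s.DynamicRate)
	}
	return int(s.DynamicRate / s.ProducerLoad)
}

// sortPeersByPriority orders peerIDs so the highest-priority peer comes first.
func sortPeersByPriority(peerIDs []string, states map[string]peerState) {
	sort.Slice(peerIDs, func(i, j int) bool {
		return priority(states[peerIDs[i]]) > priority(states[peerIDs[j]])
	})
}

func main() {
	states := map[string]peerState{
		"peer-a": {DynamicRate: 4000, ProducerLoad: 4}, // priority 1000
		"peer-b": {DynamicRate: 9000, ProducerLoad: 3}, // priority 3000
		"peer-c": {DynamicRate: 500, ProducerLoad: 1},  // priority 500
	}
	ids := []string{"peer-a", "peer-b", "peer-c"}
	sortPeersByPriority(ids, states)
	fmt.Println(ids) // [peer-b peer-a peer-c]
}
```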

@zhaxzhax
Contributor Author

zhaxzhax commented Aug 7, 2020

Final design and implementation scheme

The designs before this comment may be deprecated; the final implementation follows this comment.

cli add

--dynamic, which enables the dynamic bandwidth limit mode.

api add

Add two APIs used to send bandwidth-limit updates from the peer server to the supernode.

  • /api/v1/peer/{id}/peerState/{dynamicRate} (the 1.0 version API)
  • /peer/dynamicrate (the 0.3 version API)

field add

  • Add a DynamicRate field to the PeerState struct

implement scheme

  • host_bandwidth_listener listens to the host bandwidth and sets 90% of the currently available bandwidth
    as totalLimit; the same value is sent to the supernode as dynamicRate.
  • The supernode uses dynamicRate and ProducerLoad to calculate each peer's priority, and the scheduling algorithm
    schedules peers with higher priority first.

part not implemented yet

  • host_bandwidth_listener

@zhaxzhax zhaxzhax linked a pull request Aug 7, 2020 that will close this issue
@zhaxzhax
Contributor Author

The final htb-Listener implementation uses the formula (maxRate - currentRate) * 9 / 10, i.e. 90% of the usable bandwidth.

  • Because server bandwidth is limited mainly by the upstream direction, I only dynamically change totalLimit, which limits just the server upload rate.
  • The dynamic rate changes every 10 seconds and is reported to the supernode so that the supernode can schedule peer download tasks by priority (dynamicRate/producerLoad).
  • I have removed portRate because Dragonfly should not import too many dependencies such as gopacket or libpcap.
  • I have removed the maxRate detection scheme and replaced it with a user-configurable DynamicMaxRate in the config file, because measuring bandwidth directly could waste resources, while letting the user configure DynamicMaxRate avoids that waste.
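The formula can be captured in one small function; here maxRate stands for the user-configured DynamicMaxRate and the numbers in main are illustrative (a real htb-Listener would run this every 10 seconds and report the result):

```go
package main

import "fmt"

// dynamicLimit applies the final htb-Listener formula:
// (maxRate - currentRate) * 9 / 10, i.e. 90% of the unused
// upstream bandwidth. Rates are in the same unit (e.g. Mbit/s).
func dynamicLimit(maxRate, currentRate int64) int64 {
	if currentRate >= maxRate {
		// the link is saturated; no bandwidth left to hand out
		return 0
	}
	return (maxRate - currentRate) * 9 / 10
}

func main() {
	// With a configured 100 Mbit/s ceiling and 40 Mbit/s in use,
	// totalLimit becomes (100-40)*9/10 = 54 Mbit/s.
	fmt.Println(dynamicLimit(100, 40))
}
```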
