

We cancel all existing workers because it is easier to solve this problem if you don't have to keep track of state. Netscript's programming capabilities are some of the most challenging and inconsistent I've ever worked with, so I want to write as little complex code as possible. Cancelling all our existing workers has some minor drawbacks in terms of performance, but what it wins us in simplicity dominates such considerations. We'll be able to spend more time thinking about algorithmic improvements if we don't have to do fiddly things like managing state.
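As a rough sketch of what the "cancel everything" step looks like (the host list is invented, and home is skipped so the distributor itself keeps running):

```
// Sketch only: wipe every running worker before rescheduling from scratch.
// The host names are placeholders for whatever list the distributor keeps.
var hosts = ["n00dles", "foodnstuff", "pserv-0"];
for (var i = 0; i < hosts.length; i++) {
    killall(hosts[i]); // stops every script on that server
}
```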

The new worker scheduling algorithm currently has two basic priorities. The first priority is to focus on weakening the weakest pending node. It iterates through the targets in the order the spider observed them. If it encounters any that are significantly more secure than their minimum security level, it will dedicate as many threads as possible among all the hosts to weakening that server. It also spawns a small watcher script to notify the distributor when a node like this has been weakened down to the minimum level.

Currently, I only do this preparation step for security level, but I should probably also grow servers before beginning to hack them.

The second priority is to schedule "flexihack" workers. We try to schedule these backwards in the targets list, focusing on the highest-growth servers first. I found the task of balancing grow and hack calls tedious, so my flexihack worker calls grow and hack adaptively (when the available money drops below 95% of max, grow is called). We also schedule a small group of flexihack workers on any totally weakened target, even if it's not the most optimal one, so that we can at least have some income from hacking early on. My current stance is to use one weaken worker per six flexihack workers, which seems to be about as much as is required to keep up with either six grow threads or six hack threads.
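A rough sketch of what a flexihack worker amounts to (the 95% threshold is the real one; the argument handling and endless loop are just the obvious way to write it):

```
// flexihack.script (sketch): grow when the target is below 95% of max money,
// otherwise hack. The target hostname is passed as the first argument.
var target = args[0];
while (true) {
    if (getServerMoneyAvailable(target) < getServerMaxMoney(target) * 0.95) {
        grow(target);
    } else {
        hack(target);
    }
}
```

Since both grow and hack raise the target's security, this is exactly what the paired weaken workers are there to counteract.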


The signal can come from either the spider, when it gains enough strength to hack a new server, or the watcher, when a server has been sufficiently weakened (or, soon, grown). You can also run a small script called "signal.script" to manually reschedule (e.g. if you have bought servers manually, or grown your home server's RAM). The signal is just the presence of a certain file on the home computer.
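A watcher therefore only needs to poll until the target bottoms out and then drop that file. Something along these lines, where the file name and the 10-second poll are placeholders rather than the real values:

```
// watcher.script (sketch): run on home; when the target reaches its minimum
// security level, drop the file the distributor treats as the signal.
var target = args[0];
while (getServerSecurityLevel(target) > getServerMinSecurityLevel(target)) {
    sleep(10000);
}
write("signal.txt", target, "w");
```

signal.script can be the same write() call with nothing else around it.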
