Study Log (2021.03)

2021-03-23

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • Developed support for cases where multiple machines can advance in a single step
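The episode loop these calls come from can be sketched roughly as below. The stub bodies of `do_simulate_on_aggregated_state` and `StubAgent.fit` are placeholders (assumptions) standing in for the real S-K RL implementations; only the call signatures mirror the log.

```python
# Hedged sketch of the per-episode loop in train_FT10_ppo_node_only.py.
# Stub bodies below are assumptions, not the real S-K RL code.

def do_simulate_on_aggregated_state(simulator, agent, episode_index,
                                    device='cpu', reward='utilization',
                                    scaled=False, mode='node_mode'):
    # Stub rollout: the real function simulates one scheduling episode
    # and fills the agent's buffer with transitions.
    pass

class StubAgent:
    def fit(self, eval=0, reward_setting='utilization', device='cpu',
            return_scaled=False):
        # The real agent runs PPO updates over the collected rollout and
        # returns the three training statistics logged above.
        return 0.5, -0.1, 1.2  # value_loss, action_loss, dist_entropy

def train(sim, agent, num_episodes=3, device='cpu'):
    history = []
    for ep in range(num_episodes):
        # 1) collect a rollout on the aggregated state
        do_simulate_on_aggregated_state(sim, agent, ep, device=device,
                                        reward='utilization',
                                        scaled=False, mode='node_mode')
        # 2) PPO update
        value_loss, action_loss, dist_entropy = agent.fit(
            eval=0, reward_setting='utilization',
            device=device, return_scaled=False)
        history.append((value_loss, action_loss, dist_entropy))
    return history

history = train(sim=None, agent=StubAgent())
```

Periodic `evaluate_agent_on_aggregated_state` / `validation` calls would slot into the same loop at a chosen interval.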

2021-03-22

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • Developed support for cases where multiple machines can advance in a single step

2021-03-21

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • Developed support for cases where multiple machines can advance in a single step

2021-03-20

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • Added a Plotly Gantt chart

2021-03-18

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • Developed support for cases where multiple machines can advance in a single step

2021-03-17

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
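The `shutdown_prob` idea behind the `*_interrupted` evaluators can be illustrated with a toy model: during evaluation, each operation's machine may randomly shut down with probability `shutdown_prob`, delaying the schedule. The penalty model below (re-running the interrupted operation) is an assumption for illustration, not the actual S-K RL logic.

```python
# Toy stand-in for the *_interrupted evaluators: random machine shutdowns
# stretch the makespan. The re-run penalty is an assumed simplification.
import random

def makespan_with_shutdowns(process_times, shutdown_prob=0.2, seed=0):
    rng = random.Random(seed)  # seeded for reproducible evaluation
    makespan = 0.0
    for t in process_times:
        makespan += t
        if rng.random() < shutdown_prob:
            makespan += t  # machine went down: redo the operation
    return makespan

base = makespan_with_shutdowns([3, 5, 2], shutdown_prob=0.0)   # no shutdowns
interrupted = makespan_with_shutdowns([3, 5, 2], shutdown_prob=0.5)
```

With `shutdown_prob=0.0` the result is just the sum of processing times; higher probabilities can only lengthen the schedule, which is what the interrupted validation runs measure against the uninterrupted baseline.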

2021-03-16

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-15

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-12

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-11

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-10

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-09

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-08

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-07

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-06

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-05

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-04

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-03

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-02

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)

2021-03-01

  • S-K RL
    • train_FT10_ppo_node_only.py
      • do_simulate_on_aggregated_state()
      • value_loss, action_loss, dist_entropy = agent.fit(eval=0, reward_setting='utilization', device=device, return_scaled=False)
      • eval_performance = evaluate_agent_on_aggregated_state(simulator=sim, agent=agent, device='cpu', mode='node_mode')
      • val_performance = validation(agent, path, mode='node_mode')
    • SBJSSP_report_results.ipynb
      • def get_swapping_ops(blocking_op, machine_dict)
      • class blMachine(Machine)
      • class blMachineManager(MachineManager)
      • class blSimulator(Simulator)
      • def evaluate_agent_on_aggregated_state(simulator, agent, device, mode='edge_mode')
      • def evaluate_agent_on_aggregated_state_DR(simulator, mode='MTWR')
      • def SBJSSP_validation(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None)
      • def compare_with_optimum(makespans, files, plot=False, scheduler_name=None)
      • def evaluate_agent_on_aggregated_state_DR_interrupted(simulator, mode='MTWR', shutdown_prob=0.2)
      • def evaluate_agent_on_aggregated_state_interrupted(simulator, agent, device, mode='edge_mode', shutdown_prob=0.2)
      • def SBJSSP_validation_interrupted(agent, path, device='cpu', optimums=None, num_val=100, new_attr=False, mode='edge_mode', special=None, DR=None, shutdown_prob=0.2)
      • def random_simulator(min_m=5, max_m=10, max_job=10, new_attr=False, special='SBJSSP')
      • def do_simulate_on_aggregated_state_interrupted(simulator, agent, episode_index, device, reward='utilization', scaled=False, mode='edge_mode', shutdown_prob=0.2)
