pypette
pypette (to be read as pipette) is a module which makes building pipelines ridiculously simple, allowing users to control the flow with minimal instructions.
Features
- Ridiculously simple interface.
- Ability to view pipeline structure within the comfort of a terminal.
- Run pipelines in an exception-resilient way if needed.
- Create dependencies on pipelines easily.
- Generate an easy-to-read report within the comfort of a terminal.
Setup
Using pip

```shell
pip install pypette
```

Directly from the repository

```shell
git clone https://github.com/csurfer/pypette.git
cd pypette
python setup.py install
```
Documentation
Detailed documentation can be found at https://csurfer.github.io/pypette
Structures
Job
The basic unit of execution, say a python method or a callable.
```python
from pypette import Job

def print_hello():
    print("Hello!")

def print_hello_msg(msg):
    print("Hello " + msg + "!")

# Job without arguments
j1 = Job(print_hello)

# Job with arguments specified as an argument list
j2 = Job(print_hello_msg, args=("pypette is simple",))

# Job with arguments specified as keyword arguments
j3 = Job(print_hello_msg, kwargs={"msg": "pypette is simple"})
```
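A Job is conceptually a deferred call: it bundles a callable with its arguments so the pipe can invoke it later. As a rough analogy (an illustration, not pypette's internals), `functools.partial` does the same bundling:

```python
from functools import partial

def print_hello_msg(msg):
    print("Hello " + msg + "!")

# A deferred call: the function plus its arguments, invoked later.
# This mirrors what Job(print_hello_msg, kwargs={"msg": ...}) stores.
deferred = partial(print_hello_msg, msg="pypette is simple")
deferred()  # prints: Hello pypette is simple!
```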
BashJob
The basic unit of execution, which runs a bash command.
```python
from pypette import BashJob

# Jobs that run bash commands
b1 = BashJob(['ls', '-l'])
b2 = BashJob(['pwd'])
```
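Under the hood, a bash stage amounts to spawning a subprocess. A minimal sketch of that idea using Python's standard `subprocess` module (an illustration of the concept, not pypette's actual implementation):

```python
import subprocess

# Run a shell command, capture its output, and fail loudly on a non-zero
# exit code: roughly what executing a BashJob(['echo', 'hello']) stage
# boils down to.
result = subprocess.run(["echo", "hello"], capture_output=True, text=True, check=True)
print(result.stdout.strip())  # prints: hello
```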
Pipe
Structure to specify the flow in which the jobs need to be executed. The whole interface consists of only 4 methods.
```python
from pypette import Pipe

# 1. Create a new Pipe
p = Pipe('TestPipe')

# 2. Add jobs to execute. (Assuming job_list is a list of python/bash jobs.)

# To run the jobs in job_list in order, one after the other, where each job
# waits for the job before it to finish:
p.add_jobs(job_list)

# To run the jobs in job_list in parallel, and run the next step only after
# all jobs in job_list finish:
p.add_jobs(job_list, run_in_parallel=True)

# Jobs can also be added in a builder format.
p.add_stage(job1).add_stage(job2)  # To add jobs in series.
p.add_stage(job1, job2)            # To add jobs in parallel.
```
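To make the two modes concrete, here is a self-contained sketch of how a stage of jobs runs either in series or in parallel (an illustration of the semantics only, not pypette's implementation; `run_stage` is a hypothetical helper):

```python
from concurrent.futures import ThreadPoolExecutor

def run_stage(jobs, run_in_parallel=False):
    """Run one stage: jobs back to back, or all at once via a thread pool."""
    if run_in_parallel:
        with ThreadPoolExecutor() as pool:
            # Wait for every job in the stage before returning, so the
            # next stage starts only after this one is fully done.
            list(pool.map(lambda job: job(), jobs))
    else:
        for job in jobs:
            job()

results = []
run_stage([lambda: results.append("a"), lambda: results.append("b")])
print(results)  # a series stage preserves order: ['a', 'b']
```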
Building complex pipelines
Jobs submitted to a pipeline should be callables, i.e. structures that can be run. This means Python methods, lambdas, etc. qualify.
What about Pipe itself?
Of course, it is a callable and you can submit a pipe object to be run along with regular jobs. This way you can build small pipelines which achieve a specific task and then combine them to create more complex pipelines.
```python
from pypette import BashJob, Job, Pipe

def welcome():
    print("Welcome user!")

def havefun():
    print("Have fun!")

def goodbye():
    print("Goodbye!")

# Build a simple pipeline
p1 = Pipe('Fun')
p1.add_jobs([
    Job(havefun),
])

# Include the simple pipeline in a more complex pipeline
p2 = Pipe('Overall')
p2.add_jobs([
    Job(welcome),
    p1,
    Job(goodbye),
    BashJob(['ls', '-l']),
    BashJob(['pwd'])
])

p2.run()  # Runs welcome, then the p1 pipeline, then goodbye, then the bash jobs.
```
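Why nesting works can be sketched without pypette at all: any callable can be a step, including another pipeline that is itself just a callable (`make_pipeline` below is a hypothetical helper, for illustration only):

```python
def make_pipeline(steps):
    """Return a callable that runs each step in order, mimicking Pipe nesting."""
    def run():
        for step in steps:
            step()  # a step may itself be a pipeline returned by make_pipeline
    return run

log = []
inner = make_pipeline([lambda: log.append("fun")])
outer = make_pipeline([
    lambda: log.append("welcome"),
    inner,                          # a nested pipeline runs like any other step
    lambda: log.append("goodbye"),
])
outer()
print(log)  # ['welcome', 'fun', 'goodbye']
```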
Example pipeline
An example pipeline and its code are included in the examples folder.
Visualizing the pipeline using graph()

Pipeline objects have a method called graph() which helps visualize the pipeline within the comfort of your terminal. The graph is recursive in nature and visualizes everything that will be run if we call run() on the pipe object. For instance, calling graph() on the top-level pipeline in examples/basic.py renders the full pipeline tree in the terminal.
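A toy sketch of the idea behind a recursive terminal rendering (the data shape and output format here are illustrative only, not graph()'s actual output): nested pipes are walked depth-first and printed with increasing indentation.

```python
def show(pipe, indent=0):
    """Print a (name, stages) tree; tuples are nested pipes, strings are jobs."""
    name, stages = pipe
    print(" " * indent + name)
    for stage in stages:
        if isinstance(stage, tuple):
            show(stage, indent + 2)  # recurse into the nested pipe
        else:
            print(" " * (indent + 2) + stage)

show(("Overall", ["welcome", ("Fun", ["havefun"]), "goodbye"]))
```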
Running the entire pipeline

The only thing you need to do at this point to run the entire pipeline is to call run() on your pipeline object.
Reporting the entire pipeline

The only thing you need to do to get a report of the entire pipeline is to call report() on your pipeline object.
Contributing
Bug Reports and Feature Requests
Please use the issue tracker for reporting bugs or feature requests.
Development
Pull requests are most welcome.
Buy the developer a cup of coffee!

If you found the utility helpful, you can buy me a cup of coffee.