Proposal: Replace bh-packages with github queries #1

Open

eraserhd opened this issue Jan 23, 2013 · 18 comments

@eraserhd
Member

In order to ease maintenance and open up black hole for use by the rest of the world, I propose the following:

Versions

bh search will deduce the versions available from the repository by searching for tags which match "v[0-9]+(?:\.[0-9]+)*" (a sketch of this check follows the list below).

Even though 'bh search' won't display the following, they can be used for installing various versions:

  • HEAD refers to the tip of the primary branch.
  • A branch may be named as a version to refer to the tip of that branch.
  • A tag may be named as a version to refer to the code at that tag.
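
As a rough sketch of that check (hand-rolled, since plain Gambit ships no regex engine; the name version-tag? is just illustrative):

(define (version-tag? tag)
  ;; #t for tags matching v[0-9]+(\.[0-9]+)*, e.g. "v1" or "v0.9.2".
  (let ((len (string-length tag)))
    (and (>= len 2)
         (char=? (string-ref tag 0) #\v)
         (let loop ((i 1) (prev-digit? #f))
           (cond ((= i len) prev-digit?)          ; must end on a digit
                 ((char-numeric? (string-ref tag i))
                  (loop (+ i 1) #t))
                 ((and (char=? (string-ref tag i) #\.) prev-digit?)
                  (loop (+ i 1) #f))
                 (else #f))))))

bh search would then keep just the tag names this predicate accepts and sort them as versions.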

Repositories

A default install will have one repository: gambit-community. The following commands could be added later.

  • bh add-repository GITHUB-URL
  • bh list-repositories
  • bh remove-repository GITHUB-URL

bh search and bh install will search through the repository list in order.

Raw URLs

bh install would be modified so that if the argument is not a package name, but a git URL, the package is installed directly from the git URL and it does not need to be
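
A sketch of the name-vs-URL dispatch this implies (the two install-from-* procedures are hypothetical placeholders, not existing bh internals):

(define (string-prefix? p s)
  (and (>= (string-length s) (string-length p))
       (string=? p (substring s 0 (string-length p)))))

(define (git-url? arg)
  ;; Heuristic: a URL-ish scheme or a ".git" suffix means "treat as git URL".
  (or (string-prefix? "git://" arg)
      (string-prefix? "http://" arg)
      (string-prefix? "https://" arg)
      (let ((len (string-length arg)))
        (and (> len 4)
             (string=? (substring arg (- len 4) len) ".git")))))

(define (bh-install arg)
  (if (git-url? arg)
      (install-from-git-url arg)        ; hypothetical: clone and install directly
      (install-from-repositories arg))) ; hypothetical: look up in repository list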

@m-i-k-a-e-l

Spontaneous response: Great!

Some reflection: I believe that while "bh search queries github for repositories under the gambit-community organization which start with gambit-." does a very good job as the primary method for downloading the packages index, we need a secondary route too for "external packages". What about creating, in "gambit-community", a repository "bh-external-packages", in which each file is an identifier of a package at a custom git or HTTP URL?

(Indeed removing https://raw.github.com/pereckerdal/bh-packages/master/pkglist makes sense as it's not maintained.)

Is there a ready GitHub API/HTTP URL for downloading the list of repos?

I hope so; that may be needed to keep this working well.
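
(For what it's worth, GitHub's v3 REST API does expose such a listing: GET https://api.github.com/orgs/<org>/repos. A rough sketch of fetching it from Gambit by shelling out to curl; JSON parsing is left out, and fetch-org-repos is a made-up name:)

(define (fetch-org-repos org)
  ;; Returns the raw JSON repo listing for an organization, as a string.
  (let ((p (open-process
            (list path: "curl"
                  arguments: (list "-s"
                                   (string-append
                                    "https://api.github.com/orgs/" org "/repos"))))))
    (let ((json (read-line p #f)))  ; #f separator = slurp to EOF
      (close-port p)
      json)))

;; e.g. (fetch-org-repos "gambit-community")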

"A default install will have one repository: gambit-community"

Wait, it's not repositories you talk about in this section but repository indexes of a GitHub organization account.

What about calling this 'package source'?

Yes, having the gambit-community repositories list as default package source makes great sense.

So then for bh commands, rather than

bh add-repository GITHUB-URL
bh list-repositories
bh remove-repository GITHUB-URL

it would be something like this (disclaimer: I didn't check BH's current packages functionality well, please correct me if there's something smarter than this already):

bh add-pkgsource [pkgsourcedef]
bh remove-pkgsource [pkgsourcedef]
bh list-pkgsources
bh list-packages [optional: pkgsourcedef]

pkgsourcedef definition:

a GIT or HTTP URL = a pkgsourcedef for one single package

pkglist:[http url] = list of packages in some particular text format we define

github-org:[package prefixes]:[identifier for github's api for listing repos] = a pkgsourcedef consisting of the repos contained in that github organization.

so the default preinstalled option would be an equivalent of something like:

bh add-pkgsource github-org:gambit-:gambit-community
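
A sketch of taking such pkgsourcedef strings apart (the tags and overall shape are just my reading of the format above):

(define (string-prefix? p s)
  (and (>= (string-length s) (string-length p))
       (string=? p (substring s 0 (string-length p)))))

(define (string-index s ch)
  (let loop ((i 0))
    (cond ((= i (string-length s)) #f)
          ((char=? (string-ref s i) ch) i)
          (else (loop (+ i 1))))))

(define (parse-pkgsourcedef s)
  (cond ((string-prefix? "pkglist:" s)
         (list 'pkglist (substring s 8 (string-length s))))
        ((string-prefix? "github-org:" s)
         (let* ((rest (substring s 11 (string-length s)))
                (i    (string-index rest #\:)))
           (and i (list 'github-org
                        (substring rest 0 i)                              ; package prefix
                        (substring rest (+ i 1) (string-length rest)))))) ; org identifier
        (else (list 'single-package s))))  ; plain git/HTTP URL

;; (parse-pkgsourcedef "github-org:gambit-:gambit-community")
;; => (github-org "gambit-" "gambit-community")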

bh install would be modified so that if the argument is not a package name, but a git URL, the package is installed directly from the git URL and it does not need to be
..pre-downloaded you mean?

What about supporting both? I thought there was at least support for direct downloading from remote over HTTP in there?

@alvatar
Member

alvatar commented Jan 23, 2013

I tend to write too much, and I think it is better to write shorter messages to explain ideas clearly. I like all this stuff of packages and repositories, but in my opinion Blackhole is starting to do too much. We don't have a macro system working well and we are already thinking about fancy features.
Besides, I believe that something that has been shown to work well in other communities is what we should use, like gem (package installation) / bundle (package handling, packaging, versioning) / rake (task automation) / rails (project creation and Rails framework specific tasks). The point is to do it more the "unix style": splitting tasks and dividing problems. In my opinion BH should only do macro expansion and modules (because those two go hand-in-hand).

I believe putting so many responsibilities on BH is a dangerous path and will make BH even more complex and unmaintainable (even though the packages part is still bearable).

@eraserhd
Member Author

I can imagine a downloader/installer/dependency constraint solver program that puts the sources within your project directory based on a Gemfile-like thing, leaving the expansion to another program (e.g. Black Hole). Does that other program still typically handle compilation, or is that unique to black-hole? What sort of directory structure would it use?

Does Black Hole have a way to map (import (srfi strings)) to ./packages/srfi/strings.scm (for example) currently?

This downloader/installer/dependency constraint solver is something I could put together easily, and it's the thing that I'm interested in the most at this point. Mostly because I can't package up my source for other people to use as easily as I'd like, and I imagine it hurts the gambit community to not know what other packages are out there or to have to struggle with getting them to work together.

@alvatar
Copy link
Member

alvatar commented Jan 24, 2013

Ruby's bundle manages an application environment and dependencies, relying on gem to install the packages from the repositories. Bundle is thus useful for distributing applications in source code format. Gem downloads/installs/handles dependencies.

One thing I'd suggest is changing the way Blackhole saves the packages, and doing it "the Gambit style" so people using plain Gambit would feel more comfortable and could use this program. That means placing the libraries in a subdirectory of "~~", such as "~~lib".

Since Blackhole is going under revision, I believe that respecting more the "minimal" philosophy behind Gambit would help more people that weren't using it before, as it currently expects completely different ways to organize your code, projects, etc.

Ideally, BH should be a scheme code processor. Then its output is fed to Gambit. Then a tool like sake that I've been developing for Scheme Spheres takes care of building a shared library, an executable or the hybrid forms that Gambit offers (like loading "o1" code in scheme, or just sticking all the code in one file, for max-performance benefits).

Actually, as Mikael knows, originally Per's idea of BH was exactly that: a scheme processor. But it grew.

One extra advantage would be that each one of us focus on one/several programs:

Blackhole: a macro expander and module system
Sbundle (?): a program to maintain a consistent environment for scheme applications
Sids (? SchemeInstallationDependencySolver): the equivalent of gem: install packages. This should install packages from this community repository, and should allow also other formats besides blackhole (in my opinion). That is, it should be code-agnostic.
Sake (it was called like this by Racketeers, and by the original program that I started working on, I guess inspired by make and rake): Automation. This becomes very important when you try to make programs for Android, but it is nice to have automation of project tasks in Scheme and not in Bash, right?
Sfusion (I gave it this name, any better ideas?): create projects based on templates. Sort of what the rails command does for Rails projects.

Actually all this idea of naming the Scheme libraries "Spheres" came from the Ruby world. Of course I'm not overly tied to my own not-so-important ideas, like these names, but I thought it was a nice metaphor to start with.

@eraserhd
Member Author

The problem I see with "~~lib" is that dependency versions can't be managed, unless we build out all of the rubygems stuff. For example, project A needs sack ~> 1.8.1 and project B needs sack ~> 2.3.0. And the rubygems gem-loading stuff is monkey-patched by bundler... argh.

Couple this with the fact that I now almost never trust the system gems and have a gemset for each project. I figured installing to the system was something we wouldn't need to deal with.

But, the cool thing is that, if we just use ##include, we can just make it (include "packages/srfi/strings#.scm"). We could make installing to the system an option if we want it.

@m-i-k-a-e-l

I can imagine a downloader/installer/dependency constraint solver program that puts the sources within your project directory based on a Gemfile-like thing

Regarding directory choice, yes, I guess the user's home directory or even the project directory are good choices now. Please note that the user may wish to have a package in a custom directory, and that a user may want to continue the development of a package, so packages may not be downloaded from a remote location in the first place; also, their contents may be sensitive, i.e. packages are not just something to toss away.

At some future point there could be the use of system-global packages, though that's way beyond the current design scope, and involves lots of complexity: primarily privileges, and secondarily locking and subtle system-specific things that are far outside of any current agenda. So designing for the user's and project's directories now sounds good.

There will probably be some case where a project wants a custom package configuration, or just a slightly modified one, like adding or removing something from the user-global one.

I can imagine a downloader/installer/dependency constraint solver program that puts the sources within your project directory based on a Gemfile-like thing, leaving the expansion to another program (e.g. Black Hole).

Spontaneously I'd guess that splitting these two mechanisms (module system and package dependency manager) into two programs would be much less powerful than keeping them under the same roof, interacting.

They can be separate files of course, and they can talk solely across a well-defined API.

Probably, the layer you describe here (package management) is under the module system, i.e. the module system goes

(package-manager-open packages-configuration) => p.m.

asking your code for a package manager instance that way, and then any time it actually wants to import a package - which is the only time it accesses packages, no? - it goes

(package-manager-import-session-open package-manager) => p.m.i.s.

(and a corresponding to release resources:

(package-manager-import-session-close p.m. p.m.i.s.) => #!void

)
The functionality your package manager exports is essentially that of a module code loader: for all modules imported into the Gambit environment, it keeps enough info in RAM to determine if a file change has happened, and it provides the functionality of loading the module source file and of generating filenames for output files.

Within an 'import session', modification time is checked at most once and then reused throughout the session, and perhaps module content is also cached as soon as it's been loaded.

I believe we go with Black Hole's current naming mechanism for modules, i.e. every module has an address consisting of a module resolver name - we can call that the package name starting now! - and a module name. (import mod) means: import the module "mod" from the current package. (import (pkg mod)) means: load the module "mod" from the root of the package "pkg".

Your package manager module does NOT do any processing of actual module file contents; this is left completely to the module system to perform.

(package-manager-add-package p.m. p.m.i.s. package-name) => package-obj
(package-manager-add-module p.m. p.m.i.s. package-obj module-name) => module-obj
(package-manager-module-updated? p.m. p.m.i.s. module-obj) => boolean
(package-manager-module-content p.m. p.m.i.s. module-obj) => string with the module's contents
(package-manager-module-related-filename-generate
 p.m. p.m.i.s. module-obj filename-extension +counter?)
=> filename-string

+counter? is a boolean that tells whether an additional counter should be added to the filename, e.g. (package-manager-module-related-filename-generate p.m. p.m.i.s. module-obj ".o" #t) leads to "module.o1" when there's no file with that name; when that exists it leads to "module.o2", when that exists to "module.o3", etc.
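
A minimal sketch of that probing behavior, assuming the files live in the current directory and using Gambit's file-exists? (the procedure here takes a plain base name instead of a module-obj, for brevity):

(define (module-related-filename-generate base ext counter?)
  ;; base like "module", ext like ".o"; with counter? = #t, probe "module.o1",
  ;; "module.o2", ... until an unused name is found.
  (if (not counter?)
      (string-append base ext)
      (let loop ((n 1))
        (let ((fn (string-append base ext (number->string n))))
          (if (file-exists? fn)
              (loop (+ n 1))
              fn)))))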

Perhaps the module system also needs to store state in a file that's package-global, so

(package-manager-package-related-filename-generate p.m. p.m.i.s. package-obj filename)
=> filename-string

Of course your package manager will take configuration options. I propose that it is not hardwired to configuration variables in the global scope, even if there are some default values for configuration there.

Regarding reading OS command line options and OS environment variables, I propose your package manager module does not access those directly.

I guess there are two kinds of commands that can come in on the command line and through OS env. vars.

  • Both via c.l. and e.v.:s: Configuration parameters, like, where to look for packages locally and remotely or what config files to load for this, called "config" below.
  • Via c.l. only: Commands for package operations, like, install, uninstall, called "operations" below.

For the functionality addressed by both of these, there should be ways to perform them from within Scheme, i.e. from the REPL. So inputs from the c.l./e.v.:s are only 'imported' into the internal format and then handled in that format.

For operations, your API can have something like

(package-manager-execute-operation/command-line p.m. command-line-args) => #!void

and then of course for every operation you do, you also have a Scheme procedure accessible to do that directly from within scheme, so the procedure above is just like a mapper-wrapper mechanism to those, so like

(package-manager-admin:install-package p.m. ....args...)
=> result, like boolean or error msg or sth
(package-manager-admin:uninstall-package p.m. ....args...)
=> result, like boolean or error msg or sth

Ordinarily, the module system's executable ("bh" etc.) is wired so that, if its first command line argument is "pkg", then it invokes |package-manager-execute-operation/command-line| with the remaining arguments only, and then exits.

For your development and testing, you can also make your own separate program ("bhpkg") whose only function is to pass on all command line arguments to make a p.m. instance through |package-manager-open| and then invoke |package-manager-execute-operation/command-line| with the p.m. and c.l.:s and then terminate.
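
So a bhpkg entry point could be as small as this (a sketch assuming the API procedures proposed above exist, with the default configuration sufficing):

(define (bhpkg-main)
  ;; Only function: make a p.m. instance and forward the command line to it.
  (let* ((config (package-manager-configuration-make-dfl))
         (pm     (package-manager-open config)))
    (package-manager-execute-operation/command-line
     pm
     (cdr (command-line)))  ; (command-line) includes the program name; drop it
    (exit 0)))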

As for config, all input configuration's ultimate destination is to become a |packages-configuration| structure, to be passed as an argument to |package-manager-open|.

(Perhaps live config changes would be a feature of the future, if so look into that then.. I'd guess such changes would be made from within scheme anyhow. Like, (package-manager-configuration-package-acquirer-add! ..) etc. - to make that work, I'd guess the packages-configuration structure would need a slot that records what p.m. instances use it, so that new settings introduced live could be propagated right.)

Probably you have a set of default configuration settings that generally work, perhaps that's a good basis, so sth like

(package-manager-configuration-make-dfl)
=> a packages-configuration structure loaded with the default settings

And to load OS env and command line args,

(package-manager-configuration-import-os-env! packages-configuration os-env) => status
(package-manager-configuration-import-command-line-args!
 packages-configuration command-line-args)
=> list (args-consumed status), status = #t = success.

Ordinarily it's the module system's task to invoke these right. As for OS env, I believe the package manager can get a verbatim copy of all of it.

As for command line args, probably the module system will have a routine that iterates through all command line args and, either for the args it cannot otherwise make sense of, or before checking a respective argument, invokes |package-manager-configuration-import-command-line-args!| to check whether the respective args are destined for the p.m. If the p.m. identifies certain args, it will report how many args it consumed through |args-consumed| and pass status = #t.

Only in the case that there are args that the procedure picks up exclusively (identified through a name like "--pkg:[some option]") whose content it finds to be wrong should it return a non-#t status. And if there's a non-#t status, then we terminate the application, because it's a fatal error, right? It's not like passing an include path that doesn't exist; it's like passing an argument that doesn't exist, which leads to termination in every app.
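
In code, the scanning loop I have in mind would be something like this (a sketch; handle-own-arg stands in for the module system's own option handling, and a (0 #t) result is taken to mean "not my argument"):

(define (scan-command-line! pm-config args handle-own-arg)
  (let loop ((args args))
    (if (pair? args)
        (let* ((r        (package-manager-configuration-import-command-line-args!
                          pm-config args))
               (consumed (car r))
               (status   (cadr r)))
          (cond ((not (eq? status #t))
                 ;; The p.m. claimed a "--pkg:..." style arg but found it
                 ;; malformed: fatal, so terminate.
                 (error "Invalid package manager argument:" (car args)))
                ((> consumed 0)
                 (loop (list-tail args consumed)))   ; the p.m. took these args
                (else
                 (handle-own-arg (car args))         ; not for the p.m.
                 (loop (cdr args))))))))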

Please note that the above API makes a closure out of your package manager: there's no reliance on globals, which is a great feature.

I believe this kind of API would give you the space to make a high-quality, robust dependency downloader and package management infrastructure, while maintaining the extremely high level of flexibility that BH provides in working at the granularity of modules and not packages.

Your package manager, you can make it programmable to some extent, so it's easy to add new methods for package and module content acquisition. I mean, this downloading of all gambit- prefixed repo names from the gambit-community GitHub account, that's a pretty good example of such a method, is it not :)

What about calling them package-acquirer:s, or do you have a better suggestion?

Such a package-acquirer would be responsible at least for the downloading of modules within a package ascribed to it. If you have any suggestion for an API and/or specification that clarifies this, please feel free to share it.

Does that other program still typically handle compilation, or is that unique to black-hole? What sort of directory structure would it use?

I don't know about any other program, but as for Black Hole, it does handle compilation.

Does Black Hole have a way to map (import (srfi strings)) to ./packages/srfi/strings.scm (for example) currently?

Yes, it's called 'package resolvers' and pretty much what you describe already exists - in legacy and master BH there's a "std" module resolver that maps to ~~/modules/std , and in master BH there's a "srfi" module resolver that maps to ~~/modules/srfi or sth. The code is in there and clear.
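
Just to illustrate that mapping (the table and procedure names here are mine, not BH's actual internals):

(define module-resolvers
  ;; resolver/package name -> root directory ("~~" = Gambit's install dir)
  '((std  . "~~/modules/std/")
    (srfi . "~~/modules/srfi/")))

(define (resolve-module spec)
  ;; spec like '(srfi strings) => ".../modules/srfi/strings.scm"
  (let ((root (assq (car spec) module-resolvers)))
    (and root
         (path-expand (string-append (cdr root)
                                     (symbol->string (cadr spec))
                                     ".scm")))))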

Whether current BH would go together with the API I outlined above, I do not know.

Note though that the API outlined above is a solid abstraction, so it is like a complete solution to the package deps loading problem (from the perspective of the module system), and this would be a great way for you to implement the package management functionality you describe.

The challenge, after this mechanism you implement is in place, is that there needs to be a module system to use it. Indeed, the API outlined above can easily be implemented in a rudimentary variant, so it would be easy to make a module system that "speaks" the API above and implements only a rudimentary variant of it; then, when your version of the package management with the API above is ready, it's just a question of loading your object file instead of the rudimentary one.

Please refer to the PM i sent you for more on current status on this discussion.

This downloader/installer/dependency constraint solver is something I could put together easily, and it's the thing that I'm interested in the most at this point. Mostly because I can't package up my source for other people to use as easily as I'd like, and I imagine it hurts the gambit community to not know what other packages are out there or to have to struggle with getting them to work together.

I agree with you, this is major.

Now, the above is a pretty specific proposal; I hope you appreciate it as a proposal draft, or at least take it into good consideration as food for thought.

Please let me know what you think about it.

In particular, if you think it looks good generally, then please give close thought to whether there's anything that leaks about it as an abstraction, and share any findings.

My hope with suggesting it above was that it would bring a sense of clarity that this is a practical and complete solution, interface-wise, to how to do package loading and management, so that perhaps with a bit of additional work we could reach a consensus that such an interface is good and that this way of doing it is completely satisfactory.

If you think this is the right way, then, I guess next steps would be:

  • Jason proposes a format for package configuration files and the organization of those files, and how the internal packages-configuration structure represents them
    (I guess it's a good idea to keep those as separate files generally)
  • If you think package-acquirers, or whatever we call them, would be a good idea - spontaneously I think so, but please share your thoughts - then propose an API for them.

Looking forward to hearing your thoughts on this.

@m-i-k-a-e-l

Re ~~ or ~~lib:

One thing I'd suggest is changing the way Blackhole saves the packages, and doing it "the Gambit style" so people using plain Gambit would feel more comfortable and could use this program. That means placing the libraries in a subdirectory of "~~", such as "~~lib".

There's an issue with write privileges here! It maps to /usr/local/Gambit-C/lib, which is writable by root only. So if it were for read-only use it would work; but it's not. Therefore, what about ~/bh or ~/.bh?

Since Blackhole is going under revision, I believe that respecting more the "minimal" philosophy behind Gambit would help more people that weren't using it before, as it currently expects completely different ways to organize your code, projects, etc.

Ideally, BH should be a scheme code processor. Then its output is fed to Gambit. Then a tool like sake that I've been developing for Scheme Spheres takes care of building a shared library, an executable or the hybrid forms that Gambit offers (like loading "o1" code in scheme, or just sticking all the code in one file, for max-performance benefits).

Actually, as Mikael knows, originally Per's idea of BH was exactly that: a scheme processor. But it grew.

For BH to be an effective tool for incremental development, it must have a bit of state beyond just code processing with a single module as scope.

Essentially it needs to keep track of identifier and macro exports, and have a list of loaded modules so it knows what to ask for (that's module-obj in the previous post).

For me the incremental dev aspect of BH is what brings all the value: you change a file somewhere and go (import your-root-module-or-some-other-module-that-depends-on-your-file-somewhere) or just (import file-somewhere) and it and any deps are correctly compiled if needed and loaded automatically.

The problem I see with "~~lib" is that dependency versions can't be managed, unless we build out all of the rubygems stuff. For example, project A needs sack ~> 1.8.1 and project B needs sack ~> 2.3.0. And the rubygems gem-loading stuff is monkey-patched by bundler... argh.

I don't know about monkey-patching by gem-loading/-er, if there's any wisdom to share in there please feel free to do so.

~~lib has the problem that it's not writable. As for using another directory like ~/bh or ~/.bh, I guess there would not be any issue, as long as you have the version name in the package's directory name or make a subdirectory with the version name. What do you say, and which way do you think is best?

Couple this with the fact that I now almost never trust the system gems and have a gemset for each project. I figured installing to the system was something we wouldn't need to deal with.

Hm, I recognize that kind of experience. Let's not make that mistake here [ending up with a system that maintains a constant sense of non-trust in its users]. =)

But, the cool thing is that, if we just use ##include, we can just make it (include "packages/srfi/strings#.scm"). We could make installing to the system an option if we want it.

##include support is a separate topic. I'm not sure I got your point with this one.

@alvatar
Member

alvatar commented Jan 24, 2013

Wow, very long post! :)

I'll state very briefly the idea I had in mind for Scheme Spheres about versioning:
Use git.
It already does all that for us: it switches between tags, so we can have ALL versions in one place. Local versions for each project could be brought directly into it.
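
A sketch of what that could mean mechanically, driving git from Gambit (the procedure name is mine; open-process/process-status per Gambit's API):

(define (git-checkout-version! repo-dir tag)
  ;; Switch the checkout at repo-dir to the given version tag, e.g. "v1.2.0".
  (let ((p (open-process (list path:      "git"
                               arguments: (list "checkout" tag)
                               directory:  repo-dir
                               stdout-redirection: #f))))
    (process-status p)))  ; waits for git to exit; 0 means a clean exit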

@alvatar
Member

alvatar commented Jan 24, 2013

About ~~lib / ~/.bh

The privileges problem is there, although I've seen many people just using it around the internets. Installing is usually done by root anyway, and Python, Ruby, and the rest of the languages install into /usr folders. Scheme shouldn't be less, right?

Anyway, for user directories (I don't like this idea very much, and it doesn't scale well for multiple users), if I had to choose, I'd choose one that says "Gambit", not "Blackhole":
~/.gambit-lib
~/.gambit

Either this or just an environment variable, and everyone chooses. That could make everyone happy.

GAMBIT_LIB_PATH

@m-i-k-a-e-l

One extra advantage would be that each one of us focus on one/several programs:

Blackhole: a macro expander and module system
Sids (? SchemeInstallationDependencySolver): the equivalent of gem: install packages. This should install packages from this community repository, and should allow also other formats besides blackhole (in my opinion). That is, it should be code-agnostic.

I'd believe the API and design as per above should satisfy this, though, as you notice it's such a close integration that they need to be in the same Gambit process and running in tandem, even if they're otherwise completely separated and pluggable.

So code-agnostic is fine.

Sbundle (?): a program to maintain a consistent environment for scheme applications

What would the purpose of this be? What's the use case?

Sake (it was called like this by Racketeers, and by the original program that I started working on, I guess inspired by make and rake): Automation. This becomes very important when you try to make programs for Android, but it is nice to have automation of project tasks in Scheme and not in Bash, right?
Sfusion (I gave it this name, any better ideas?): create projects based on templates. Sort of what the rails command does for Rails projects.

Actually all this idea of naming the Scheme libraries "Spheres" came from the Ruby world. Of course I'm not overly tied to my own not-so-important ideas, like these names, but I thought it was a nice metaphor to start with.

The kind of philosophy you reflect here, I'm completely good with.

As I perceive it at this moment, these are matters "beyond"/atop BH and the macro and package mechanism it uses. The BH macro & package level is all I'm concerned with for my purposes; I'm happy that they scale for uses on higher levels though, such as those you propose here.

I hope that the current design discussed satisfies all that as-is. So given these higher levels don't drain or constrain development focus from the BH module & packages level - I'm sure they would not; I just wanted to mention this because I can only put time into the module & packages level - I'm happy to be aware that you're working on those things and I'll be very curious to see what you come up with; there ought to be lots of relevant things in there and I'm sure it'll be of joy and good use.

I'll state very briefly the idea I had in mind for Scheme Spheres about versioning:
Use git.
It already does all that for us: it switches between tags, so we can have ALL versions in one place. Local versions for each project could be brought directly into it.

Sounds like something, at least as the primary package acquisition mechanism. Perhaps you can write a guide or spec showing what operations are done, how, and with what purpose.

About ~~lib / ~/.bh

The privileges problem is there, although I've seen many people just using it around the internets. Installing is usually done by root anyway, and Python, Ruby, and the rest of the languages install into /usr folders. Scheme shouldn't be less, right?

Good point!

Yeah, that's a good way of making packages go global. It could be the first place in the "search path" for packages.

Anyway, for user directories (I don't like this idea very much, and it doesn't scale well for multiple users), if I had to choose, I'd choose one that says "Gambit", not "Blackhole":
~/.gambit-lib
~/.gambit

Sure!

Either this or just an environment variable, and everyone chooses. That could make everyone happy.

GAMBIT_LIB_PATH

Sure!! What about that as a search path, akin to the PATH env variable for OS binaries?
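
Sketch of consuming such a variable, split on #\: like PATH (using Gambit's getenv with a default):

(define (gambit-lib-path)
  ;; "dir1:dir2:dir3" => ("dir1" "dir2" "dir3"); empty entries are dropped.
  (let* ((v   (getenv "GAMBIT_LIB_PATH" ""))
         (len (string-length v)))
    (let loop ((i 0) (start 0) (acc '()))
      (define (take) (if (> i start) (cons (substring v start i) acc) acc))
      (cond ((= i len)                     (reverse (take)))
            ((char=? (string-ref v i) #\:) (loop (+ i 1) (+ i 1) (take)))
            (else                          (loop (+ i 1) start acc))))))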

Another setting will be needed for specifying where a package should be downloaded and installed. Do you have any suggestion on how that is solved?

Packages in user directories are needed: for instance, if you start a code project and it involves making a new package that your new project uses, then you want to keep that in your user dir or even your project dir. You constantly make changes to both, and they're (at least currently) specific to exactly the thing you're working on at the moment; above all, it would not work to put those in /usr.

So it's a bit like in Unix when you have a library in your home directory and you make changes to it: generally you can make a local executable use the particular library file you're working on in your home dir, with no need to make install into /usr to test your new changes.

@eraserhd
Member Author

"Sids" and "Sbundle" are what I'm interested most in working on. I still think these two should be the same program, though I can be convinced. Mostly because I don't see a use for Sids by itself... I almost never use gem by itself, actually. And Sbundle will need to have all the repository-querying capabilities that Sids has.

Sids is not a good name, as it usually stands for Sudden Infant Death Syndrome. :)

We can use git's method, and name the master executable "sphere", and then have /usr/libexec/sphere/bundle and /usr/libexec/sphere/install which sphere will dispatch to and which we can install more commands into, so we don't need to come up with cute names for everything.
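
A sketch of that dispatch (directory layout as proposed; error handling minimal, and note Gambit's process-status reports the exit code multiplied by 256 for a normal exit):

(define (sphere-dispatch args)
  ;; "sphere bundle ..." runs /usr/libexec/sphere/bundle with the rest of the args.
  (if (null? args)
      (begin (display "usage: sphere COMMAND [ARGS...]\n") (exit 1))
      (let* ((cmd (string-append "/usr/libexec/sphere/" (car args)))
             (p   (open-process (list path: cmd
                                      arguments: (cdr args)
                                      stdout-redirection: #f
                                      stderr-redirection: #f))))
        (exit (quotient (process-status p) 256)))))  ; propagate the exit code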

@m-i-k-a-e-l

"Sids" and "Sbundle" are what I'm interested most in working on. I still think these two should be the same program, though I can be convinced. Mostly because I don't see a use for Sids by itself... I almost never use gem by itself, actually. And Sbundle will need to have all the repository-querying capabilities that Sids has.

It might be possible to make them as completely separate Gambit modules, with an extremely straightforward interface to bridge them, e.g. (make-sbundle my-sid).

Of course, if relevant, the respective one could be exported as a separate executable too - the REPL console is good and satisfactory in itself, and exports to the Unix console are good too of course, sometimes very useful.

We can use git's method, and name the master executable "sphere", and then have /usr/libexec/sphere/bundle and /usr/libexec/sphere/install which sphere will dispatch to and which we can install more commands into, so we don't need to come up with cute names for everything.

Right, so Sids is the package management module as related to in the API proposal draft above; looking forward to your response regarding it.

Re Sbundle, can you please explain the idea to me in detail? I haven't been introduced to it before at all, so I currently have no clue what you're talking about regarding it.

@alvatar
Member

alvatar commented Jan 24, 2013

Bundler, the Ruby tool I'm referring to, takes care of installing everything an application/library needs to work:
http://gembundler.com/

The most important thing in my opinion is that it encodes in a file the environment the program needs for working, mostly dependencies and versions.

You really see the usefulness of this for example when working with Heroku: you don't have to configure anything of the production environment in Heroku because everything is in Gemfile.

Sids (by the way, really bad name hahah) and sbundle could become one program, that's ok.
But certainly independent of Blackhole. It's a package manager.

Blackhole: produce Scheme code that can be consumed by Gambit. It will need state and all that, but after all that is what Blackhole does: it prepares an environment for Gambit and code to compile. That means maybe producing two files: the macro expansion environment and the code to compile, as I do now with Alexpander (actually, the syntactic tower version of BH produces 4 files). This way, Blackhole can be made to play nicely with many different possible workflows, since it is Gambit's way. So Blackhole can be used by people doing their own small projects in the old Gambit way, and can be used for iOS and Android, with their own complicated stacks, with ease.
Gama (GAmbit MAnager, or GAM): basically the functionality of gem+bundler
...the rest I think are not a matter of this discussion and I'm working on them already.

@m-i-k-a-e-l

Bundler, the Ruby tool I'm referring to, takes care of installing everything an application/library needs to work:
http://gembundler.com/

The most important thing in my opinion is that it encodes in a file the environment the program needs for working, mostly dependencies and versions.

Ok, so like a package configuration tool, sure sounds nice!

Sids (by the way, really bad name hahah) and sbundle could become one program, that's ok.
But certainly independent of Blackhole. It's a package manager.

Let's remember that in Gambit each module file can be a separate "program". The definition of a program is really blurry in Lisp. In the sense of a binary launched from the console that does only one thing, almost nothing is a program.

Sure sids and sbundle can become one "program", that's fine.

Though I very strongly argue that Black Hole needs a very close, live interaction with it; this should be clear from how the API above is formulated.

We can split it up into a sids+sbundle.scm and a bh.scm, each having separate deps, but there must be exports from sids+sbundle akin to the API proposed above, for BH to interface with the actual package data at that level of granularity and directness.

Anything else would be ineffective, or at least duplication = not smart.

Sure, thanks to this abstraction I guess you can get to a place where you use either sids+sbundle or BH individually: making sids+sbundle do valuable things would need only a little bootstrap code, and making BH do valuable things individually would be done by providing it with any other package-manager library of your choice. Of course you can get it to do anything; yes, sure, its core function is processing code into a format Gambit can take directly.

What do you say?

I'll check this idea with Per soon; he ought to have clarity about whether this is a solid abstraction, or if there's something we didn't understand about it yet that would undermine the value/make it leak.

@alvatar
Member

alvatar commented Jan 24, 2013

Well, these two things are the things I want to raise awareness of, above anything else. If you have compelling reasons to integrate the package manager into Blackhole, go for it, but:

  • Separation of concerns is a widely known recipe for success, although not the last word
  • Due to technologies like Termite, Gambit's serialization facilities, hot code swapping, remote mobile programming, and the like... we should keep the core of Blackhole independent and minimal, so it can be used for running code with macros with all these techniques. I'm currently using that for mobile development, and I argue that this being a killer feature in a growing industry with zillions of possibilities, we shouldn't miss this opportunity as BH originally missed it.

Those both lead to the same thing: simple and minimal. And we're schemers, so we sure appreciate it ;)

ps: well, you can always pack alexpander for the second point, but it would be soooo sad (and a guaranteed pain as well)

@m-i-k-a-e-l

Ok, just to make the API proposition super clear so we're all on the same page, I'll describe how BH typically uses the API:

  • BH start:
    Configure and open a package-manager instance. BH has only one, so we can call it the package-manager or just p.m.

Sth like:

(define pm-config (package-manager-configuration-make-dfl))
(package-manager-configuration-import-os-env! pm-config [os-env])
[The right package-manager-configuration-import-command-line-args! invocations,
 based on what the command line is]
(define pm (package-manager-open pm-config))
  • On any import, it does:
(define pmis (package-manager-import-session-open pm))

For any module directly or indirectly imported,

If its package was not yet imported then

(package-manager-add-package pm pmis [its package name])
and store the result

If the module was not yet imported, then

(package-manager-add-module pm pmis package-obj [the module's name within the package])

and store the result

On these two invocations, pm (aka sids+sbundle) has the opportunity to do whatever lookups and downloading it wants.

If it's an import of a module that's already been imported, then a check with

(package-manager-module-updated? pm pmis module-obj)
is done, and actual re-import is only done if #t.

To actually import a module, presuming no update was made, BH now invokes

(package-manager-module-related-filename-generate pm pmis module-obj ".o" #f)

string-appends "*" onto the result, and makes a filesystem search for such files. If any result is found, it takes the one with the highest number (".o55" etc.) and |load|:s it - this is to load the compiled version.

If no such file is found, it invokes

(package-manager-module-content pm pmis module-obj)

and pm returns the string contents of it.

BH processes all the source, and if applicable invokes |compile-file| with a filename produced through
(package-manager-module-related-filename-generate pm pmis module-obj ".o" #t) as target.

At the end of the import,

(package-manager-import-session-close pm pmis)
is invoked.

Probably some things will not be made exactly this way; for instance, perhaps the PM will keep track of which is the newest .o[N] file, because it needs to do that for |package-manager-module-updated?|'s logic to work, so it can just return the filename it finds, for convenience.
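
A sketch of that bookkeeping step (using Gambit's directory-files; the helper names are mine):

(define (string-prefix? p s)
  (and (>= (string-length s) (string-length p))
       (string=? p (substring s 0 (string-length p)))))

(define (newest-object-file dir stem)
  ;; stem like "module.o"; returns the highest-numbered "module.o<N>" in dir,
  ;; or #f when nothing has been compiled yet.
  (let loop ((fs (directory-files dir)) (best #f) (best-n 0))
    (if (null? fs)
        best
        (let* ((f (car fs))
               (n (and (string-prefix? stem f)
                       (string->number
                        (substring f (string-length stem) (string-length f))))))
          (if (and n (integer? n) (> n best-n))
              (loop (cdr fs) f n)
              (loop (cdr fs) best best-n))))))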

Please let me know what you think, I hope the above makes sense.

@m-i-k-a-e-l

The PM (aka sids+sbundle) can export these procedures through a vector with one slot for each accessor; this way, to use this particular PM implementation, only one argument needs to be passed to BH's creator/open procedure.
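
I.e. something like this (slot order illustrative):

(define (package-manager-implementation)
  ;; One value carrying the whole PM API; BH's creator/open procedure would
  ;; take just this vector and vector-ref each entry point at a fixed index.
  (vector package-manager-open
          package-manager-import-session-open
          package-manager-import-session-close
          package-manager-add-package
          package-manager-add-module
          package-manager-module-updated?
          package-manager-module-content
          package-manager-module-related-filename-generate
          package-manager-package-related-filename-generate))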

@m-i-k-a-e-l

Well, these two things are the things I want to raise awareness of, above anything else. If you have compelling reasons to integrate the package manager into Blackhole, go for it, but:

  • Separation of concerns is a widely known recipe for success, although not the last word
  • Due to technologies like Termite, Gambit's serialization facilities, hot code swapping, remote mobile programming, and the like... we should keep the core of Blackhole independent and minimal, so it can be used for running code with macros with all these techniques. I'm currently using that for mobile development, and I argue that this being a killer feature in a growing industry with zillions of possibilities, we shouldn't miss this opportunity as BH originally missed it.

Those both lead to the same thing: simple and minimal. And we're schemers, so we sure appreciate it ;)

yeah!

ps: well, you can always pack alexpander for the second point, but it would be soooo sad (and a guaranteed pain as well)

yeah

this struck a chord with me.. hmm..

so basically it's like, you want the BH core to be so pluggable that you can run it on an Android device and feed it in realtime somehow? - yeah that would be cool, and it's more than realistic to do.

is this the point, or what do you see beyond this?

so above you said

remote mobile programming

so this was about that, also you said

Termite, gambit's serialization facilities

did you have any particular idea here, if so which, or was it only indicative?

can you please describe this kind of use case further, and what possibilities come with it/that you see?

to this Android machine, would you use this mechanism for feeding it from remote with interpreted code only, or with C object binaries too? (perhaps when there's the native backend this will be less of a diff)

so, please explain what you see so I get the idea...

would it be enough that there's simply a remote tunneling "proxy" package-manager (i.e. sids+sbundle) implementation that is a client to a "server" you run on your desktop dev machine, which will send over any code the device asks for?
