Why not the following Dockerfile:
# Build stage
FROM node:alpine AS builder
WORKDIR /usr/src/app
COPY package.json package-lock.json* ./
RUN npm install
COPY . .
RUN npm run build
# Production stage
FROM node:alpine
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/dist ./dist
EXPOSE 80
CMD ["serve", "-s", "dist", "-l", "80"]
For some reason I cannot use the nginx solution above. I'm not sure if it is because I already use nginx in another container to orchestrate many containers working together. I use npm's serve package instead. I'm not sure what the downsides are. Would love to hear any comments from you.
My idea is: you might not have set the Service Account permissions in IAM & Admin. Try to locate IAM & Admin in the Google Cloud Console and make sure the Document AI user has the required permission for each resource.
There is also a quota on API usage that can produce the 403 Forbidden error; check it under IAM & Admin, then Quotas, to verify your allocation.
I think your best choice might be to try Laravel Herd or, better still, Laravel Valet. It might give you the closest thing to a production-environment feel.
I have the same problem, as do others. It only seems to be an issue with a list long enough to necessitate a fair amount of scrolling, and it becomes more evident as you scroll up and down a few times while tapping on the items.
Two ways to run Jest without npm scripts and without a global install, though you will still need npm installed (and to have run npm install):
npx jest
Or
node node_modules/jest/bin/jest
This is probably because the image by default has display: inline. You can try display: inline-block, or you can put it inside a div and rotate and float the div.
According to this page, you might need to include the --allow-unk so that unknown words can appear in the output.
Here is a scrappy workaround, though it might not be exactly what you intended:
<table>
<tr>
<td width="500" align="left">⬅️ Left</td>
<td width="500" align="right">Right ➡️</td>
</tr>
</table>
If you're looking for a solution to handle large files in a decentralized way, you might want to consider GraphDB, a distributed graph database that leverages the Origin Private File System (OPFS) for efficient file storage. Unlike GUN.js, which has a 5MB localStorage limitation, GraphDB is designed for scalable storage and real-time data modeling. You can explore it further and give it a try here: GraphDB. You can see the documentation here: GraphDB Wiki.
With json v2, you can use this struct:
type Foo struct {
    // Known fields
    A int `json:"a"`
    B int `json:"b"`

    // The type may be a jsontext.Value or map[string]T.
    Unknown jsontext.Value `json:",unknown"`
}
Official documentation and example here: https://pkg.go.dev/github.com/go-json-experiment/json#example-package-UnknownMembers
You can now customize the severity of specific events, and setting an event to ResilienceEventSeverity.None
will suppress the logs.
https://www.pollydocs.org/advanced/telemetry.html#customizing-the-severity-of-telemetry-events
services.AddResiliencePipeline("my-strategy", (builder, context) =>
{
    var telemetryOptions = new TelemetryOptions(context.GetOptions<TelemetryOptions>());
    telemetryOptions.SeverityProvider = args => args.Event.EventName switch
    {
        // Suppress logging of ExecutionAttempt events
        "ExecutionAttempt" => ResilienceEventSeverity.None,
        _ => args.Event.Severity
    };
    builder.ConfigureTelemetry(telemetryOptions);
});
During a series of redirects the page context becomes invalid: waitForNavigation can stop on one of the intermediate pages, and/or the necessary selector is missing, so waitForSelector fails with a JSHandle evaluation error.
But in my case I need to find an element on the final page after a series of redirects, and that means the page has already finished redirecting!
All the suggestions above fail with an error.
I came to a crude solution: a try/catch loop with pauses (sleeps), which simply works...
Based on comments, I decided to remove as much as I could until the error was gone, and eventually the only thing in the cpp file was int main() { return 0; } and I still had the error!
So I deleted the project and created a new editor project, which actually works now.
If you get "Cannot find 'GeneratedPluginRegistrant' in scope", remove this line:
GeneratedPluginRegistrant.register(with: self)
What does the Postgres log file say about it? It is refusing the connection, so the question is why.
I think Postgres by default has pretty tight security and as such will only allow known connections.
There is a new set of tools for performing SQL Server migrations at https://github.com/Schema-Smith/SchemaSmithyFree. They are free to use for small businesses and teams under a community license. All code is available in the repository; no black boxes and nothing hidden.
The update process, SchemaQuench, takes a state-based approach to updating tables: you define the final state you want your table to look like, and the update process does whatever alter statements are necessary to get you there. SchemaQuench can apply all of your objects: tables, functions, views, etc.
There is also a tool for reverse engineering your database(s) into the expected metadata format all of which can be checked into your repository just like any other code.
Documentation can be found at https://github.com/Schema-Smith/SchemaSmithyFree/wiki
When using androidx.appcompat.widget.Toolbar, execute ToolbarUtils.getActionMenuView(toolbar).setExpandedActionViewsExclusive(false); after inflating the menu in onCreateOptionsMenu. This seems to resolve most issues with an expandActionView call. Note that the workaround uses library-restricted features, so they may be removed at any time.
I found it, I think:
environment:
  - ENV_VARIABLE_IN_CONTAINER=$ENV_VARIABLE_ON_HOST
seems to have worked.
Try the command below; it lets you create your own package/application id:
npx @react-native-community/cli init MyTestApp --package-name com.thinesh.mytest --title "My Test App"
What does your infrastructure look like? Is the C program on one computer and the browser on another? Do you have any web server?
There are a lot of different ways.
I ultimately reached out to AWS support with this same question, and they confirmed that it is not possible: Google initiates the link between itself and Cognito and therefore does not allow you to pass any data into that pre-sign-up trigger. I understand this is likely by design, so that no one can manipulate the sign-up process by passing additional data in with the Google token. Fully understandable from Google's perspective.
In terms of what to do: it's a pretty niche use case to require linking two accounts with different emails. Moving forward I will add the user's business email to the newly created social account as an attribute from the session once they are logged in. I will end up with duplicates of some users in Cognito, but I will have to live with that. The social login account will be the one with access when the user logs in again. I could even run a lookup on users' attributes to find whether someone has a separate account when logging in.
Here is the most accurate way to find the maximum average salary:
select max(avg(salary))
from workers
group by worker_id;
You can apply a group function to another group function that retrieves a column of data values, i.e., max to avg(salary).
You can create a util method to wrap the amount:
private BigDecimal resolveScientificAmount(BigDecimal amount) {
    if (amount == null) {
        return null;
    }
    return new BigDecimal(amount.toPlainString());
}
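For illustration, here is a self-contained sketch of that helper in action (the class name and demo values are my own); it shows a value that would otherwise print in scientific notation:

```java
import java.math.BigDecimal;

public class PlainAmountDemo {

    // Rebuild the BigDecimal from its plain string form so it no
    // longer prints in scientific notation.
    static BigDecimal resolveScientificAmount(BigDecimal amount) {
        if (amount == null) {
            return null;
        }
        return new BigDecimal(amount.toPlainString());
    }

    public static void main(String[] args) {
        BigDecimal sci = new BigDecimal("1E+2");
        System.out.println(sci);                          // prints 1E+2
        System.out.println(resolveScientificAmount(sci)); // prints 100
    }
}
```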
Check which Anaconda environment you are working in, because the module might be installed in another environment. You can exit the current one by running conda deactivate and then check again whether you are able to import the module.
You're encountering this issue because you're trying to load a .pyd (native extension module) using a SourceLoader, which is intended for .py files (text). Native extensions like .pyd must be loaded using ExtensionFileLoader; otherwise, you'll get DLL or encoding-related errors.
Assuming you dynamically created this structure:
/a/temp/dir/foo/
__init__.py
bar.pyd
You can correctly import the package and the .pyd module like this:
import sys
import importlib.util
import importlib.machinery
import os

def import_pyd_module(package_name, module_name, path_to_pyd, path_to_init):
    # Ensure package is imported first
    if package_name not in sys.modules:
        spec_pkg = importlib.util.spec_from_file_location(
            package_name,
            path_to_init,
            submodule_search_locations=[os.path.dirname(path_to_init)]
        )
        module_pkg = importlib.util.module_from_spec(spec_pkg)
        sys.modules[package_name] = module_pkg
        spec_pkg.loader.exec_module(module_pkg)

    # Load the compiled submodule
    fullname = f"{package_name}.{module_name}"
    loader = importlib.machinery.ExtensionFileLoader(fullname, path_to_pyd)
    spec = importlib.util.spec_from_file_location(fullname, path_to_pyd, loader=loader)
    module = importlib.util.module_from_spec(spec)
    sys.modules[fullname] = module
    spec.loader.exec_module(module)
    return module
Your MyLoader inherits from SourceLoader, which expects a .py file and calls get_data() expecting text. This cannot be used for .pyd files, which are binary and must be handled using the built-in ExtensionFileLoader.
If you want a custom import system with dynamic .pyd loading, you can still use a MetaPathFinder, but your loader must delegate to ExtensionFileLoader.
Usage:
mod = import_pyd_module(
    package_name="foo",
    module_name="bar",
    path_to_pyd="/a/temp/dir/foo/bar.pyd",
    path_to_init="/a/temp/dir/foo/__init__.py"
)
mod.some_method()
I am using the AWS CDK for .NET and experienced the same issue as described above. I always got an error (could not find Policy node) when attempting to set the dependency as in Hayden's example.
I was able to resolve the issue by updating the Amazon.CDK.Lib NuGet package from 2.191.0 to 2.195.0.
Check out Defang.io: a single command (defang compose up) deploys your Docker Compose project to your account on AWS / GCP / DigitalOcean. It supports networking, compute, storage, LLMs, etc., and is even integrated into IDEs such as VS Code, Cursor, and Windsurf via their MCP server, so you can deploy straight from the IDE.
You can also grep everything before 'list:' and pipe to grep for email:
grep -B 1000000 "list:" example.txt | grep "[email protected]"
I don't think there is a built-in method to do this, but what's stopping you from using a site that formats JS code? js-beautify should meet your needs.
This looks like a similar issue to this question. To summarize:
Is the profile you used for signing a publicly trusted one? Publicly trusted certificates should be trusted by default, as they are included in the latest versions of Windows 11; for more information, see here.
If you're signing with a privately trusted certificate, you will need to install the root certificate on your system for it to be trusted. Here's a sample of how to do it using PowerShell.
I'm faced with the same issue. Did you ever sort this out?
I tried your code but got an error.
It should be possible with the correct API permissions for the Graph API. You need one of the following permissions for your request:
The API permissions need to be added to the App Registration that you created for your GitHub workflow. In the end it should look like the image at the bottom.
In Dapper, the order of columns matters when performing 1-to-many mappings using the Query method with multi-mapping. Here's what you need to know:
Id column position: the split-on column (usually the Id) must appear first in the column sequence for each entity; Dapper splits each row at that column.
Proper sequence example:
SELECT
    u.Id, u.Name,           -- User columns (must start with Id)
    p.Id, p.Title, p.UserId -- Post columns (must start with Id)
FROM Users u
JOIN Posts p ON u.Id = p.UserId
As shared by @Marc, Microsoft.Data.SqlClient v6+ works with .NET 8+, and as per @it-all-makes-cents, Microsoft.Data.SqlClient v5.2.3+ works with .NET 6. Thank you guys for solving this issue.
The best way to do it is to use the overrides in styles.scss https://material.angular.dev/components/checkbox/styling
:root {
  @include mat.checkbox-overrides((
    label-text-font: Open Sans
  ));
}
You're absolutely right in your intuition: threads in a thread pool are not supposed to die after running one Runnable
. The whole point of a thread pool is to reuse threads, avoiding the overhead of frequent thread creation and destruction.
This code:
public void run() {
    while (!isStopped()) {
        try {
            Runnable runnable = (Runnable) taskQueue.dequeue();
            runnable.run();
        } catch (Exception e) {
            // log or otherwise report exception,
            // but keep pool thread alive.
        }
    }
}
…is actually doing exactly what a thread pool worker thread should do. The while (!isStopped()) loop ensures that the thread:
Waits for tasks from the queue,
Executes them, and
Loops back to wait again, rather than terminating.
This loop enables the thread to stay alive, sleeping (or blocking) when no tasks are available, and waking up when a new one arrives. It doesn't die after runnable.run() – it continues waiting for the next task.
So no, the thread is not garbage collected after a single task unless the pool is being shut down or the thread terminates due to a fatal exception. Instead, it's parked (blocked) on the task queue, consuming little to no CPU.
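To see the reuse concretely, here is a minimal self-contained Java sketch (class and method names are my own, with a BlockingQueue standing in for the answer's taskQueue) where a single worker thread with the same loop shape executes several tasks:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

public class WorkerLoopDemo {

    // Runs n tasks through one worker thread, then returns how many
    // tasks that single, reused thread executed.
    static int runTasks(int n) throws InterruptedException {
        BlockingQueue<Runnable> taskQueue = new LinkedBlockingQueue<>();
        AtomicBoolean stopped = new AtomicBoolean(false);
        AtomicInteger completed = new AtomicInteger();

        Thread worker = new Thread(() -> {
            while (!stopped.get()) {      // same shape as while (!isStopped())
                try {
                    // poll with a timeout so the thread can notice a stop request
                    Runnable task = taskQueue.poll(50, TimeUnit.MILLISECONDS);
                    if (task != null) {
                        task.run();
                    }
                } catch (Exception e) {
                    // log, but keep the pool thread alive
                }
            }
        });
        worker.start();

        for (int i = 0; i < n; i++) {
            taskQueue.put(completed::incrementAndGet);
        }
        // wait until the single worker has drained the queue
        while (completed.get() < n) {
            Thread.sleep(5);
        }
        stopped.set(true);
        worker.join();
        return completed.get(); // the same thread executed every task
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("tasks run by one worker thread: " + runTasks(5));
    }
}
```

The worker never exits between tasks; it blocks on the queue and is reused for each new Runnable.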
Following the Nova upgrade guide, you need to update the register method in your application's App\Providers\NovaServiceProvider class to call the parent's register method. parent::register() should be invoked before any other code in the method:
public function register(): void
{
    parent::register();
}
Simply add half a second, then truncate.
myInstant.plusMillis(500).truncatedTo(ChronoUnit.SECONDS);
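A quick self-contained demonstration of the round-half-up behaviour (class and method names are my own):

```java
import java.time.Instant;
import java.time.temporal.ChronoUnit;

public class RoundInstantDemo {

    // Round an Instant to the nearest second: add 500 ms, then truncate.
    static Instant roundToSecond(Instant i) {
        return i.plusMillis(500).truncatedTo(ChronoUnit.SECONDS);
    }

    public static void main(String[] args) {
        // .499 rounds down, .500 rounds up
        System.out.println(roundToSecond(Instant.parse("2020-01-01T00:00:00.499Z"))); // 2020-01-01T00:00:00Z
        System.out.println(roundToSecond(Instant.parse("2020-01-01T00:00:00.500Z"))); // 2020-01-01T00:00:01Z
    }
}
```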
I'm trying to build a Discord bot that generates images, but I faced a problem. I'm getting the following error:
Configuration is not a constructor
I'm following a YouTube tutorial. I have the following code:
const { SlashCommandBuilder, EmbedBuilder } = require("discord.js");
const { Configuration, OpenAIApi } = require("openai");
const configuration = new Configuration({ apiKey: 'My Key' });
const openai = new OpenAIApi(configuration);
I found a method, but it is not complete. I wrote a T4 file as follows:
<#
// Define your model types
var models = new string[] { "Academic", "Ostad", "Course" };
#>
using Core.Interfaces;
using DataAccessLibrary.Models;
using Microsoft.AspNetCore.Mvc;

namespace Snapp.site.Controllers
{
<#
foreach (var model in models)
{
#>
    public class <#= model #>Controller : GenericController<<#= model #>>
    {
        public <#= model #>Controller(IRepository<<#= model #>> repository) : base(repository) { }
    }
<#
} // Closing foreach loop
#>
}
But I can't dynamically generate the models array.
The how was easy enough: there is a not-so-hard-to-find "Package sources" setting in the settings.
The what took me a bit longer: https://api.nuget.org/v3/index.json
$requirement.verifyMethod.name works!!
You need to use the GetKeyedService or GetRequiredKeyedService extension methods on IServiceProvider.
docker build -t my-username/my-image:1.0.0 .
Hello, could you help me? In this case, where does ToolKit come from? How is it declared at the beginning of the XAML?
Thank you.
@Shayki Abramczyk: how can I get the details of all attachments if they were attached during a Test Run / TestResults in a Test Plan?
If you use IIS you can have:
Now you have one website on IIS working with one port for both of your applications.
I had the same issue. Running
npm add next@latest
fixed it for me.
For visitors to this question: NerfStudio will export a point cloud and/or a mesh with Poisson surface reconstruction.
NerfStudio takes some Python knowledge and installation setup, but it will do the job.
I had a similar problem. I'm on Python 3.13, which I think helped, but I ended up using string annotations. I think there are issues with 3.8 type hinting.
Did you verify the Salesforce CLI installation outside of VS Code first?
You can do that by opening a terminal and running sf --version; if it doesn't recognise this command, that means you haven't added the Salesforce CLI to your PATH variable.
I'd use:
to_char(yourdate, 'DD') != '01'
Opening Project Properties, clicking "Java Compiler" on the left, then clicking the "Java Build Path" hyperlink on the right, then clicking "Java Compiler" > "Building" (or anywhere else?) made Eclipse bring up a dialog asking "Build path was changed, update?". Clicking "Yes" fixed the problem.
Maven provides documentation that helped me here: https://maven.apache.org/settings.html
The settings element in the settings.xml file contains elements used to define values which configure Maven execution in various ways, like the pom.xml, but should not be bundled to any specific project, or distributed to an audience. These include values such as the local repository location, alternate remote repository servers, and authentication information.
There are two locations where a settings.xml file may live:
- The Maven install: ${maven.home}/conf/settings.xml
- A user's install: ${user.home}/.m2/settings.xml
The former settings.xml is also called the global settings; the latter is referred to as the user settings. If both files exist, their contents get merged, with the user-specific settings.xml being dominant.
Turns out you can now use text-align-last: justify; to justify the last line of a paragraph!
Try to localize this problem: is it in the PSQL driver or the Spring dependencies? Maybe the problem is in your OS.
Windows with Postgres has localization and driver issues: something sent through PowerShell (or similar) may turn some symbols into garbage (solved by switching the system encoding to UTF-8), but '/' isn't a special symbol; you may have a \0 character in your string.
Please write more about your problem: your pom.xml (or Gradle file), your OS... Maybe you have strange getters/setters or output, or something in your repository interface. Try tracing the whole path of the object with the fileName field; the problem may be hiding anywhere. Add more System.out.println(fileName); calls in your @Service and getters/setters. If the value is already cut in the first setter, the problem is not in your code.
The issue is very likely caused by this line in your HTML:
<button id="go" (click)="open(books)" ...>
Try adding this to robots.txt:
Disallow: /_next/static/chunks/app/
Source:
https://www.academicjobs.com/dyn/failing-dynamic-routes-in-next-js
If all you are trying to do is push your local commits, your best bet would be to stash your current working-tree changes, push, and then pop the stash.
git stash -u
git push
git stash pop
Since we popped the stash, we won't clutter up the local stash list.
Yes, it's possible to speed up the training process of SpamAssassin's sa-learn by using multiple CPUs. Even though sa-learn doesn't normally support parallel execution, you can implement parallelism by simultaneously running several instances of sa-learn, each on a distinct portion of your training dataset.
It looks like ADF is not using the correct service principal; you need to edit the settings in ADF to call the correct single-user service principal.
This will manually open links in a new tab:
<div
  dangerouslySetInnerHTML={{ __html: htmlData }}
  onClick={(event) => {
    const targetLink = event.target;
    if (targetLink instanceof Element) {
      const anchor = targetLink.closest('a');
      if (anchor && anchor.href) {
        event.preventDefault();
        window.open(anchor.href, '_blank', 'noopener,noreferrer');
      }
    }
  }}
/>
I think you need Repository="Google"; the default is "Maven", and these artifacts are hosted in Google's repo.
Run:
python -c "import moviepy; print(moviepy.__version__)"
If your version is 2.1.1, use:
from moviepy import (
    ImageClip,
    TextClip,
    CompositeVideoClip,
    AudioFileClip,
    concatenate_videoclips,
    VideoFileClip,
)
from moviepy.video.fx import Crop
Your case 3 is the typical interpretation. "Pure Prolog" is one where all rules are Horn clauses, i.e. conjunctions of goals. As you stated, monotonic Prolog programs do not use nonmonotonic features like negation. Pure monotonic Prolog combines both good practices.
See also: https://en.m.wikipedia.org/wiki/Prolog#Rules_and_facts
In a browser it is possible to use bare module names by using an import map. There is an MDN guide here with detailed information about modules in JavaScript.
Consider Jitsi Meet (free/open-source) or BigBlueButton (Moodle-integrated) for live classes. Both fit tight budgets, keeping education accessible.
In some cases .htaccess causes problems such as 403 errors. Review the file and remove any extra custom lines added previously.
Hint: back up your .htaccess file before making any changes to it.
Good luck!
I like to use a Function (a Sub would work too); there are going to be more interactions with user/validation, etc. Call it whatever you like; I call mine Debug.
' JavaScript alert helper
Public Function Debug(message)
    Response.Write "<script>alert('The value " & message & "');</script>"
End Function

Debug("IndivNo: " & mIndiv & " -- SSN: " & mSSN & " -- Update Flag: " & mUpdateFlag)
This replaced:
'response.write "FLAG IS " & mUpdateFlag & "<br>"
'response.write "sIndiv IS " & mIndiv & "<br>"
'response.write "SSN IS " & mSSN & "<br>"
2025: this did not work for me. I removed the iOS folder and ran the flutter create . command, but VS Code reported "Xcode unsupported option '-G' for target 'x86_64-apple-ios10.0-simulator'". iOS should be 15.0 or higher. Trying to resolve that.
I wonder if one of the solutions wouldn't be also using something like :
<ng-container formGroupName="childGroup">
<input type="text" formControlName="childControl1">
</ng-container>
<input type="text" [formControl]="parentControl1">
<ng-container formGroupName="childGroup">
<input type="text" formControlName="childControl2">
</ng-container>
That way we would get the ordering split. Though I am looking for an answer as to whether it is allowed to have two separate formGroupName bindings for the same group in one template. I am experimenting right now. What do you think, guys?
I ran into the exact same issue and path... but without getting answers to your questions... I am on a Pi Zero 2 W. Did you get anywhere?
You cannot directly edit the reclaim policy, since changing the StorageClass's reclaimPolicy will only affect newly created PVs; it will not change the persistentVolumeReclaimPolicy of your existing PVs.
The recommended approach is to create a new StorageClass with the "Retain" policy, then patch your existing PVs with the commands below:
List the PersistentVolumes in your cluster:
kubectl get pv
Choose one of your PersistentVolumes and change its reclaim policy:
kubectl patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
Verify that your chosen PersistentVolume has the right policy:
kubectl get pv
For further reference you can refer to this documentation.
I don't know about an RTEMS CSP implementation. But maybe I can answer the rest of the questions: The tricky part is most likely CAN. There are only very few CAN drivers in RTEMS at the moment. For all other parts that are usually offered by an operating system like Zephyr or FreeRTOS, there should be equivalents. RTEMS also implements quite a lot of the POSIX API so that you most likely can re-use big parts of the Linux port.
You need to add
import { Autoplay } from "swiper/modules";
and use the module: modules={[Autoplay]}
This is only a partial answer, but it makes some progress towards a faster way to find the required optimal expression (if it exists), which I think is now worth sharing. These are mostly heuristics that put bounds on the search parameters a, b, c, narrowing down the search window, plus a couple of new functional forms.
The most interesting new serendipitous discovery is that a single xor based pattern can apparently catch ~80% of known valid solutions (they would also be found by other rules). One rule to bind them all...
((x - c) ^ b) < a // -xor<
The objective is to reformulate the conditional range test.
((x >= A) && (x < B)) || (( x >= C) && (x < D))
where A < B < C < D and C > B+1 the expression can also be written more symmetrically:
(x >= A) ^ (x < B) ^ (x >= C) ^ (x < D)
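The equivalence of the two forms can be checked exhaustively for small word widths; here is a minimal Java sketch of such a check (my own illustration, not part of the original search code):

```java
public class RangeXorCheck {

    // Original two-interval test: A <= x < B or C <= x < D.
    static boolean orForm(int x, int A, int B, int C, int D) {
        return (x >= A && x < B) || (x >= C && x < D);
    }

    // Symmetric rewrite as the xor of the four comparisons.
    static boolean xorForm(int x, int A, int B, int C, int D) {
        return (x >= A) ^ (x < B) ^ (x >= C) ^ (x < D);
    }

    // Verify the identity for every x and every A < B, C > B+1, C < D in [0, N].
    static boolean holdsForAll(int N) {
        for (int A = 0; A < N; A++)
            for (int B = A + 1; B < N; B++)
                for (int C = B + 2; C < N; C++)
                    for (int D = C + 1; D <= N; D++)
                        for (int x = 0; x < N; x++)
                            if (orForm(x, A, B, C, D) != xorForm(x, A, B, C, D))
                                return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holdsForAll(16)); // prints true
    }
}
```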
I have made a full brute-force attack on the entire range of valid possibilities for word widths 3, 4, 5 & 6, and from those results it is possible to offer some heuristics and a couple of new canonical forms. It may be possible to do iterative deepening starting with these partial answers as cribs (work in progress).
@njuffa originally suggested:
((x & b) - c) < a // and-<
((x | b) - c) < a // or-<
((x ^ b) - c) < a // xor-<
My notation for the rules shows the logic function written longhand and the arithmetic ones as themselves. I concur that `or` and `and` find equivalent solutions, although one form might be preferred if one of a, b, c is zero. The corresponding equality tests do not create any new solutions (though they might have a simpler form).
To get a feel for the patterns I histogrammed the frequency of occurrence of the values A, B, C, D in valid solutions. Brute-force search of the entire parameter space is O(N^7), so the problem gets out of hand quickly. The biggest problem is that, to be sure of finding every solution, all the failures have to be searched to exhaustion (very time consuming). I settled on a crude approximation that appears to get about a 99% success rate.
These are summary results using Njuffa's original functions (I'll fill in gaps when I have results).
| log2N | N | possible | solutions |
|---|---|---|---|
| 3 | 8 | 35 | 10 |
| 4 | 16 | 1365 | 119 |
| 5 | 32 | 31465 | 852 |
| 6 | 64 | 595665 | ? |
| 7 | 128 | 10334625 | ? |
| 8 | 256 | 172061505 | ? |
testN 3 8 count = 35/4096 fraction = 0.008545 solutions 10
N A B C D
0 : 5 0 0 0
1 : 3 2 0 0
2 : 2 4 0 0
3 : 0 3 1 0
4 : 0 1 2 1
5 : 0 0 3 1
6 : 0 0 4 3
7 : 0 0 0 5
testN 4 16 count = 1365/65536 fraction = 0.020828 solutions 119
N A B C D
0 : 44 0 0 0
1 : 16 8 0 0
2 : 14 16 0 0
3 : 8 11 1 0
4 : 20 24 8 1
5 : 5 10 6 1
6 : 2 15 10 3
7 : 0 10 6 5
8 : 5 11 9 18
9 : 3 3 10 1
10 : 2 5 15 3
11 : 0 3 10 7
12 : 0 3 23 25
13 : 0 0 9 12
14 : 0 0 12 20
15 : 0 0 0 23
testN 5 32 count = 31465/1048576 fraction = 0.030007 solutions 852
[snip]
My canonical functions are:
bool mytarget(uint8_t x)
{
    return ((x >= A) && (x < B)) || ((x >= C) && (x < D));
}

bool myproto_and(uint8_t x, unsigned a, unsigned b, unsigned c)
{
    return ((x & b) - a) < c; // try a = 0 first
}
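A brute-force search over a, b, c for the and form (using the parameter convention of myproto_and above: mask b, subtract a, threshold c) can be sketched as follows. This is my own illustrative Java port of the C-style pseudocode, with & 0xFF emulating 8-bit unsigned arithmetic:

```java
public class AndFormSearch {

    // Reference two-interval predicate.
    static boolean target(int x, int A, int B, int C, int D) {
        return (x >= A && x < B) || (x >= C && x < D);
    }

    // Search for a, b, c such that ((x & b) - a) < c (unsigned 8-bit)
    // matches the target for every x in [0, N). Returns {a, b, c} or null.
    static int[] findAndForm(int N, int A, int B, int C, int D) {
        for (int b = 0; b < 256; b++)
            for (int a = 0; a < 256; a++)
                for (int c = 0; c < 256; c++) {
                    boolean ok = true;
                    for (int x = 0; x < N && ok; x++) {
                        // & 0xFF emulates the unsigned 8-bit wraparound of the subtraction
                        boolean cand = (((x & b) - a) & 0xFF) < c;
                        if (cand != target(x, A, B, C, D)) ok = false;
                    }
                    if (ok) return new int[]{a, b, c};
                }
        return null; // no and-form solution for this interval pair
    }

    public static void main(String[] args) {
        // [2,4) u [6,8) on 3-bit x is exactly "bit 1 set", so a solution exists
        int[] s = findAndForm(8, 2, 4, 6, 8);
        System.out.println(s == null ? "none" : "a=" + s[0] + " b=" + s[1] + " c=" + s[2]);
    }
}
```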
We are in effect trying, by bitwise manipulation, to create a trap against 0 or 255 so that both original target intervals alias onto the same narrow domain, roughly of the form:
abs(x - e) < E
The heuristics I used to make everything go faster are fairly basic (bounds on ranges etc.) but seem to work.
Crucially, the code accepts and prints the first solution found in FAST mode, but it can also print all solutions or, more usefully, the first found plus any where one of `a,b,c` is zero.
The prototype solutions are tested in order of how frequently they find a solution. It turns out that one novel formula accounts for 85% of the solutions found, and almost all solutions appear to be findable using xor in combination with arithmetic operators. Only a tiny fraction require and/or. The table shows percentages up to lg2N = 5.

| -xor> | -xor< | xor-< | -and> |
|---|---|---|---|
| 85 | 10 | 5 | <1 |
Additional heuristics on the initial values of the parameters a, b, c (a and c are additive constants; b is a bitmask applied by a logical operator):
1. Try a = 0 first (this saves the subtraction if a solution exists)
2. Try c = (B-A)+(D-C) first (it seems to often be right)
3. If (B-A) == (D-C), try also c = (B-A), c = 1, or c = 9
Empirically it seems that rules 2 & 3 taken together cover all of the solutions for c < N/2.
A hack here: try b = c = 255 - (B-A) - (D-C) - 11 and loop on c++ until c == 0, and loop on b++ over all 256 states.
Originally the loops were over A, B, C, D in strict order. But looking at the solution tables there is merit in altering this so that the widest region, where a solution is most likely to be found, is probed first.
A = 0  ; A < N-5; A++
D = 255; D > A+4; D--
B = A+1; B < D-2; B++
C = D-1; C > B+1; C--
FAST mode accepts the first solution found but still probes all failing cases to exhaustion.
CRUDE mode assumes that if no solution is found for A=0, D=N-1 and any B,C, then there are no solutions. (Still experimental, and it breaks the total count.)
I am worried by the asymmetry of the table. `0` and `1` symbols feel like they should be interchangeable. So the deficit of entries in the higher half of the table (and maybe the lower half too) must be due to missing solutions not found by the set of rules presently being used.
I realise that is too simplistic since the bits have a natural ordering in a number.
Looking for new rules to find the missing cases is worthwhile.
In addition I confirm that `AND &` vs `OR |` achieve essentially the same results. `XOR ^` gets some new ones. None of the other operator combos found anything new. So search focusses on just that pair.
In the search for new simple expressions to fatten up the solutions table I stumbled upon the following.
return (((x) - a) ^ b) < c; // -xor<
return (((x) - a) ^ b) > c; // -xor>
Swapping the order of operations generates a very decent crop of new solutions previously missing. In fact -xor< aka `xor_lt` finds almost as many solutions as `and` and `or` put together! There may be other fast functional forms using "~", ">>" or "*" that might also get some more. That is possibly another question entirely.
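To see why xor is so productive here, note that the true-set of `(x ^ b) < a` is `{b ^ k : k < a}`, which need not be a single interval. A quick illustrative check (my own toy example, not taken from the post's tables):

```python
# With b = 4 and a = 12, the set {4 ^ k : k < 12} is [0,8) ∪ [12,16):
# a single xor plus one compare captures two disjoint intervals.
def target(x):
    return (0 <= x < 8) or (12 <= x < 16)

assert all(((x ^ 4) < 12) == target(x) for x in range(256))
```

Because xor with a constant is a bijection, the true-set always has exactly `a` elements, but those elements can split across interval boundaries in exactly the way this search needs.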
This is a table of how the latest code performs with the additional xor based tests added (which slows it down somewhat). There are hints that `xor_gt` becomes more important as the number of bits increases.
`xor` feels like it ought to be optimal for this trick since it is information preserving. The application of the functions has been ordered to maximise early cutoffs. New candidate functions are added to the end of the list.
lg2N | N | count | solutions | time/s | and | xor | xor_lt | xor_gt |
---|---|---|---|---|---|---|---|---|
3 | 8 | 35 | 19 | 6 | 5 | 5 | 0 | 9 |
4 | 16 | 1365 | 321 | 404 | 53 | 78 | 49 | |
5 | 32 | 31465 | 3040 | 15807 | 385 | 622 | 825 | 1199 |
6 | 64 | >150000 |
The crude approximation avoids exhaustive searches on hopeless cases. This risks missing some solutions but seems not to, at least on the range of N that I have been able to test fully.
The vast majority will be fails. There are only about 172 million possible solutions for 8 bits out of a state space 2^32 ~= 4 billion.
Trading accuracy for speed with the latest go-faster stripes all applied gives
lg2N | solutions | time /s |
---|---|---|
3 | 19 | 0 |
4 | 318 | 10 |
5 | 2769 | 147 |
6 | 21400 | 75690 |
Adding one extra functional match
return ((x + a) ^ b) < c; // +xor<
increased those scores to 20, 352, 3366 & 24842 respectively.
I'm still looking for any other functions that add to the score. This is the unpolished experimental C++ source code for the brute-force search. It iterates from bit depth 3 to 6. The run time of anything beyond 32 (lg2N = 5) is huge, ~10-12 hours for 64 (beyond that, impossible without a cluster).
#include <cstdio>   // printf
#include <cstdint>  // uint8_t
#include <ctime>    // time
unsigned int A, B, C, D, N; // globals used by mytarget and brute force
bool mytarget(uint8_t x)
{
return ((x >= A) && (x < B)) || ((x >= C) && (x < D));
}
#define DEBUGPRINT (1)
#define CASE (1)
#define FAST (1)
//#define CRUDE (1)
bool symmetric_form(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return (x >= A) ^ (x < B) ^ (x >= C) ^ (x < D); // naughty
}
// convention used
// a is the target for comparisons
// b is the bitmask
// c is the add/subtract constant
// return ((x [^&|] b) [-+*>>] c) [<=>] a;
// return ((x [-+*>>] c) [^&|] b) [<=>] a;
//
// heuristic for 'a' almost works - order of magnitude faster but misses 3 solutions out of 321 for bits 4
bool myproto_xor(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x ^ b) - c) < a;
}
bool myproto_xor_lt(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x - c) ^ b) < a;
}
bool myproto_xor_gt(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x - c) ^ b) > a;
}
bool myproto_plus_xor_lt(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x + c) ^ b) < a;
}
bool myproto_xor_eq(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x ^ b) - c) == a;
}
bool myproto_xor_and(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x ^ b) & b) == a;
}
bool myproto_and_xor(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x & b) ^ c) == a;
}
bool myproto_and(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x & b) - c) < a;
}
bool myproto_and_lt(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x - c) & b) < a;
}
bool myproto_and_gt(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x - c) & b) > a;
}
bool myproto_and_eq(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x & b) - c) == a;
}
bool myproto_or(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x | b) - c) < a;
}
bool myproto_or_eq(uint8_t x, unsigned a, unsigned b, unsigned c)
{
return ((x | b) - c) == a;
}
unsigned bruteforce_bc(unsigned a, const char* name, bool (*target)(uint8_t), bool (*proto)(uint8_t, unsigned, unsigned, unsigned), bool verbose = false)
{
unsigned b, b0, c, c0, pass = 0;
b0 = B - A + D - C+11;
b = 255 - b0;
b = b0 = 0;
// for (b = 0; b < 256; b++)
do
{
// for (c = 0; c < 256; c++)
c0 = B - A + D - C + 10;
c = 255-c0; // C - B;
c = c0 = 0;
do
{
uint8_t x = 0;
bool ref, res;
do
{
ref = target(x);
res = proto(x, a, b, c & 0xff);
if (res != ref) break;
} while (++x);
if ((ref == res) && (x == 0))
{
pass++;
#if DEBUGPRINT
if (verbose || (pass == 1) || (a == 0) || (b == 0) || c == 0)
if (DEBUGPRINT) printf("\nABCD %3u %3u %3u %3u [%02x,%02x,%02x,%02x] %s %3i abc %3u %3u %3u [%02x,%02x,%02x]", A, B, C, D, A, B, C, D, name, pass, a, b, c, a, b, c);
#endif
#if FAST==1
return pass;
#endif
}
c = (c + 1) & 0xff;
} while (c != c0);
b = (b + 1) & 0xff;
} while (b != b0);
return pass;
}
unsigned bruteforce(const char *name, bool (*target)(uint8_t), bool (*proto)(uint8_t, unsigned, unsigned, unsigned), bool verbose=false)
{
unsigned a, a0, pass = 0;
// if(DEBUGPRINT) printf("\n function %s ", name);
a0 = a = (B - A) + (D - C);
if (bruteforce_bc(a, name, target, proto)) return 1;
if ((B - A) == (D - C))
{
if (bruteforce_bc(a >> 1, name, target, proto)) return 1;
if (bruteforce_bc(1, name, target, proto)) return 1;
}
a = 256 - a;
// a = 0; // reinstate this line for full brute force mode
do
{
if (bruteforce_bc(a, name, target, proto)) return 1;
#ifdef CRUDE
break;
#endif
} while (((++a) & 0xff) != 0); // caution ad hoc may need tweaking
#if (DEBUGPRINT)
if (pass) printf("\n %s has %i solutions\n", name, pass);
#endif
return pass;
}
unsigned tryall(bool (*target)(uint8_t))
{
unsigned pass = 0;
pass += bruteforce("-xor>", target, myproto_xor_gt);
if (FAST && pass) return pass;
pass += bruteforce("-xor<", target, myproto_xor_lt);
if (FAST && pass) return pass;
pass += bruteforce("xor-<", target, myproto_xor);
if (FAST && pass) return pass;
pass += bruteforce("-and>", target, myproto_and_lt);
if (FAST && pass) return pass;
pass += bruteforce("+xor<", target, myproto_plus_xor_lt);
if (FAST && pass) return pass;
// insert new prototype to test here
// pass += bruteforce("-xor=", target, myproto_xor_eq);
// if (FAST && pass) return pass;
return pass;
}
void testN(int lgN)
{
unsigned int i, j, k, l;
unsigned long long N4 = N, count = 0, passcount =0;
unsigned hist[4][256]{0};
time_t start, end;
N = 1 << lgN;
N4 *= N4;
N4 *= N4;
printf("testN %i %u\n", lgN, N);
if (FAST) bruteforce("symmetric xor", mytarget, symmetric_form);
start = time(NULL);
for (i = 0; i < N - 4; i++)
{
A = i;
for (l = N-1; l >= A+3; l--)
{
D = l;
for (j = i + 1; j < l - 1; j++)
{
B = j;
for (k = j + 2; k < l; k++)
{
unsigned pass;
C = k;
count++;
// if (lgN < 5) printf("\n%I64u: ABCD[ %x %x %x %x ] #", count, i, j, k, l);
pass = tryall(mytarget);
if (pass)
{
hist[0][i]++;
hist[1][j]++;
hist[2][k]++;
hist[3][l]++;
passcount++;
}
}
}
}
}
end = time(NULL);
printf("\ncount = %llu/%llu fraction = %f solutions %llu in %llu seconds\n", count, N4, ((float) count)/N4, passcount, end-start);
printf(" N\tA\tB\tC\tD\n");
for (int i = 0; i < 256; i++)
{
unsigned sum=0;
for (int j = 0; j < 4; j++) sum += hist[j][i];
if (sum)
{
printf("\n%3i : ", i);
for (int j = 0; j < 4; j++) printf(" %4i ", hist[j][i]);
}
}
printf("\n");
}
int main()
{
for (int i = 3; i < 8; i++) testN(i);
}
Testing to bit depth 5 seems to be sufficient to find new rules and validate their effectiveness (and it is fairly quick, ~3 mins on an i5-12600). Suggestions for improvements to speed and accuracy, and any new rules that find additional valid solutions, are welcome.
I am working with the .NET CDK and faced the same problem.
I was able to resolve it by using Bucket.FromBucketArn() to import the bucket into the stack instead of referencing it directly, as suggested in: https://github.com/aws/aws-cdk/issues/11245
If you haven't already, you should format your form response sheet as a Sheets Table.
Once that's complete, you can quickly group by the batch number, and all the rows with the same batch number will be grouped together. You can even adjust the aggregate values to see the time between start and finish.
Save this grouped view for quick reference through the table's dropdown menu.
I am also getting this error, and I am also using scriptType: 'pscore'. I will make an update and report back. If pscore isn't available on a Windows agent, I would think that is a bug. In my templates I use this so that it can work in both Windows and Linux agent pools.
Edit: I know I posted this as an answer; if anyone feels it should be a comment on the Ubuntu answer, let me know.
So, if it annoys you, you could just leave some empty lines at the top and write your code under them. Sadly, there is no way to do it in settings.
Just in case: the -loop option must be specified after the input (-i foo).
I have the exact same code (not including the ObBnClickedButton1) for an About menu item to pop up an About modal dialog from the main program dialog window. I see the ID value coming from the system is 0XF095. That is AFX_IDS_UNKNOWNTYPE from <afxres.h>. If I comment out the check for the menu in this method then the about dialog does display properly. So how is the system sending me an UNKNOWNTYPE?
//A selection from the menu (Just "About" for now)
void CguiDlg::OnSysCommand(UINT nID, LPARAM lParam)
{
//the About menu item was selected and this should popup the About Dialog here
if ((nID & 0xFFF0) == IDM_ABOUTBOX)
{
CAboutDlg dlgAbout;
dlgAbout.DoModal();
}
else
{
CDialogEx::OnSysCommand(nID, lParam);
}
}
I implemented one solution. The HTML doesn't include target="_blank" on the anchor tags, so we need to ensure all anchor tags in the HTML string have target="_blank" (and optionally rel="noopener noreferrer") added before rendering.
function addTargetBlank(htmlString) {
  const parser = new DOMParser();
  const doc = parser.parseFromString(htmlString, 'text/html');
  const links = doc.querySelectorAll('a[href]');
  links.forEach(link => {
    link.setAttribute('target', '_blank');
    link.setAttribute('rel', 'noopener noreferrer');
  });
  return doc.body.innerHTML;
}
const newHtml = addTargetBlank(rawHtmlString);
const safeHtml = DOMPurify.sanitize(newHtml);
The __subclasses__() feature of Python can be totally disabled by using the no-subclasses library, which can be installed via pip install no-subclasses.
An example:
>>> import no_subclasses
>>> len(object.__subclasses__()) # Without no_subclasses library
313
>>> object.__subclasses__()[:5]
[<class 'type'>, <class 'async_generator'>, <class 'bytearray_iterator'>, <class 'bytearray'>, <class 'bytes_iterator'>]
>>>
>>> no_subclasses.init() # Enable no_subclasses library
>>> object.__subclasses__()
[]
>>> int.__subclasses__()
[]
>>> attack_expr = "(1).__class__.__base__.__subclasses__()"
>>> safe_scope = {"__builtins__":{}} # Cannot call any built-in functions (except some safe functions, e.g. (1).__class__)
>>> eval(attack_expr,safe_scope)
[]
Additionally, I've submitted an issue proposing the deprecation of the __subclasses__ feature to CPython, but it was rejected.
Note that I'm the developer of the no-subclasses library.
It seems the answer was to add opens db.migration; to the module-info.java file.
It had the right path and could find the file; adding this allowed it to run the migration files.
I wish I understood the module-info.java file better, rather than having to rely on Google to stumble into an answer.
Both are different status codes:
SalesForce API failed (Unauthorized):
a:1:{i:0;O:8:"stdClass":2:{s:7:"message";s:26:"Session expired or invalid";s:9:"errorCode";s:18:"INVALID_SESSION_ID";}} - This one is Status Code: 401, which indicates that the API request lacks valid authentication credentials: the session ID or access token is expired or invalid, so a new one must be obtained. Related link - https://help.salesforce.com/s/articleView?id=001122773&type=1
SalesForce API failed (Status: 400)
a:2:{s:5:"error";s:13:"invalid_grant";s:17:"error_description";s:22:"authentication failure";} -
This one is Status Code: 400 with invalid_grant, which means the authentication (token) request itself was rejected. This can happen for a variety of reasons, including invalid credentials, a missing security token, or IP restrictions. Related link - https://help.salesforce.com/s/articleView?id=001122773&type=1
Since all nuitka/cython programs rely on python3x.dll or libpython3.x.so, they all store low-level Python object structures (PyObject *) in memory, so they can easily be injected using DLL injection tools.
On the other hand, all Python objects have magic methods (e.g., __add__). Therefore, if I create a proxy object that wraps a target object, and on every magic-method call the proxy logs the call, invokes the same method on the target object, returns the result, and recursively proxies the resulting object, then raw Python code can be generated from these call logs.
This is the principle of my pyobject.objproxy library and its high-level wrapper pymodhook, which can be installed via pip install pymodhook.
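A minimal sketch of the logging-proxy idea (a toy of my own, not the library's actual implementation): record each magic-method call, then forward it to the wrapped target.

```python
class LoggingProxy:
    """Toy proxy: log magic-method calls, then forward to the target."""
    def __init__(self, target, log):
        self._target = target
        self._log = log

    def __add__(self, other):
        self._log.append(("__add__", other))
        # a full implementation would re-wrap the result in another proxy
        return self._target + other

    def __getattr__(self, name):
        # only called for attributes not found on the proxy itself
        self._log.append(("getattr", name))
        return getattr(self._target, name)

log = []
p = LoggingProxy(10, log)
result = p + 5
# the accumulated log can later be replayed as generated source code
```

The real library generalizes this to the full set of special methods and keeps a chain of variable names so the log replays as valid Python.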
An example hooking numpy
and matplotlib
:
from pyobject import ObjChain
chain = ObjChain(export_attrs=["__array_struct__"])
np = chain.new_object("import numpy as np", "np")
plt = chain.new_object("import matplotlib.pyplot as plt", "plt",
export_funcs=["show"])
# Testing the pseudo numpy and matplotlib modules
arr = np.array(range(1, 11))
arr_squared = arr ** 2
print(np.mean(arr))
plt.plot(arr, arr_squared)
plt.show()
# Display the auto-generated code calling numpy and matplotlib libraries
print(f"Code:\n{chain.get_code()}\n")
print(f"Optimized:\n{chain.get_optimized_code()}")
The output:
Code: # Unoptimized code that contains all detailed access records for objects
import numpy as np
import matplotlib.pyplot as plt
var0 = np.array
var1 = var0(range(1, 11))
var2 = var1 ** 2
var3 = np.mean
var4 = var3(var1)
var5 = var1.mean
var6 = var5(axis=None, dtype=None, out=None)
ex_var7 = str(var4)
var8 = plt.plot
var9 = var8(var1, var2)
var10 = var1.to_numpy
var11 = var1.values
var12 = var1.shape
var13 = var1.ndim
...
var81 = var67.__array_struct__
ex_var82 = iter(var70)
ex_var83 = iter(var70)
var84 = var70.mask
var85 = var70.__array_struct__
var86 = plt.show
var87 = var86()
Optimized: # Optimized code
import numpy as np
import matplotlib.pyplot as plt
var1 = np.array(range(1, 11))
plt.plot(var1, var1 ** 2)
plt.show()
Though the code from raw call logs is cluttered, like that generated by IDA Pro, it can be optimized via a DAG algorithm (see details in README.md).
Additionally, programs hooked by my current pyobject.objproxy library run about 40x slower than normal (as measured by python -m pyobject.tests.test_objproxy_perf).
For DLL injection, the injected DLL first searches the loaded DLLs from python31.dll to python332.dll (e.g. python313.dll), then calls PyImport_ImportModule("__hook__"). This requires __hook__.py and other modules to be placed in the same directory as the EXE beforehand.
For usage instructions for this toolchain, see the README.md of the pymodhook library.
Note: I am the developer of this reverse engineering toolchain.
The Dialog component has the open prop, and it depends on the open value from handleDialog. You need to set open to true for the Dialog component to render. Add it to modifiedProps in your test.
As you have correctly found, \underline is not a currently supported MathText command. But matplotlib's MathText is not the same as LaTeX. To instead use LaTeX, you can do, e.g.,
import matplotlib.pyplot as plt
# turn on use of LaTeX rather than MathText
plt.rcParams["text.usetex"] = True
plt.text(.5, .5, r'Some $\underline{underlined}$ text')
plt.show()
You may have issues if your TeX distribution does not ship with the type1cm package, in which case you may want to look at, e.g., https://stackoverflow.com/a/37218925/1862861.
Do you have a working solution for the Xerox accounting? I want to build my own, and my new copiers don't use the same account/jbaserve login, and I can't find any info. Thanks.
You saved my life; I couldn't figure out why mine was failing with a 403 error. I too was including the colons ":". Removing them got me cleared. Thank you.
Did you solve this? I have the same issue.
I was having this same issue with a fresh install of Verdaccio version 6.1.2. As hinted at by @juanpicado in the comments, this seems to be a problem when you have previously published a package to an upstream registry, such as npmjs.org.
To get around this, you need to change the version number in your package.json file to a version that doesn't exist on an upstream registry.
I could solve the problem by creating simple arrays using numpy for surface plotting, and by switching from Mesh3D to go.Surface. Any solutions concerning the latter are still welcome.
The complete code:
import pandas as pd
import plotly.graph_objects as go
import numpy as np
data = pd.read_excel(r"C:\Users\canoe\OneDrive\Asztali gép\Malin study (biathlon)\course profile plot_male.xlsx", sheet_name='Munka1')
x = data['x'].values
y = data['y'].values
z = data['alt'].values
i = data['incl'].values
X = np.vstack([x, x])
Y = np.vstack([y, y])
Z = np.vstack([[350] * len(x), z -0.1])
I = np.vstack([i, i])
fig = go.Figure()
fig.add_trace(go.Scatter3d(
x=x,
y=y,
z=z,
mode='lines',
line=dict(
color=i,
width=13,
colorscale='Jet',
colorbar=dict(
title=dict(
text='Incline [deg]',
font=dict(size=14, color='black')
),
thickness=20,
len=0.6,
tickfont=dict(size=12, color='black'),
tickmode='linear',
tickformat='.2f',
outlinewidth=1,
outlinecolor='black'
)
)
))
fig.add_trace(go.Surface(
    z=Z,
    x=X,
    y=Y,
    surfacecolor=I,
    colorscale='jet',
    cmin=min(i), cmax=max(i),
    showscale=False,
    opacity=0.7
))
fig.add_trace(go.Scatter3d(
x=x,
y=y,
z=[350] * len(x),
mode='lines',
line=dict(color='black', width=3),
name="Profile Curve"
))
for j in range(0, len(x), 20):
    fig.add_trace(go.Scatter3d(
        x=[x[j], x[j]],
        y=[y[j], y[j]],
        z=[350, z[j]],
        mode='lines',
        line=dict(color='dimgray', width=2),
        opacity=0.6,
        showlegend=False
    ))
fig.update_layout(
title="3D Course Profile Colored by Incline",
template='plotly_white',
scene=dict(
xaxis=dict(title='Meters north from start position', showgrid=True, zeroline=False),
yaxis=dict(title='Meters east from start position', showgrid=True, zeroline=False),
zaxis=dict(title='Altitude [m]', showgrid=True, zeroline=False),
aspectmode='manual',
aspectratio=dict(x=1.25, y=1, z=0.7)
),
margin=dict(l=0, r=0, b=0, t=50)
)
fig.show()
Version 2, using matplotlib.pyplot: a simpler solution, but without edge colouring:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import cm
import matplotlib.ticker as ticker
data = pd.read_excel(r'')
X_1d = data['x'].values
Y_1d = data['y'].values
Z_1d = data['alt'].values
I_1d = data['incl'].values
fig = plt.figure(figsize=(8, 5))
ax = fig.add_subplot(111, projection='3d')
X = np.vstack([X_1d, X_1d])
Y = np.vstack([Y_1d, Y_1d])
Z = np.vstack([Z_1d, Z_1d + (max(Z_1d)-min(Z_1d))]) #[350]*len(X_1d), Z_1d + 50 (and below twice)
I = np.vstack([I_1d, I_1d])
norm = plt.Normalize(I.min(), I.max())
colors = cm.jet(norm(I))
ax.plot(X_1d, Y_1d, Z_1d + (max(Z_1d)-min(Z_1d)), color='black', linewidth=2)
surf = ax.plot_surface(X, Y, Z, facecolors=cm.jet(norm(I)), rstride=1, cstride=1, linewidth=0, antialiased=True, alpha=0.7)
ax.plot_wireframe(X, Y, Z, color='k', linewidth=1, alpha=1.0)
mappable = cm.ScalarMappable(norm=norm, cmap=cm.jet)
mappable.set_array(I)
cbar = fig.colorbar(mappable, ax=ax, shrink=0.5, alpha=0.7, aspect=5, extend='both', extendrect=True, spacing='proportional', format=ticker.FormatStrFormatter('%.1f'))
cbar.set_label('Incline [deg]', rotation=270, labelpad=15)
cbar.ax.axhline(max(I_1d), color='black', linewidth=1.2, linestyle='--')
cbar.ax.axhline(min(I_1d), color='black', linewidth=1.2, linestyle='--')
ax.set_xticks([])
ax.set_yticks([])
ax.set_zticks([])
ax.xaxis.line.set_color((0.0, 0.0, 0.0, 0.0))
ax.yaxis.line.set_color((0.0, 0.0, 0.0, 0.0))
ax.zaxis.line.set_color((0.0, 0.0, 0.0, 0.0))
ax.xaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.yaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
ax.zaxis.set_pane_color((1.0, 1.0, 1.0, 0.0))
plt.tight_layout()
plt.show()
Finally, I found where the problem is. There is a sendBackgroundApiCommand() implementation in public abstract class AbstractEslClientHandler extends SimpleChannelInboundHandler<EslMessage>. The NPE happens when thenComposeAsync() is called, and the reason is that callbackExecutor is not initialized.
public CompletableFuture<EslEvent> sendBackgroundApiCommand(Channel channel, final String command) {
    return sendApiSingleLineCommand(channel, command)
        .thenComposeAsync(result -> {
            // some code here
        }, callbackExecutor);
}
I added initialization in a child class as below, and the problem was gone.
class InboundClientHandler extends AbstractEslClientHandler {
// ...
public InboundClientHandler(String password, IEslProtocolListener listener) {
this.password = password;
this.listener = listener;
this.callbackExecutor = Executors.newSingleThreadExecutor();
}
// ...
}
I copied and pasted your code and it gives me this warning.
import math
import pygame

class MarbelSprite(pygame.sprite.Sprite):
    def __init__(self, x, ground, diameter, velocity, filename):
        pygame.sprite.Sprite.__init__(self)
        try:
            self.image = pygame.transform.smoothscale(pygame.image.load(filename).convert_alpha(), (diameter, diameter))
        except:
            self.image = pygame.Surface((diameter, diameter), pygame.SRCALPHA)
            pygame.draw.circle(self.image, (255, 128, 0), (diameter // 2, diameter // 2), diameter // 2)
        self.original_image = self.image
        self.rect = self.image.get_rect(midbottom = (x, ground))
        self.diameter = diameter
        self.x = x
        self.velocity = velocity
        self.move_x = 0
        self.follow = None
        self.angle = 0

    def update(self, time, restriction):
        move_x = 0
        prev_x = self.x
        if self.move_x != 0:
            move_x = self.move_x * self.velocity * time
        elif self.follow:
            dx = self.follow.rect.centerx - self.x
            move_x = (-1 if dx < 0 else 1) * min(self.velocity * time, abs(dx))
        self.x += move_x
        self.x = max(restriction.left + self.diameter // 2, min(restriction.right - self.diameter // 2, self.x))
        self.rect.centerx = round(self.x)
        self.angle -= (self.x - prev_x) / self.diameter * 180 / math.pi
        self.image = pygame.transform.rotate(self.original_image, self.angle)
        self.rect = self.image.get_rect(center = self.rect.center)

pygame.init()
window = pygame.display.set_mode((500, 300))
clock = pygame.time.Clock()
ground_level = 220
object = MarbelSprite(window.get_rect().centerx, ground_level, 100, 0.4, 'BaskteBall64.png')
follower = MarbelSprite(window.get_width() // 4, ground_level, 50, 0.2, 'TennisBall64.png')
all_sprites = pygame.sprite.Group([object, follower])

run = True
while run:
    time = clock.tick(60)
    for events in pygame.event.get():
        if events.type == pygame.QUIT:
            run = False
    keys = pygame.key.get_pressed()
    object.move_x = keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]
    follower.follow = object
    all_sprites.update(time, window.get_rect())
    window.fill((32, 64, 224))
    pygame.draw.rect(window, (80, 64, 64), (0, ground_level, window.get_width(), window.get_height()-ground_level))
    all_sprites.draw(window)
    pygame.display.flip()

pygame.quit()
exit()
How does this happen??