I suggest using the Qt::mightBeRichText function.
As the official Qt documentation states:
Returns true if the string text is likely to be rich text; otherwise returns false.
Although it can only detect whether the string is likely to be rich text, in my opinion it is an accurate enough approach.
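For reference, a minimal C++ sketch of how it is typically called (assuming Qt 5/6, where the function is declared in the QTextDocument header; the label handling is just an illustration):
#include <QLabel>
#include <QTextDocument> // declares Qt::mightBeRichText

void setSmartText(QLabel *label, const QString &text)
{
    // Heuristic: only switch to rich-text rendering if the string looks like rich text.
    label->setTextFormat(Qt::mightBeRichText(text) ? Qt::RichText : Qt::PlainText);
    label->setText(text);
}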
I cannot color your cells, but we can at least beautify them. You can directly use the HTML tags <fieldset>, <legend>, <center> and <b> to make text boxes with their own special little headings that are bold, with centered text inside as well. It looks very nice, and no running of code is needed. Unfortunately, no colors. Yet.
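As a rough sketch of what that markup can look like (the heading and body text are placeholders):
<fieldset>
  <legend><b>Section heading</b></legend>
  <center>Some centered text inside the box.</center>
</fieldset>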
I'm not aware of any method to do this automatically, and given that AMIs don't have a Kubernetes attribute, the only way is either via tags (if you're using custom AMIs) and/or by name. For example, you could extend your AMI selector like so (a tag-based variant follows the name example):
amiSelectorTerms:
- name: "amazon-eks-node-1.30-*"
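Or, if your custom AMIs are tagged, something along these lines (the tag keys and values here are only an illustration):
amiSelectorTerms:
  - tags:
      team: my-team
      environment: production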
Set the version of Nuxt SEO to 2.1.1 for now; they haven't even updated the documentation.
Pure logic (sketched below):
First: get only the 'day' part of the date command's output and save it.
Then: increment it by one, check for overflow (e.g. February into March) and apply an 'overflow fix' if needed.
Finally: put the new value back into the date format.
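As a rough sketch with GNU date (assuming a GNU/Linux userland; BSD/macOS date uses different flags), you can even let date itself apply the overflow fix instead of patching the day field by hand:
# today's day-of-month
day=$(date +%d)
# add one day; date handles month/year rollover (e.g. Feb 28 -> Mar 1) automatically
tomorrow=$(date -d "+1 day" +%Y-%m-%d)
echo "day today: $day, tomorrow: $tomorrow"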
Can you please specify what you mean by generating your Swagger doc using Javadocs instead of Swagger annotations?
Currently only Docker Hub is supported as a registry mirror, to my knowledge, as I don't see any mention of it in the ECR documentation. I have found a feature request for what you're asking for.
So, just in case anyone else comes here like me: we used to have a profiler in Android Studio under
View > Tool Window > Profiler
and it would show you all the activities as you move throughout the app. In Meerkat, it's a bit more hidden, but still there:
Run the app in the emulator
Open the profiler
Select Find CPU Hotspots
Click the Start anyway button
Use your app...
Then you will get a recording of the activities used.
If you're still facing this error on the browser side, you have to specify type="module" on the script tag:
<script type="module" src="main.js"></script>
You are getting this error because you haven't installed Pandas. While you might have Pandas installed in Anaconda (which is why it works there), to work in your IDE you'll need to install Pandas through pip install pandas or similar.
Check Hibernate SQL Output
Enable SQL query logging to see what queries Hibernate is executing:
Add this to application.properties:
spring.jpa.show-sql=true
spring.jpa.properties.hibernate.format_sql=true
Check the console logs when making the GET request. If Hibernate is querying player_stats instead of "Player_Stats", it means case sensitivity is the problem.
Ensure Entity Scanning Is Enabled
This ensures Spring Boot recognizes Player as a valid entity.
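If the Player entity lives in a package that Spring Boot doesn't scan by default, here is a sketch of what explicit entity scanning could look like (the package and application class names are assumptions, not taken from your project):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.domain.EntityScan;

@SpringBootApplication
@EntityScan(basePackages = "com.example.stats.model") // package that contains the Player entity
public class StatsApplication {
    public static void main(String[] args) {
        SpringApplication.run(StatsApplication.class, args);
    }
}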
Try adding @Service on PlayerService and check again.
PostgreSQL treats unquoted table names as lowercase by default. Your entity is mapped to "Player_Stats" (with mixed case), but Spring Boot, by default, generates queries using lowercase (player_stats).
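One common fix is to quote the identifier in the JPA mapping so Hibernate generates the mixed-case name exactly; a minimal sketch, assuming your entity class is called Player (jakarta.persistence for Spring Boot 3, javax.persistence for older versions):
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import jakarta.persistence.Table;

@Entity
@Table(name = "\"Player_Stats\"") // escaped quotes make PostgreSQL keep the mixed-case name
public class Player {
    @Id
    private Long id;
    // ... remaining columns
}
Alternatively, renaming the table to all-lowercase player_stats on the PostgreSQL side avoids the quoting issue entirely.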
try : " from moviepy import VideoFileClip" instead of "from moviepy.editor import VideoFileClip". It works.
According to the OP in the comments:
The macs I was working on had python 2.7 installed; I wrote the program on a windows machine using python 3.7. To solve the problem, I installed python3, used
pip3 install pandas numpy (since pip by itself was installing the modules to 2.7), and then pip3 install xlrd.
You have 2 options to copy tables from one storage account to another.
Use Azure storage explorer - https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows
Build Azure data factory pipelines
If you are doing this just once, Azure Storage Explorer is best, but use ADF pipelines if this is a repetitive exercise.
No extra work needed. As long as reCAPTCHA is enabled and properly configured in the admin panel, Magento automatically takes care of validating the response before processing the registration. If the reCAPTCHA fails, the form submission is rejected.
I had a similar issue when trying to use a grid loop in my product gallery, I added the following script to html widget above my grid loop:
<script>
jQuery(document).ready(function($) {
// Add a common slideshow ID to all lightbox images
$('[data-elementor-open-lightbox="yes"]').attr('data-elementor-lightbox-slideshow', 'products-gallery');
});
</script>
It only seemed to work if I had a gallery widget on the page, so I just left an empty gallery widget for now. I haven't had the chance to discover why, but if anyone knows, let me know! I'll take a look when I get a chance.
Hope this helps someone
If you haven't already, you need to install Pandas, through pip install pandas.
While drafting this question, I played around on codepen and found this solution to fix the rendering for Chromium browsers by using two separate filters:
img {
filter: drop-shadow(1px 1px 0 black)
drop-shadow(0px 1px 0 black)
drop-shadow(1px -1px 0 black)
drop-shadow(-1px 0px 0 black);
}
@-moz-document url-prefix() {
img {
filter: drop-shadow(1px 1px 0 black)
drop-shadow(-1px 1px 0 black)
drop-shadow(1px -1px 0 black)
drop-shadow(-1px -1px 0 black);
}
}
Firefox:
Chrome:
However, I still don't understand why this rendering difference occurs, outside of the obvious fact that they implement SVG rendering differently.
Try running python -m ensurepip, which should install pip correctly.
For me the easiest is to press Ctrl + F5 (Linux/Windows) or Cmd + Shift + R (Mac).
As a developer sometimes I want to check speed and optimization, so disabling Browser caching doesn't work for me.
There's a good chance the data is outdated (in my experience those reports aren't updated very frequently, especially if the URLs aren't regularly crawled). I can't see any noindex directive via my browser (server-side HTML, rendered HTML, or HTTP headers) so this is probably the case assuming your website isn't serving differently depending on user-agent/IP.
As Tony suggested, try inspecting an example URL in GSC and see if it reports the noindex in a live test. If not, validating the issue in GSC should "fix" it although it might take some time for all of them to be re-crawled and indexed.
Failing that, you'll probably have better luck asking in the GSC Help Community.
This was a bug and it has since been fixed. If you upgrade Pandas through pip install --upgrade pandas, this issue will be solved.
According to the OP, version 0.24.2 or later is necessary to avoid this issue. So updating Pandas, through pip install --upgrade pandas or similar will solve this.
For me the solution was to use path.join. Change:
entities: [__dirname + '/../**/*.entity{.ts}'],
to:
entities: [path.join(__dirname, '../**/*.entity.ts')],
Nuxt merges body classes by default. To change this behaviour, use the tagDuplicateStrategy key:
useHead({
bodyAttrs: {
tagDuplicateStrategy: 'replace',
class: `page--${data.bodyClass}`,
},
})
Read more: https://unhead.unjs.io/docs/guides/handling-duplicates#tagduplicatestrategy
I had the same issue in Joomla; I removed this line from PHPMailer:
if (!$result) {
throw new Exception($this->lang('instantiate'), self::STOP_CRITICAL);
}
Atlassian Support here! The idea with code insights reports is that automated systems such as SonarQube and similar tools are able to create reports via a set of REST API endpoints after an analysis of a commit's code has been performed:
While many solutions have out-of-the-box integrations that allow them to pretty seamlessly create their reports, it's of course possible to develop a form of automation that creates your own custom reports. In case you don't already have it handy, here's a link to a tutorial on how you'd be able to do that:
I also had this problem. One simple fix I found was after creating and activating the virtual environment I can enter:
conda update python
It does not change any packages and recognizes that python is up-to-date. Afterwards, it then correctly points to the conda environment version of python rather than /usr/local/bin/python.
So I had a coffee and found a solution. This will create the array of values dependent on x and y.
import numpy as np
x = 5
y = 100
inputArray = np.zeros(x)
squares = [list(inputArray+i) for i in range(y)]
values = ','.join([f"{j}" for j in squares])
The Simple Plugin Example and Consumer API OIDC Example both use PKCE.
However, the former has handling in it to function nicely as an educational example for plugins (hence the name), whereas the latter does not have functionality in it to be a proper plugin (and instead is useful more for diving deeper into using the Consumer API, similar to what the Data Aggregators do with the Consumer API).
I think it may be that clipboard hijacking or malware has tampered with the address content on your or your friend's device. In addition, it may be a bug in the Telegram client that causes the display error. Another possibility is that an error occurred during the copy-and-paste process, causing the address to be displayed incorrectly.
You need to check whether the device has malware or whether there is a cache problem in the Telegram app. It is recommended that you and your friend reinstall Telegram, update to the latest version, and clear the cache.
From my side, I had been trying to get a SIM7080G to connect to AWS IoT Core, but it was really useless because my firmware version did not support the MQTT(S) setup that AWS IoT Core requires to connect with your device.
Solution 1 is to try to update the firmware of your SIM7080G to the latest version that supports MQTT(S).
Solution 2 is what I followed: create an EC2 instance that hosts Mosquitto and a script to bridge the two layers, the device and AWS IoT Core (a sketch of the bridge config is below).
Device sends MQTT 1883 -> EC2 Mosquitto forwards with MQTTS 8883 -> AWS IoT Core receives
AWS IoT Core sends MQTTS 8883 -> EC2 Mosquitto forwards with MQTT 1883 -> Device receives on MQTT 1883
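For reference, a rough sketch of the Mosquitto bridge configuration on the EC2 side (the endpoint, certificate paths and topic are placeholders; adjust them to your AWS IoT Core endpoint and certificates):
# /etc/mosquitto/conf.d/aws-bridge.conf
connection aws-iot-bridge
address your-endpoint-ats.iot.eu-west-1.amazonaws.com:8883
bridge_protocol_version mqttv311
try_private false
notifications false
cleansession true
bridge_cafile /etc/mosquitto/certs/AmazonRootCA1.pem
bridge_certfile /etc/mosquitto/certs/device.pem.crt
bridge_keyfile /etc/mosquitto/certs/private.pem.key
# forward this topic in both directions between the local broker and AWS IoT Core
topic devices/# both 1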
This can be fixed by updating Pandas with pip install --upgrade pandas.
If, when you attempt this with an updated Pandas version, you get ImportError: cannot import name 'find_stack_level' from 'pandas.util._exceptions', then use pip install pandas==1.3.2 to avoid that error as well.
Since Python 3.6, the standard library includes a module called secrets. Your line of script will try to look for IEX_CLOUD_API_TOKEN in there, and it definitely won't find it.
Consider renaming your file to something else, like my_secrets.py; then the following import should work:
from my_secrets import IEX_CLOUD_API_TOKEN
I'm Alex, the Developer Relations Program Manager for Apigee.
For your question, it might be beneficial to ask it in the Apigee forum where our experts can provide more tailored assistance. You can find it here: https://www.googlecloudcommunity.com/gc/Apigee/bd-p/cloud-apigee
In case you're interested, we're also hosting a session tomorrow, March 13th at 4:00 PM CET (11:00 AM EDT) on the Apigee Migration Assessment tool, feel free to register here: https://rsvp.withgoogle.com/events/apigee-emea-office-hours-2024
This is a UI bug on the App Engine side and should not be a concern since no versions are actually affected.
This happens when you:
Recently deleted the default version.
Recently upgraded from the legacy runtime version.
The server is still reading stale internal information, which causes a difference in the UI and renders the "End of support runtimes" warning message; it should be automatically refreshed after a week.
Since this was observed 10 days ago, is it still showing the error? If so, you can file an issue against the App Engine team so they can provide detailed troubleshooting.
It could also be that you are running it from the Downloads folder as opposed to having moved it to Applications (if you are on a Mac).
The payment amount has been entered incorrectly. In Solidity, decimals are not supported, so the value must be given in wei: 0.005 ETH is 0.005 * 10^18 = 5,000,000,000,000,000 wei. Entering that amount is the appropriate solution.
Similar to the answers above, you need a heatmap pivot table — but, to get the days of the week sorted in the correct order, you do not use the Day of Week dimension. ("Day of Week" is a text-formatted dimension and therefore will always sort alphabetically.)
What you want to use as your column is a formatted Date dimension, which is formatted as datetime. (For Google Ads, use "Day".) Select "Date" as your base dimension, then go into the dimension details and change the following:
Data Type: Date & Time > Day of Week
Display Format: Day Name (or "Day Name abbreviated" for Sun, Mon, Tues, etc.)
Then sort your chart's columns by Date > Ascending.
This should give you the days of the week in chronological order, starting with Sunday.
Found the root cause.
Redisson was adding extra characters while encoding: https://redisson.pro/docs/data-and-services/data-serialization/
Using the plain-text codec works:
config.setCodec(new StringCodec());
config.useSingleServer()
.setAddress(redisURL)
.setConnectionPoolSize(30);
RedissonClient redisson = Redisson.create(config);
At least for your HTML, you need to have a . in your CSS link. The correct link would look like this:
<link rel="stylesheet" href="./styles/main_style.css">
I'm unsure about the image, but I'd assume that you'd also need a . in the CSS file's link to the image.
I'm Alex, the Developer Relations Program Manager for Apigee.
I realize this is an older thread, but for those still interested in Apigee migration, we're hosting a session tomorrow, March 13th at 4:00 PM CET (11:00 AM EDT) on the Apigee Migration Assessment tool. An expert will be providing insights and answering questions. Register here: https://rsvp.withgoogle.com/events/apigee-emea-office-hours-2024
Add quotes, as Emad Kerhily already said.
The code below, with reference to the above, shows a better quality image:
Row.Builder().setImage(getIcon(type), Row.IMAGE_TYPE_LARGE)
@mk12, using your above script, I first ran the following command in gnome terminal
curl https://www.broadcastify.com/listen/feed/41475
which yielded what appears to be all of the html source code for the page.
Next, I tried adapting your script to the following, which resulted in a blank line followed by the command prompt.
This is what I have:
auth=$(curl -s "https://www.broadcastify.com/listen/feed/41475/" \
| grep webAuth \
| head -n 1 \
| sed 's/^.*"webAuth": "//;s/".*$//')
relay_url=$(curl -s "https://www.broadcastify.com/listen/41475" \
-H "webAuth: $auth" -d 't=14' \
| grep -o 'http://[^"]*')
audio_url=$(curl -s "$relay_url" | cut -d' ' -f5)
echo "$audio_url"
I have tried replacing the feed number with the $1 variable, and it yielded an error in the cli.
One thing I did notice was a curious, well, what seems to me to be a variable portion of an audio stream link:
link = "https://audio.broadcastify.com/" + a.id + ".mp3";
can you or anyone else shed some light on the "a.id" portion of the link?
Thank you
This error can be avoided by upgrading Pandas, using pip install --upgrade pandas.
Update 2025:
If you encounter the error "Possible unrecognized qualifier, searching for this term literally", then you need to use path: instead of filename:
E.g.:
your_search_term -path:package-lock.json
Did you find a solution for this?
This was solved by upgrading Pandas through pip install --upgrade pandas
Enclose the variable in quotes so that the substituted value is recognized as a string and not as a number:
{
"TicketTitle": "Main Title",
"DateCreated": "{{timestamp}}",
"CreatedBy": "John Doe"
}
Intuition: we know that for a given network N(G(V,E),s,t,c) and a maximum flow f, every minimum cut in N is saturated under f.
As for our problem, we can infer that the cut ({s}, V-{s}) is not necessarily a minimum cut: what if we create a network in which the edges adjacent to s are much larger than those of a minimum cut?
Try to think of a counterexample satisfying this condition.
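For instance (one possible construction, not necessarily the intended one): take V = {s, a, t} with c(s, a) = 10 and c(a, t) = 1. The maximum flow has value 1, so f(s, a) = 1 < 10 = c(s, a); the only edge leaving s is not saturated even though f is maximum. Here the cut ({s}, V - {s}) has capacity 10, while the minimum cut ({s, a}, {t}) has capacity 1.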
This can be solved by upgrading to the latest version of Pandas, through pip install --upgrade pandas.
I just ran into the same problem. This works:
@get:io.micronaut.data.annotation.Transient
val batchId: String?
get() = batchIdUUID?.toString() ?: batchIdString
This is not an error. This just means that you already have the latest version of pip.
Did you check every option in the SES configuration set?
https://docs.aws.amazon.com/ses/latest/dg/using-configuration-sets.html
Is the identity that you are using attached to the configuration?
Try running python3 -m pip install --upgrade pip and then run pip3 -V or pip3.7 -V to check your version. This will make sure that you are upgrading, and checking the version of, the pip that actually matches the Python version you are using (3.7 in this case).
Resolved "Operand type clash: int is incompatible with date" by using sql CONVERT function.
convert(datetime,@{variables('date_value')});
I ran "ping finance.yahoo.com" and received 100% packet loss. I realized my organization's Wi-Fi was blocking my requests.
Try running python -m pip install --upgrade pip or python -m pip install --upgrade pip --no-cache-dir to upgrade pip.
If that doesn't solve your problem, try also running pip install --upgrade setuptools to upgrade setuptools.
There isn't much you can do about this now. You'll need to install all the packages again for the new Python version. For the future, you can follow the instructions at Keeping Installed Packages When Updating Python to avoid this happening again.
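If the previous interpreter is still available, here is a rough sketch of carrying the package list over (assuming pip works for both versions; the python3.11/python3.12 names are placeholders for your old and new versions):
# with the old Python version, dump the installed packages
python3.11 -m pip freeze > requirements.txt
# with the new Python version, reinstall them
python3.12 -m pip install -r requirements.txt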
It looked like a small mistake and it was. Make sure you import the correct `select` function.
Wrong
from sqlalchemy import select
Correct
from sqlmodel import select
For future visitors who need to persist the session in Playwright after a manual Google login and avoid the block:
Couldn't sign you in. For your protection, you can't sign in from this device. Try again later, or sign in from another device.
I'm sharing this solution, inspired by the answers from jwallet and Jose A, as neither of their solutions worked for me on its own as of today.
This code allows you to manually log in to Google and save the session state for reuse in future executions. It uses Playwright Extra and a random User-Agent to minimize blocks.
import { createRequire } from 'module';
const require = createRequire(import.meta.url);
const UserAgent = require('user-agents');
import { chromium } from 'playwright-extra';
const optionsBrowser = {
headless: false,
args: [
'--disable-blink-features=AutomationControlled',
'--no-sandbox',
'--disable-web-security',
'--disable-infobars',
'--disable-extensions',
'--start-maximized',
'--window-size=1280,720',
],
};
const browser = await chromium.launch(optionsBrowser);
const optionsContext = {
userAgent: new UserAgent([/Chrome/i, { deviceCategory: 'desktop' }]).userAgent,
locale: 'en-US',
viewport: { width: 1280, height: 720 },
deviceScaleFactor: 1,
};
const context = await browser.newContext(optionsContext);
const page = await context.newPage();
await page.goto('https://accounts.google.com/');
// Give 90 seconds to complete the manual login.
const waitTime = 90_000;
console.log(`You have ${waitTime / 1000} seconds to complete the manual login...`);
await page.waitForTimeout(waitTime);
// Save the session
await page.context().storageState({ path: 'myGoogleAuth.json' });
console.log('Session saved in myGoogleAuth.json');
await browser.close();
Once the session is saved, you can reuse it by loading the myGoogleAuth.json file in future executions:
const context = await browser.newContext({ storageState: 'myGoogleAuth.json' });
This prevents having to log in every time you run the script.
If the session does not persist, check if Google has invalidated the cookies and try logging in again to generate a new storageState.
I had to add "general.useragent.override.localhost" in about:config and set it to what the Flash file wanted: GameVerStage/9.9.9
I swapped the JSON payload with a different one from https://jsonplaceholder.typicode.com/todos and modified the code to work with the different payload and it worked. I can only conclude that the initial payload is buggy.
I had this error and I eventually found the problem was due to using spaces to separate the columns in the VCF data. Tabs are actually required by the spec and hence implementations for reading VCF.
I was finally able to fix it by adjusting the routes of the livewire component.
No help - but I just got this issue too.
Make sure your project is set to use C++17:
Right-click your project in Solution Explorer.
Go to Properties → C/C++ → Language.
Set C++ Language Standard to ISO C++17 Standard (/std:c++17) or later.
If you want to both put the current line and update the diff, you can combine them like so
:.diffput|diffupdate
Please delete Node.js and Next.js from your PC. Thanks.
If you have a dump file, follow the instructions for a backup restore here.
Session::forget('value')
but it didn't delete the value.
However, when I tried using the save method like so:
Session::forget('value')
Session::save()
it worked! (I.e. the value was deleted from the session.)
Please - what am I doing wrong? I don't see the save method in the Laravel documentation when using Session::flush() and Session::forget().
$db = db_connect();
$builder = $db->table($this->table)->select('id,CONCAT(nombre,\' \',ap,\' \',am) as nombre,email,dependencia,area');
$resultado = $builder->get();
The trick is to escape the quotes in order to insert the spaces; this is a solution when you use the helpers in CodeIgniter:
\'espacio\' = mario gonzalez gonzalez
I found this fix:
// Read Text test - Read the text from a txt file and add create a new txt file
// with the original txt file data
public void ReadAndWriteTextTest()
{
// Get the root path
var webRoot = _env.WebRootPath;
// Text with this txt file which is a copy of the MS Word document
// what_we_do_and_how.docm
var inputfile = System.IO.Path.Combine(webRoot, "What_we_do_and_how ENCODED.txt");
// The txt file name to be created.
var outputfile = System.IO.Path.Combine(webRoot, "A6CWrite.txt");
// RC@202503100000 - Get the 1252 specific code page to use with Encoding.RegisterProvider
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);
// Read the input txt file data
string inputdata = System.IO.File.ReadAllText(inputfile, System.Text.Encoding.GetEncoding(1252));
// Create and write to the output txt file
System.IO.File.WriteAllText(outputfile, inputdata, System.Text.Encoding.GetEncoding(1252));
}
The output file now shows the correct encoding.
I've experienced some peculiar performance issues.
It's a bit like MSSQL-Gate compared to "Diesel-Gate"
What I've experienced is that when memory usage by other processes increases to more than 80%, MSSQL starts to consume CPU without even being used. It looks like it starts to "touch" all its memory blocks to keep them loaded in RAM. This degrades all other running processes. The behavior stops as soon as the memory usage gets lower.
I was looking at the wrong template; it was our custom filter that did not use the richtext filter. Resolved now.
I've tried everything and in the end my problem was that my emulator was not connected to the internet, so make sure to check that! I followed the instructions here to solve the problem:
If you are on a Mac, try this: go to the Apple icon -> System Preferences -> Network, click on the gear icon and select 'Set Service Order', bring the active interface before the other interfaces, then restart the Android emulator.
$url_corrigida = str_replace("/", "%2F", $url_original);
Now a new problem in this context: when I drop folders onto the drop zone, the WebFrame is unloaded after a short period (usually after 1-3 seconds), resulting in a white window. I have global listeners for all drag events, and all of them call preventDefault.
Here are a few examples:
useEffect(() => {
const preventDefaults = (e) => {
e.preventDefault();
};
const dropzone = document.getElementById("select_folder_btn");
["dragenter", "dragstart", "dragend", "dragleave", "dragover", "drag", "drop"].forEach((eventName) => {
dropzone.addEventListener(eventName, preventDefaults);
window.addEventListener(eventName, preventDefaults);
});
});
function handleDropAsync(event: React.DragEvent) {
const droppedItemsPaths = window.electronAPI.getFilePaths(event.dataTransfer.files);
console.log("droppedItems: ", droppedItemsPaths);
Promise.all(
droppedItemsPaths.map((path) =>
window.electronAPI.isDirectory(path).then((isDir) => {
return isDir ? Promise.resolve(path) : window.electronAPI.dirname(path);
})
)
).then((directories) => {
directories = directories.filter((value, index, self) => self.indexOf(value) === index);
if (directories.length > 0) window.electronAPI.invoke("start-analysis", directories);
});
}
The following error is produced by Electron:
Error sending from webFrameMain: Error: Render frame was disposed before WebFrameMain could be accessed
Does anyone have an Electron/React solution to prevent the window from unloading when a drop event occurs?
I deleted the C:\flutter folder and reinstalled Flutter; that worked for me.
Nowadays I use Playback.
It captures func args, local variables and more, with just #>> macro added in the right place. Also includes function replay.
The values that you trace are placed into Portal, where even large data structures can be easily browsed and reused.
The suppress-ts-errors package does just that. In your TypeScript project directory do:
npx suppress-ts-errors
I have the same problem, but nobody has answered yet.
I tried to reproduce the high latency you report with my own Cloud Run Service. Here is my very simple service code:
const express = require('express');
const {PubSub} = require('@google-cloud/pubsub');
const app = express();
const pubsub = new PubSub();
const topic_name = // ... my topic name
const topic = pubsub.topic(topic_name);
app.post('/', (req, res) => {
const data = JSON.stringify(Math.random());
const message = {
data: Buffer.from(data),
};
console.log('Publish %s start', data);
topic.publishMessage(message)
.then(() => {
console.log('Publish %s done', data);
res.status(200).end();
})
.catch(e => {
console.log('Publish %s failed', data);
res.status(500).end();
});
});
const port = parseInt(process.env.PORT) || 8080;
app.listen(port, () => {
console.log(`cloud-run-service: listening on port ${port}`);
});
What I observe is that the first request to the Cloud Run Service incurs high latency, but subsequent publishes are faster. For the first request, between the "Publish Start" and "Publish Done" logs is ~400ms. When I continue to POST to my Cloud Run Service (at a very slow, 1 request per minute), the subsequent publishes all complete much faster (~50ms).
This is still very low throughput for Pub/Sub and the advice from [1] still applies:
> Pub/Sub is designed for low-latency, high-throughput delivery. If the topic has low throughput, the resources associated with the topic could take longer to initialize.
But the publish latency for subsequent publish requests is much better than the "Cold Start" latency for the Cloud Run Instance / Publisher object.
With regards to your question:
> I have read that pubsub performs poorly under low throughput, but is there a way to make it behave better?
Pub/Sub is optimized for high throughput, but even the very low QPS of my test (1 request per minute) was able to achieve 50ms latencies.
You can get lower latencies by publishing consistently, but it is a latency/cost tradeoff. If you consistently publish "heartbeat" messages to your topic to keep the Cloud Run Instance and Pub/Sub resources "warm", you will get lower single request latencies when you send a real publish request.
You can do this without having to handle those additional meaningless "heartbeat" messages at your subscriber by using filters with your subscription [2] . If you publish messages with an attribute indicating it is a "heartbeat" message, you can create a subscription which filters out the message before it reaches your subscriber. Your single request publish latency from your Cloud Run Service should be consistently lower, but you would have to pay for the extra publish traffic and filtered out "heartbeat" messages [3].
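As a sketch of that setup (the subscription and topic names are placeholders), such a filtered subscription could be created with:
gcloud pubsub subscriptions create my-sub \
  --topic=my-topic \
  --message-filter='NOT attributes:heartbeat'
The keep-warm publishes would then set a heartbeat attribute so only the real messages reach your subscriber.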
[1] https://cloud.google.com/pubsub/docs/topic-troubleshooting#high_publish_latency
[2] https://cloud.google.com/pubsub/docs/subscription-message-filter
[3] https://cloud.google.com/pubsub/pricing#pubsub
You can also use a pivot table.
It's incredible that this doesn't work; it is the basis of the generics principle.
I have a generic class <S, T>, where S is the request body and T is the response body of a WebClient used in reactive mode: it gets a new Bearer token (or returns the latest token if it hasn't expired) and calls the server with that token, without using .block*() methods, in non-blocking mode.
In my class, Client<S, T>, I need a method that returns a response body of type T and takes a request body parameter of type S, but using T as below doesn't work:
public Mono<T> getResponse( @NotBlank String logHash,
@NotBlank String url,
@NotNull @NotBlank @NotEmpty S requestBody,
@NotNull @NotBlank @NotEmpty Class<T> responseBodyClass) {
return getToken()
.flatMap(token ->
webClient
.post()
.uri(url)
.header("Authorization", "Bearer " + token)
.body(BodyInserters.fromValue(requestBody))
.retrieve()
.onStatus(HttpStatus.INTERNAL_SERVER_ERROR::equals, response -> response.bodyToMono(String.class).map(Exception::new))
.onStatus(HttpStatus.BAD_REQUEST::equals, response -> response.bodyToMono(String.class).map(Exception::new))
.bodyToMono(new ParameterizedTypeReference<T>() {})
.doOnSuccess(response -> log.debug("{} full url {} response {}", logHash, this.serverUrl + url, response))
.doOnError(ex -> {
log.error("{} {}", logHash, ex.getMessage());
ex.printStackTrace();
})
)
.doOnSuccess(token -> log.debug("{} url {} response {}", logHash, this.serverUrl + url, token))
.doOnError(ex -> {
log.error("{} {}", logHash, ex.getMessage());
ex.printStackTrace();
});
I need to substitute this line in the previous code (it's incredible, because the type must be part of this bean, which is customized dynamically by Spring Framework AOP; I think a solution for Spring AOP would be to add a final Class<T> field and, at runtime, substitute T with the correct class passed in the constructor via the @Autowired annotation as a private final value, not only verify the class type on return):
.bodyToMono(new ParameterizedTypeReference<T>() {})
with the responseBody class type, which is the .class of my response body, to avoid the exception, because the JVM can't find the T type for the return body object:
.bodyToMono(responseBody)
What kind of generic-type implementation is this?
I autowire the class and I pass the type when autowiring; how is it possible that it can't find the type?
new ParameterizedTypeReference<T>() {}
caller:
private Client<CheckCustomerBody, ApimResponse> client = null;
And in the method that uses the client, I need to pass it when calling it:
client = new Client<CheckCustomerBody, ApimResponse>();
client.configure( WebClient.builder(),
ClientType.XXXX_XX,
channelEnum,
logHash,
Optional.empty(),
Optional.empty(),
Optional.empty()
);
return client
.getResponse( logHash,
WebOrderConstants.API_EXT_CHECK,
request,
ApimResponse.class)
.map(apimResponse -> validationProductChange(
apimResponse.getResponse(),
customer.getClientAccountCode(),
logHash
)
? "OK"
: "KO"
)
.doOnError(ex -> {
log.error("{} END {} : Error using d365 service", logHash, prefix);
ex.printStackTrace();
});
I just ran into this new license concept while playing with latest version of the Firegiant toolset.
The only price I found was 6500/year, which sounds crazy.
I followed directions to generate a license, but response was I needed to contact my organization's administrator to get a license. So that was of zero help.
Has anyone gotten a license price quote?
=TEXT(IF(AND(MINUTE(F2)>=15,MINUTE(F2)<45),HOUR(F2)&":30",
IF(MINUTE(F2)>=45,HOUR(F2)+1&":00",
HOUR(F2)&":00")),
"H:MM AM/PM")
A question: might this approach lead to a VHDL description of the network?
Simulating the dynamics of large networks is going to be computationally expensive, so possibly implementing this in hardware, while maintaining the state-space description, would be useful.
This regex filters out other browsers on iOS, like Chrome, Brave and Firefox, in order to identify Safari on iOS exclusively.
/iP(ad|hone|od)/.test(window?.navigator?.platform) && !/iOS\//.test(userAgent);
The issue was that the brew formula for [email protected] caused an error with dyld. To solve the problem, I built a boost from the source, installed it, and set LDFLAGS to contain the installation path.
On Ubuntu 24.04.2 I tried a few things and eventually uninstalled the snap package and followed https://github.com/neovim/neovim/blob/master/INSTALL.md#ubuntu, everything worked fine after that.
Xcode 16.2, iOS 18.3.1
This works for me:
Turn off all VPN connections
Turn off recording, SSL Proxying and MacOS Proxy at Charles (also will work with Proxyman, I think).
Also I have tried, but didn't work for me:
Disabling and Enabling Developer Mode
sudo chmod a+w /private/var/tmp/
Remove and restore DeveloperDiskImages folder at Library/Developer
restart iPhone many times
I am using this code, and it works very well.
Now, due to an expansion and the use of a multi-currency system, the value currently displayed in euros needs to change: when an order is placed from the USA, the value from the code should be converted to USD, and when the website language is switched to Czech, it should be converted to Czech koruna at a predefined exchange rate.
How can this be achieved? Currently, the value is displayed without being converted based on the exchange rate of the selected currency.
If the value is set to 100 EUR, when switched to USD, it still shows 100 USD, or when switched to Czech, it shows 100 Czech koruna.
let longitud = 2;
let ancho = 4;
let área = longitud * ancho;
let perímetro = 2 * (longitud + ancho);
console.log("El área es: " + área);
console.log("El perímetro es: " + perímetro);
Sorry guys, but I have managed to beat the performance of all your solutions.
First, some details about my computer: CPU Intel(R) Core(TM) i5-4430 CPU @ 3.00GHz, RAM Kingston DDR3 8GiB * 2, OS Windows 11 Pro 23H2 x64, CPython 3.12.6 x64, NumPy 2.0.2, I am using IPython 8.27.0 on Windows Terminal.
I have tried to implement my idea mentioned in my latest edit to the question, and surprisingly I have actually beaten all submitted solutions and even np.unpackbits...
First I will describe the ways I have found to generate sequence of integers with periodic gaps.
Say you want to generate a list of numbers that includes every other group of width numbers. For example, if the width is 5 and we include the first five natural numbers, we get 0, 1, 2, 3, 4; then we skip the next five numbers and append 10, 11, 12, 13, 14 to the list; then we skip another five numbers and append 20, 21, 22, 23, 24...
Each group of included numbers has a width of 5, and they all start at a multiple of 5. I have identified that the included numbers are those whose floor quotient, when divided by 5, is even (its lowest bit is zero).
So if I want every other group of five integers starting at 7 ending at 100 I can do this:
In [35]: nums = np.arange(100)
In [36]: ~((nums - 7) // 5) & 1
Out[36]:
array([1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0,
0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0,
0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1,
1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1])
But this is inefficient (and the first 2 bits should be off). NumPy arrays can be added, so I should instead do this:
In [37]: np.arange(7, 100, 10)[:, None] + np.arange(5)
Out[37]:
array([[ 7, 8, 9, 10, 11],
[ 17, 18, 19, 20, 21],
[ 27, 28, 29, 30, 31],
[ 37, 38, 39, 40, 41],
[ 47, 48, 49, 50, 51],
[ 57, 58, 59, 60, 61],
[ 67, 68, 69, 70, 71],
[ 77, 78, 79, 80, 81],
[ 87, 88, 89, 90, 91],
[ 97, 98, 99, 100, 101]])
But both are inefficient:
In [38]: %timeit column = np.arange(65536, dtype=np.uint16); ~((column - 64) >> 7) & 1
219 μs ± 15.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [39]: %timeit np.arange(64, 65536, 256, dtype=np.uint16)[:, None] + np.arange(128)
61.2 μs ± 2.97 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [40]: %timeit np.tile(np.concatenate([np.zeros(64, dtype=bool), np.ones(64, dtype=bool)]), 512)
17.9 μs ± 662 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
Though as you can see we need to use np.concatenate if the starting position of the tiling isn't a multiple of the tile's length.
I have implemented 5 new functions, binary_codes_8 implements the above idea, though it isn't efficient:
def binary_codes_8(n: int) -> np.ndarray:
dtype = get_dtype(n)
result = np.zeros(((length := 1 << n), n), dtype=bool)
width = 1
for i in range(n - 1, -1, -1):
result[:, i][
np.arange(width, length, (step := width << 1), dtype=dtype)[:, None]
+ np.arange(width)
] = 1
width = step
return result
In [41]: %timeit binary_codes_6(16)
1.14 ms ± 37.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [42]: %timeit binary_codes_8(16)
2.53 ms ± 41.6 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Instead I have a new idea: rather than generating a two-dimensional array, we can just join the columns head to tail into a one-dimensional array, to make memory access contiguous and to avoid a broadcast assignment per iteration.
def binary_codes_9(n: int) -> np.ndarray:
validate(n)
places = 1 << np.arange(n)
return (
np.concatenate(
[
np.tile(
np.concatenate([np.zeros(a, dtype=bool), np.ones(a, dtype=bool)]), b
)
for a, b in zip(places[::-1], places)
]
)
.reshape((n, 1 << n))
.T
)
In [43]: %timeit binary_codes_9(16)
910 μs ± 26.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
As you can see, it actually beats the np.unpackbits based solution.
gray_codes_4 implements the same idea as binary_codes_8, so it is also inefficient:
def gray_codes_4(n: int) -> np.ndarray:
dtype = get_dtype(n)
result = np.zeros(((length := 1 << n), n), dtype=bool)
width = 2
start = 1
for i in range(n - 1, 0, -1):
result[:, i][
np.arange(start, length, (step := width << 1), dtype=dtype)[:, None]
+ np.arange(width)
] = 1
width = step
start <<= 1
result[:, 0][length >> 1 :] = 1
return result
In [44]: %timeit gray_codes_4(16)
2.52 ms ± 161 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
So I tried to implement the idea of binary_codes_9 to gray_codes_5:
def gray_codes_5(n: int) -> np.ndarray:
m = n - 1
half = 1 << m
column = np.arange(1 << n, dtype=get_dtype(n))
offsets = (1 << np.arange(m - 1, -1, -1, dtype=np.uint8)).tolist()
return (
np.concatenate(
[
np.concatenate([np.zeros(half, dtype=bool), np.ones(half, dtype=bool)]),
*(~((column - a) >> b) & 1 for a, b in zip(offsets, range(m, 0, -1))),
]
)
.reshape((n, 1 << n))
.T
)
In [45]: %timeit gray_codes_5(16)
3.67 ms ± 60.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
It is somehow slower, but the idea is sound; it is just that the way each column is generated is inefficient.
So I tried again, and this time it even beats binary_codes_9:
def gray_codes_6(n: int) -> np.ndarray:
validate(n)
if n == 1:
return np.array([(0,), (1,)], dtype=bool)
half = 1 << (n - 1)
places = (1 << np.arange(n - 1, -1, -1)).tolist()
return (
np.concatenate(
[
np.zeros(half, dtype=bool),
np.ones(half, dtype=bool),
*(
np.tile(
np.concatenate(
[
np.zeros(b, dtype=bool),
np.ones(a, dtype=bool),
np.zeros(b, dtype=bool),
]
),
1 << i,
)
for i, (a, b) in enumerate(zip(places, places[1:]))
),
]
)
.reshape((n, 1 << n))
.T
)
In [46]: %timeit gray_codes_6(16)
759 μs ± 19.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
This is the benchmark of all the functions on my IPython interpreter:
In [7]: %timeit binary_codes_0(16)
1.62 ms ± 58.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [8]: %timeit binary_codes_1(16)
829 μs ± 9.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [9]: %timeit binary_codes_2(16)
1.9 ms ± 67.8 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [10]: %timeit binary_codes_3(16)
1.55 ms ± 9.63 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [11]: %timeit binary_codes_4(16)
1.66 ms ± 40.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [12]: %timeit binary_codes_5(16)
1.8 ms ± 54.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [13]: %timeit binary_codes_6(16)
1.11 ms ± 22.3 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [14]: %timeit binary_codes_7(16)
7.01 ms ± 46.5 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [15]: %timeit binary_codes_8(16)
2.5 ms ± 57.8 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [16]: %timeit binary_codes_9(16)
887 μs ± 5.43 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [17]: %timeit gray_codes_0(16)
1.65 ms ± 11.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [18]: %timeit gray_codes_1(16)
1.79 ms ± 9.98 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [19]: %timeit gray_codes_2(16)
3.97 ms ± 28.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [20]: %timeit gray_codes_3(16)
1.9 ms ± 49.6 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [21]: %timeit gray_codes_4(16)
2.38 ms ± 33.4 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: %timeit gray_codes_5(16)
3.95 ms ± 19.2 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [23]: %timeit gray_codes_6(16)
718 μs ± 4.91 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [24]: %timeit JR_gray_codes_1(16)
1.42 ms ± 10.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [25]: %timeit gray_codes_nocomment(16)
1.03 ms ± 4.88 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [26]: %timeit JR_gray_codes_2()
809 μs ± 12.9 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
As you can see, my solutions beat all functions from the other answers.
But how do they scale with larger inputs? To find out, I reused code from my last answer:
import colorsys
import matplotlib.pyplot as plt
import numpy as np
import timeit
from math import ceil
from scipy.interpolate import make_interp_spline
def measure_command(func, *args, **kwargs):
elapsed = timeit.timeit(lambda: func(*args, **kwargs), number=5)
once = elapsed / 5
if elapsed >= 1:
return int(1e9 * once + 0.5)
times = min(1024, ceil(1 / once))
return int(
1e9 * timeit.timeit(lambda: func(*args, **kwargs), number=times) / times + 0.5
)
def test(func):
return [measure_command(func, i) for i in range(1, 21)]
bin6 = binary_codes_9(6)
gra6 = gray_codes_6(6)
execution_times = {}
to_test = [
*((f"binary_codes_{i}", bin6) for i in range(10)),
*((f"gray_codes_{i}", gra6) for i in range(7)),
("JR_gray_codes_1", gra6),
("gray_codes_nocomment", gra6),
]
for func_name, correct in to_test:
func = globals()[func_name]
assert np.array_equal(func(6), correct)
execution_times[func_name] = test(func)
cost_per_item = {
k: [e / (1 << i) for i, e in enumerate(v, start=1)]
for k, v in execution_times.items()
}
largest_execution = sorted(
[(k, v[-1]) for k, v in execution_times.items()], key=lambda x: x[1]
)
average_execution = sorted(
[(k, sum(v) / 20) for k, v in cost_per_item.items()], key=lambda x: x[1]
)
columns = [
sorted([v[i] for v in execution_times.values()], reverse=True) for i in range(20)
]
overall_performance = [
(k, [columns[i].index(e) for i, e in enumerate(v)])
for k, v in execution_times.items()
]
overall_execution = sorted(
[(a, sum(b) / 36) for a, b in overall_performance], key=lambda x: -x[1]
)
In [14]: average_execution
Out[14]:
[('binary_codes_6', 206.31875624656678),
('gray_codes_nocomment', 292.46834869384764),
('binary_codes_5', 326.2715059280396),
('binary_codes_4', 425.4694920539856),
('gray_codes_1', 432.8871788024902),
('gray_codes_4', 440.75454263687135),
('binary_codes_3', 486.1538872718811),
('JR_gray_codes_1', 505.06243762969973),
('gray_codes_0', 518.2342618465424),
('gray_codes_3', 560.9797175884247),
('binary_codes_2', 585.7214835166931),
('binary_codes_8', 593.8069385528564),
('gray_codes_5', 1012.6498884677887),
('gray_codes_6', 1102.2803171157836),
('binary_codes_0', 1102.6035027503967),
('gray_codes_2', 1152.2696633338928),
('binary_codes_1', 1207.228157234192),
('binary_codes_7', 1289.8271428585053),
('binary_codes_9', 1667.4837736606598)]
In [15]: [(a, b / 1e9) for a, b in largest_execution]
Out[15]:
[('gray_codes_6', 0.01664672),
('binary_codes_9', 0.017008553),
('gray_codes_nocomment', 0.0200915),
('binary_codes_1', 0.025319672),
('binary_codes_6', 0.027002585),
('gray_codes_3', 0.038912479),
('binary_codes_2', 0.045456482),
('JR_gray_codes_1', 0.053382224),
('binary_codes_3', 0.054410716),
('binary_codes_4', 0.0555085),
('gray_codes_0', 0.058621065),
('binary_codes_0', 0.0718396),
('binary_codes_5', 0.084661),
('binary_codes_8', 0.085592108),
('gray_codes_1', 0.088250008),
('gray_codes_4', 0.091165908),
('gray_codes_2', 0.093191058),
('binary_codes_7', 0.104509167),
('gray_codes_5', 0.146622829)]
In [16]: overall_execution
Out[16]:
[('binary_codes_6', 9.055555555555555),
('gray_codes_nocomment', 8.88888888888889),
('JR_gray_codes_1', 7.5),
('binary_codes_5', 6.972222222222222),
('binary_codes_4', 6.527777777777778),
('gray_codes_1', 6.0),
('binary_codes_3', 5.916666666666667),
('gray_codes_0', 5.805555555555555),
('gray_codes_3', 4.888888888888889),
('gray_codes_6', 4.555555555555555),
('gray_codes_4', 4.25),
('binary_codes_1', 4.083333333333333),
('binary_codes_2', 4.0),
('binary_codes_9', 3.7222222222222223),
('binary_codes_8', 3.6944444444444446),
('binary_codes_0', 3.361111111111111),
('gray_codes_2', 2.611111111111111),
('gray_codes_5', 2.2777777777777777),
('binary_codes_7', 0.8888888888888888)]
This is puzzling: clearly my new functions gray_codes_6 and binary_codes_9 perform the best for the largest input (20, which means the output will have 1048576 rows), but according to my metrics they somehow score poorly...
Just to sanity check:
In [17]: %timeit binary_codes_1(16)
908 μs ± 24.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [18]: %timeit binary_codes_6(16)
1.12 ms ± 8.18 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [19]: %timeit binary_codes_9(16)
925 μs ± 12.4 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [20]: %timeit binary_codes_1(20)
17.3 ms ± 205 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: %timeit binary_codes_6(20)
28.6 ms ± 233 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [22]: %timeit binary_codes_9(20)
23 ms ± 753 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [23]: %timeit gray_codes_6(16)
854 μs ± 23.5 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [24]: %timeit JR_gray_codes_1(16)
1.69 ms ± 34 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [25]: %timeit gray_codes_nocomment(16)
1.11 ms ± 31 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
In [26]: %timeit gray_codes_6(20)
15.9 ms ± 959 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [27]: %timeit gray_codes_nocomment(20)
20.5 ms ± 640 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [28]: %timeit JR_gray_codes_1(20)
48.5 ms ± 4.15 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Indeed, they performed better for larger inputs.
So I tried to graph the performance. The graphs are kind of busy, with a lot going on, but I saw something:
ymax = -1e309
for v in execution_times.values():
ymax = max(max(v), ymax)
rymax = ceil(ymax)
fig, ax = plt.subplots()
ax.axis([0, 20, 0, ymax])
ax.set_xticks(range(1, 21))
ax.set_xticklabels([f"{1<<i}" for i in range(1, 21)])
for k, v in execution_times.items():
x = np.linspace(1, 20, 256)
plt.plot(x, make_interp_spline(range(1, 21), v)(x), label=k)
plt.legend()
plt.show()
fig, ax = plt.subplots()
ax.axis([0, 20, 0, 19])
ax.set_xticks(range(1, 21))
ax.set_yticks(range(20))
ax.set_xticklabels([f"{1<<i}" for i in range(1, 21)])
for k, v in overall_performance:
plt.plot(range(1, 21), v, label=k)
plt.legend()
plt.show()
My smart functions have flatter curves than the inefficient ones; they are slower for small inputs, but the inefficient ones grow much faster than my smart functions.
In my cost_per_item, in order to get a sensible average of the different execution times for different inputs, I divided the execution time for each input by the corresponding output size, so I get some average number...
But this is wrong: the functions don't scale linearly, and the bigger the input, the harder it is to finish the execution in record time.
And in overall_execution the scoring is also wrong: we care about how a function scales for ever larger inputs, so if a function completes faster only on smaller inputs, that should carry less weight:
In [35]: overall_execution1 = overall_execution = sorted(
...: [(a, sum(c << i for i, c in enumerate(b))) for a, b in overall_performance], key=lambda x: -x[1]
...: )
In [36]: overall_execution1
Out[36]:
[('gray_codes_6', 18462984),
('binary_codes_9', 17880641),
('gray_codes_nocomment', 16642805),
('binary_codes_1', 15627986),
('binary_codes_6', 14995433),
('gray_codes_3', 13092872),
('binary_codes_3', 11226678),
('binary_codes_2', 11189287),
('JR_gray_codes_1', 10861503),
('binary_codes_4', 9194493),
('binary_codes_0', 9184718),
('gray_codes_0', 8130547),
('binary_codes_5', 6373604),
('binary_codes_8', 5019700),
('gray_codes_1', 4305913),
('gray_codes_4', 3413060),
('gray_codes_2', 2567962),
('binary_codes_7', 925733),
('gray_codes_5', 210406)]
In [37]: average_execution1 = sorted(
...: [(k, sum(v) / 20) for k, v in execution_times.items()], key=lambda x: x[1]
...: )
In [38]: [(a, round(b / 1e9, 6)) for a, b in average_execution1]
Out[38]:
[('binary_codes_9', 0.001746),
('gray_codes_6', 0.001789),
('gray_codes_nocomment', 0.001967),
('binary_codes_1', 0.002462),
('binary_codes_6', 0.002567),
('gray_codes_3', 0.003661),
('binary_codes_2', 0.004313),
('binary_codes_3', 0.004558),
('JR_gray_codes_1', 0.004716),
('binary_codes_4', 0.005205),
('gray_codes_0', 0.005425),
('binary_codes_0', 0.005544),
('binary_codes_5', 0.007468),
('binary_codes_8', 0.007723),
('gray_codes_1', 0.007831),
('gray_codes_4', 0.008079),
('gray_codes_2', 0.0086),
('binary_codes_7', 0.009952),
('gray_codes_5', 0.013721)]
According to this new metric, it is undeniable that my binary_codes_9 and gray_codes_6 are much faster than the solutions provided by the other answers.
And I selected the fastest 9 functions to graph as proof:
In the last image, the vertical axis signifies how many functions have been beaten by the function corresponding to the graph.
My functions aren't beaten, but since the code from nocomment beats my original solutions I will accept that answer.
For multi-module Maven projects, specify the module using -pl and execute the Maven command from the root project. For example:
mvn test -pl=service -Dtest=InventoryServiceTest
In my case, I was omitting -pl and was seeing "No tests were executed!".
I am doing Google authentication with Firebase, using a fake token and the Firebase Local Emulator Suite for testing.
I removed this line in the test class:
@get:Rule(order = 1)
val composeRule = createAndroidComposeRule<MainActivity>()
Then my code worked fine, after wasting a week.